CN117762320A - Data migration method and device and related equipment
- Publication number: CN117762320A (application CN202211128342.2A)
- Authority: CN (China)
- Prior art keywords: data, node, memory, metadata, computing
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The application provides a data migration method for improving the efficiency with which a distributed processing system processes data. The method comprises the following steps: in response to completion of the processing of first data in a memory of a first node, determining a second node according to a first processing task, wherein second data is stored in the memory of the second node, and the second data is processed after the first data in the first processing task; and sending computing metadata to the second node, wherein the second node is used for continuing to execute the first processing task according to the computing metadata. In addition, the application also provides a corresponding apparatus, a computing device cluster, a chip, a computer-readable storage medium and a computer program product.
Description
Technical Field
The present disclosure relates to the field of distributed data processing technologies, and in particular, to a data migration method, apparatus, and related devices.
Background
With the development of technology, distributed processing systems are widely used. A distributed processing system includes a plurality of data processing nodes, each of which has a relatively independent processor and memory that can be used for data processing. Thus, upon receiving a data processing task, the distributed processing system may select a relatively idle data processing node to process the data.
When a data processing node processes data, the data to be processed may first be stored in a slow storage medium such as a hard disk, then transferred from the hard disk to the memory, and finally read from the memory by the processor. Because the read-write speed of memory is high, the processor can read the data to be processed from the memory at high speed.
However, if the amount of data to be processed is large, the above data processing method suffers from a slow processing speed.
Disclosure of Invention
In view of this, the present application provides a data migration method for improving the efficiency with which a distributed processing system processes data. The application also provides corresponding apparatus, computing device clusters, chips, computer-readable storage media, and computer program products.
In a first aspect, the present application provides a data migration method, which may be applied to a first node in a distributed processing system, or to an apparatus for data migration in a distributed processing system. Specifically, when the data migration method provided by the application is executed, it may be determined whether first data in the memory of the first node has been processed. The first data is part of the data to be processed corresponding to a first processing task, and is loaded into the memory of the first node before the first processing task is executed. If the first data has been processed, a second node is determined based on the first processing task. The second node is a node whose memory stores second data. The second data is a part of the data to be processed corresponding to the first processing task that is different from the first data, and the first processing task needs to process the second data after processing the first data. After the second node is determined, computing metadata may be sent to the second node. The computing metadata is used to restore and rebuild the first processing task at the second node. Thus, after receiving the computing metadata, the second node restores and rebuilds the first processing task according to the computing metadata, thereby processing the second data stored in the memory of the second node.
In this way, although the data to be processed corresponding to the first processing task is stored partly in the first node and partly in the second node, it is not necessary to transfer all the data to be processed to the first node. Instead, the first data already loaded into the memory of the first node is processed on the first node, and the computing metadata required for continuing to execute the first processing task is transferred to the second node, so that the second node continues to execute the first processing task and processes the second data in its memory. The data to be processed stays in place while the computing power (namely, the computing metadata) is migrated; there is no need to write all the data to be processed into the hard disk of the first node, nor to repeatedly replace the data stored in the memory. Thus, the data processing efficiency of the distributed processing system is improved.
In some possible implementations, the computing metadata includes intermediate data and/or state metadata. The intermediate data is data obtained after the first processing task has processed the first data. The state metadata is used to describe the execution state of the first processing task. In this way, the processing progress of the first processing task can be determined from the state metadata, and based on the intermediate data, processing may continue from that progress. It will be appreciated that some processing tasks may not require intermediate data or state metadata to be restored and rebuilt at the second node, in which case the computing metadata may omit them.
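For illustration only, the following Go sketch shows one possible in-memory layout for such computing metadata; every type and field name here is an assumption chosen for this sketch, not a format defined by this application.

```go
// One possible layout for the computing metadata; all names are assumptions.
package migration

// StateMetadata describes the execution state of the first processing task:
// how far its processes, threads, or instructions have run on the first node.
type StateMetadata struct {
	TaskID       string         // identifier of the first processing task
	ThreadStates map[int]string // per-thread state, e.g. "done", "running"
	ResumeOffset int64          // position in the data to be processed to resume from
}

// ComputingMetadata is what migrates to the second node so that it can
// restore and rebuild the first processing task. Either field may be absent
// for tasks that can be rebuilt without it.
type ComputingMetadata struct {
	State        *StateMetadata // optional: execution state of the task
	Intermediate []byte         // optional: data produced from the first data
}
```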
In some possible implementations, the state metadata may be used to describe the running state of a process, thread, or instruction of the first processing task. That is, the state metadata may describe the execution state of each process of the first processing task on the first node; or the execution state, on the first node, of each thread in a first process of the first processing task; or the execution state, on the first node, of each instruction in a first thread of the first process of the first processing task.
In some possible implementations, if the state metadata is used to describe the execution state of a process of the first processing task, the computing metadata may be obtained through a thread freezing technique. Specifically, a plurality of threads corresponding to the first processing task on the first node may be frozen, and the computing metadata obtained based on the frozen threads. By freezing the threads, the execution state of each of the plurality of threads corresponding to the process can be determined, and thereby the specific execution state of the process. Optionally, the thread freezing technique may be the Checkpoint/Restore In Userspace (CRIU) technique.
In some possible embodiments, the second node may be determined according to the routing correspondence and the second data. Specifically, after it is determined that the first data stored in the memory of the first node has been processed, the second data that the first processing task needs to process next may be determined according to the first data. Then, according to the routing correspondence, the second node whose memory stores the second data can be determined. The routing correspondence records the correspondence between data and nodes; specifically, it indicates that the first data is stored in the memory of the first node and that the second data is stored in the memory of the second node. Determining the second node based on the routing correspondence facilitates migrating computing power to the node where the data that the first processing task needs to process next is located.
In some possible embodiments, the routing correspondence is pre-generated and stored in the storage module 134, so that when the second node is determined, the routing correspondence may be obtained from the storage module 134. Specifically, after the first processing task is received, a first notification message may be sent to the second node, instructing the second node to store the second data in its memory. The routing correspondence may then be generated according to the first notification message and stored in the storage module 134. It may be appreciated that if the data to be processed corresponding to the first processing task further includes third data, a third notification message may also be sent to a third node to instruct the third node to load the third data into its memory. Correspondingly, the routing correspondence generated according to the third notification message also indicates that the third data is stored in the memory of the third node.
In some possible embodiments, the data to be processed may be split according to the size of the memory of the node. Specifically, after the first processing task is received, the size of the data to be processed corresponding to the first processing task may be obtained, and it is then determined whether the data to be processed is larger than the remaining storage space of the memory of the first node. If it is, the data to be processed cannot be loaded into the memory of the first node as a whole; it can instead be split into the first data and the second data, and the first notification message sent to the second node. In this way, before the first processing task is executed, the data to be processed corresponding to the first processing task can be split so that computing power migration can be performed. It will be appreciated that if the data to be processed is larger than the sum of the remaining storage space of the memory of the first node and that of the second node, the data to be processed may be split into three or even more pieces of data.
In some possible implementations, the second node may be notified after the computing metadata is sent to it. For example, to increase the data read-write speed, the computing metadata may be written into the memory of the second node without the second node being aware of it. Then, to continue execution of the first processing task, a second notification message may be sent to the second node so that the second node determines that the computing metadata has been loaded into its memory and restores and rebuilds the first processing task.
In a second aspect, the present application provides a data migration apparatus, the apparatus comprising: a determining module, configured to determine a second node according to a first processing task in response to completion of the processing of first data in a memory of a first node, wherein second data is stored in the memory of the second node, and the second data is processed after the first data in the first processing task; and a sending module, configured to send computing metadata to the second node, where the second node is used to continue executing the first processing task according to the computing metadata.
In some possible embodiments, the computing metadata includes intermediate data and/or state metadata, where the intermediate data is data obtained after the first processing task finishes processing the first data, and the state metadata is used to describe an execution state of the first processing task.
In some possible implementations, the state metadata is used to describe the execution state of each process of the first processing task on the first node, or the execution state of each thread in a first process of the first processing task on the first node, or the execution state of each instruction in a first thread of the first process of the first processing task on the first node.
In some possible implementations, the state metadata is used to describe the execution state of each process of the first processing task on the first node, and the determining module is further configured to freeze a plurality of threads corresponding to the first processing task on the first node, the computing metadata being obtained based on the frozen plurality of threads.
In some possible embodiments, the determining module is specifically configured to determine the second data according to the first processing task and the first data; and determining the second node according to a route corresponding relation, wherein the route corresponding relation is used for indicating that the second data is stored in the memory of the second node.
In some possible implementations, the sending module is further configured to send a first notification message to the second node, where the first notification message is used to instruct the second node to store the second data to a memory of the second node; and generating the route corresponding relation according to the first notification message.
In some possible implementations, the determining module is further configured to obtain a size of data to be processed of the first processing task; and splitting the data to be processed into the first data and the second data in response to the data to be processed being greater than the memory space of the first node.
In some possible implementations, the sending module is further configured to send a second notification message to the second node, where the second notification message is used to instruct the second node to continue to perform the first processing task according to the computing metadata.
In a third aspect, the present application provides a computing device cluster comprising at least one computing device, the at least one computing device comprising at least one processor and at least one memory; the at least one memory is configured to store instructions that the at least one processor executes to cause the computing device cluster to perform the data migration method of the first aspect or any one of its possible implementations. It should be noted that the memory may be integrated into the processor or may be independent of the processor. The at least one computing device may also include a bus, with the processor connected to the memory through the bus. The memory may include read-only memory and random access memory.
In a fourth aspect, the present application provides a chip comprising a memory for storing instructions or program code and a processor for calling and executing the instructions or program code from the memory to perform the method of any of the preceding first aspects.
In a fifth aspect, the present application provides a computer readable storage medium having instructions stored therein which, when run on at least one computing device, cause the at least one computing device to perform the method of the first aspect or any implementation of the first aspect.
In a sixth aspect, the present application provides a computer program product comprising instructions which, when run on at least one computing device, cause the at least one computing device to perform the method of the first aspect or any implementation of the first aspect.
The implementations provided in the above aspects may be further combined to provide still further implementations of the present application.
Drawings
FIG. 1 is a schematic diagram of a node in a distributed processing system according to the present application;
FIG. 2 is a schematic diagram of an exemplary application scenario provided in an embodiment of the present application;
FIG. 3 is a schematic flow chart of a data migration method according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a computing device according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a computing device cluster according to an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described below with reference to the drawings in the present application.
The terms "first", "second" and the like in the description, the claims, and the above figures of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular sequence or chronological order. It is to be understood that such terms are interchangeable under appropriate circumstances and are merely used to distinguish objects of the same nature in describing the embodiments of the application.
A distributed processing system includes a plurality of data processing nodes. Different data processing nodes in the same distributed processing system may be interconnected directly or through a management node. Each data processing node may include a processor, memory, and a hard disk. When processing data, a data processing node may first load the data to be processed onto its local hard disk, and then load all or part of that data from the hard disk into the memory according to the size of the memory's storage space, so that the processor can read the data from the memory for processing. Because the read-write speed of memory is higher than that of the hard disk, the processor can acquire and process the data to be processed at a higher speed. Transferring data from a hard disk (or other slow storage device) to memory may be referred to as storing data to memory or loading data into memory.
In addition, if the volume of the data to be processed is large and exceeds the storage space of the memory of a data processing node, part of the data to be processed can first be loaded into the memory for the processor to process. When data needed by the processor is not in the memory, data that has already been processed in the memory can be replaced with the data that still needs to be processed. For example, the swap-in and swap-out of data in the memory can be realized by a memory map (mmap) method.
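As a rough illustration of this mechanism, the following is a minimal Go sketch (for Linux) that maps a file into memory with mmap, letting the operating system swap pages in and out on demand; the file path is hypothetical.

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	// Open the on-disk data file (the path is hypothetical).
	f, err := os.Open("/data/to_process.bin")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	info, err := f.Stat()
	if err != nil {
		panic(err)
	}

	// Map the file into the process address space. Pages are faulted in
	// from disk on demand and evicted under memory pressure, which gives
	// the swap-in/swap-out behavior described above.
	data, err := syscall.Mmap(int(f.Fd()), 0, int(info.Size()),
		syscall.PROT_READ, syscall.MAP_SHARED)
	if err != nil {
		panic(err)
	}
	defer syscall.Munmap(data)

	fmt.Printf("mapped %d bytes\n", len(data))
}
```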
In the above process, all the data to be processed needs to be transferred to the hard disk of the data processing node, and the data in the memory needs to be continuously replaced with data from the hard disk. Storing data to the hard disk takes a long time, and so does repeatedly replacing the data in the memory. As a result, the efficiency of data processing is reduced.
In the application scenario shown in fig. 1, for example, the distributed processing system comprises a data processing node 1, a data processing node 2 and a data processing node 3. Assume the data to be processed includes data A stored in the data processing node 1, data B stored in the data processing node 2, and data C stored in the data processing node 3. When the data processing node 1 processes the data to be processed, it may first obtain data B from the data processing node 2 and data C from the data processing node 3, and store them on its own hard disk. Next, the data processing node 1 loads data A from its hard disk into its memory, so that the processor processes data A in the memory. After data A is processed, data B is loaded from the hard disk of the data processing node 1 into its memory. After data B is processed, data C is loaded from the hard disk of the data processing node 1 into its memory.
In the above process, on one hand, the data processing node 1 needs to receive data B and data C and write them into its hard disk; the writing speed of the hard disk is slow, so writing the data into the hard disk consumes a lot of time. On the other hand, the data processing node 1 needs to replace the data stored in its memory multiple times, which also consumes time. For both of these reasons, the data processing speed of current distributed processing systems is slow.
That is, in current distributed processing systems, data is transferred to a data processing node so as to process the data using the computing power provided by that node. However, data transfer is slow, which makes data processing slow.
Based on this, embodiments of the present application provide a data migration method that may be performed by a data processing node in a distributed processing system. The method can be executed by a data migration apparatus in the data processing node and is used to migrate the computing metadata generated during data processing while the storage locations of the data to be processed remain unchanged, thereby realizing data-based computing power migration and improving data processing efficiency.
In particular, assume that the distributed processing system includes a first node and a second node, both of which are data processing nodes. After receiving a first processing task to process first data and second data, the first node may load the first data into its memory, and the second node may load the second data into its memory. The processor of the first node may then execute the first processing task based on the first data stored in the memory of the first node; the first data is gradually processed as the first processing task executes. After the first data is processed, continuing the first processing task requires processing the second data. At this point, the data migration apparatus of the first node may determine the second node according to the first processing task and then send the computing metadata to the second node. The memory of the second node already holds the second data required to continue executing the first processing task, and the computing metadata is the data required to continue executing it. Thus, after receiving the computing metadata, the second node may continue to execute the first processing task and process the second data according to the computing metadata. In this way, although the data to be processed corresponding to the first processing task is stored partly in the first node and partly in the second node, the first data and the second data do not need to be gathered on the first node: the first data already loaded into the memory of the first node is processed on the first node, and the data required to continue executing the first processing task is transferred to the second node, so that the second node continues to execute the first processing task and processes the second data in its memory. The data to be processed stays in place while the computing power (namely, the computing metadata) is migrated; there is no need to write all the data to be processed into the hard disk of the first node, nor to repeatedly replace the data stored in the memory. Thus, the data processing efficiency of the distributed processing system is improved.
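The following condensed Go sketch mirrors the flow just described; every name in it is an assumption made for illustration, not an interface defined by this application.

```go
// A condensed sketch of the migration flow; all names are assumptions.
package migration

// Task carries the ordered chunk IDs of the data to be processed.
type Task struct {
	ID     string
	Chunks []string
}

// FirstNode bundles what the flow needs: a routing table recording which
// node holds each chunk in memory, a local processing step, and a way to
// send the computing metadata to another node.
type FirstNode struct {
	routes  map[string]string                         // chunk ID -> node address
	process func(t *Task, chunk string) (meta []byte) // runs the task locally
	send    func(nodeAddr string, meta []byte) error  // ships the metadata
}

// Run executes the task on the locally loaded chunk, then migrates the
// computing metadata (not the data) to the node holding the next chunk.
func (n *FirstNode) Run(t *Task, localChunk string) error {
	meta := n.process(t, localChunk)
	for i, c := range t.Chunks {
		if c == localChunk && i+1 < len(t.Chunks) {
			next := t.Chunks[i+1] // the second data, processed after the first
			return n.send(n.routes[next], meta)
		}
	}
	return nil // the local chunk was the last one; the task is complete
}
```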
As an example, the data migration apparatus described above may be deployed at a data processing node in a distributed processing system for implementing migration of computing metadata. For example, in the application scenario shown in fig. 2, the data migration apparatus 130 may be specifically applied to the first node 100 in the distributed processing system, and the distributed processing system further includes the second node 200. In particular, the data migration means 130 may be a software means running in the processor 120 of the first node 100.
As a possible implementation, in order to increase the speed at which memory information is synchronized, the nodes in the distributed processing system are connected through high-speed interconnection units. A high-speed interconnection unit may be, for example, a module with fast data transfer capability such as a remote direct memory access (RDMA) network card.
In actual use, the data migration apparatus 130 may include a determining module 131 and a sending module 132. The determining module 131 is configured to determine the second node according to the first processing task after the first data in the memory of the first node has been processed. The sending module 132 is configured to send the computing metadata to the second node. In this way, the processor 220 of the second node 200 may restore and rebuild the first processing task on the second node 200 based on the computing metadata, thereby processing the data.
In practical applications, the data migration apparatus 130 may be implemented by software, or may be implemented by hardware.
As an example of a software functional unit, the data migration apparatus 130 may comprise code running on a computing instance. The computing instance may include at least one of a physical host (computing device), a virtual machine, and a container. Further, there may be one or more such computing instances. For example, the data migration apparatus may include code running on multiple hosts/virtual machines/containers. It should be noted that the multiple hosts/virtual machines/containers running the code may be distributed in the same region or in different regions. Further, they may be distributed in the same availability zone (AZ) or in different AZs, each AZ comprising one data center or multiple geographically close data centers. A region typically comprises multiple AZs.
Also, multiple hosts/virtual machines/containers for running the code may be distributed in the same virtual private cloud (virtual private cloud, VPC) or in multiple VPCs. In general, one VPC is disposed in one region, and a communication gateway is disposed in each VPC for implementing inter-connection between VPCs in the same region and between VPCs in different regions.
As an example of a hardware functional unit, the data migration apparatus may include at least one computing device, such as a server. Alternatively, the data migration apparatus may be a device implemented using an application-specific integrated circuit (ASIC) or a programmable logic device (PLD). The PLD may be implemented as a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
The multiple computing devices included in the data migration apparatus may be distributed in the same region or may be distributed in different regions. The plurality of computing devices included in the data migration apparatus may be distributed in the same AZ or may be distributed in different AZ. Likewise, multiple computing devices included in the data migration apparatus may be distributed in the same VPC, or may be distributed in multiple VPCs. Wherein the plurality of computing devices may be any combination of computing devices such as servers, ASIC, PLD, CPLD, FPGA, and GAL.
For example, in some possible implementations, the data migration apparatus may be integrated into the network card of a node in the distributed processing system. Specifically, a storage unit and a logic processing unit may be built into the network card of the node. The logic processing unit is used to acquire the computing metadata and write it into the storage unit of the network card. The logic processing unit is further configured to determine the second node and to send the computing metadata to the second node through other units in the network card.
Various non-limiting embodiments of the data migration process are described in detail below.
Referring to fig. 3, a flow chart of a data migration method in an embodiment of the present application is shown. The method can be applied to the application scenario shown in fig. 2, or to other applicable application scenarios. The following description takes the application scenario shown in fig. 2 as an example, and the function of each module is described in detail in the following embodiments.
It should be noted that, in the application scenario shown in fig. 2, the data migration apparatus 130 includes not only a determining module 131 and a sending module 132 but may also further include an acquisition module 133 and a storage module 134. Since the acquisition module 133 and the storage module 134 are optional implementations, the portions associated with them are represented by dashed lines in fig. 2. In addition, since the computing metadata in the second node 200 is sent by the first node 100, the computing metadata in the second node 200 in fig. 2 is also indicated by a dashed line.
The data migration method shown in fig. 3 specifically may include:
s301: the processor 120 performs a first processing task to process first data in the memory 110.
In this embodiment, the distributed processing system is configured to execute a first processing task, which is used for processing data. The data to be processed corresponding to the first processing task can be divided into at least first data and second data. The first data is stored on the hard disk of the first node 100 and then loaded into the memory 110 of the first node 100. The second data is stored on the hard disk of the second node 200 and then loaded into the memory 210 of the second node 200. The amount of the first data is not greater than the storage space of the memory 110, and the amount of the second data is not greater than the storage space of the memory 210.
In the data to be processed corresponding to the first processing task, the second data comes after the first data; that is, once the first data has been processed, the first processing task next needs to process the second data. During execution of the first processing task, the processor 120 may read the first data stored in the memory 110 and process it.
During processing of the first data, the processor 120 may generate intermediate data. The intermediate data is data obtained by processing part or all of the first data. For example, if the first processing task is a model training task, the intermediate data may be a model whose training is not yet complete. In this embodiment, the intermediate data may be stored in the memory 110 and/or the processor 120. If stored in the processor 120, the intermediate data may be kept, for example, in a cache or a register of the processor 120.
In addition, the processor 120 may also generate state metadata during processing of the first data. The state metadata is used to describe the execution state of the first processing task, i.e. to what extent the first processing task is executed by the first node 100. Alternatively, the execution states may include an unexecuted state, an execution completed state, and an in-execution state. Alternatively, the execution state may be used to describe to which step a process or thread or instruction is specifically executed.
Details of intermediate data and state metadata may be found below and are not repeated here.
As can be seen from the foregoing description, the first data is loaded into the memory 110, and the second data is loaded into the memory 210. In some possible implementations, the loading of the first data and the second data may be triggered by the data migration device 130.
Specifically, when the first processing task is allocated to the first node 100, the data migration apparatus 130 may determine that data to be processed corresponding to the first processing task is stored in the first node 100 and the second node 200, and then instruct the first node 100 to load the first data into the memory 110 and instruct the second node 200 to load the second data into the memory 210.
In some possible embodiments, the data migration device 130 may split the data to be processed according to the size of the data to be processed and the size of the memory. Specifically, after receiving the first processing task, the size of the data to be processed corresponding to the first processing task may be obtained. The data migration apparatus 130 may then compare the size of the data to be processed with the size of the remaining storage space of the memory 110.
If the size of the data to be processed corresponding to the first processing task is smaller than the size of the remaining storage space of the memory 110, the data migration apparatus 130 may not split the data to be processed. In this way, the data migration device 130 may load all the data to be processed corresponding to the first processing task into the memory 110, so that the first node 100 performs the first processing task. For this case, no computational power migration is required.
If the size of the data to be processed corresponding to the first processing task is greater than the size of the remaining storage space of the memory 110, the data migration device 130 may split the data to be processed into a plurality of data. The plurality of data includes first data and second data. The first data has a size smaller than the remaining storage space of the memory 110 and can be loaded into the memory 110. The second data, which has a size smaller than the remaining storage space of the memory 210, can be loaded into the memory 210. Alternatively, if the size of the data to be processed corresponding to the first processing task is greater than the sum of the remaining storage spaces of the memory 110 and the memory 210, the data migration device 130 may split the data to be processed into three or more data.
After determining how the data to be processed is to be split, the data migration apparatus 130 may instruct the nodes to load the data into memory. Specifically, the sending module 132 of the data migration apparatus 130 may send a first notification message to the second node 200, where the first notification message is used to instruct the second node 200 to load the second data into the memory 210. Thus, after receiving the first notification message, the processor 220 may determine the second data from the hard disk of the second node 200 and load it into the memory 210. In this way, the second data is loaded into the memory before it is processed, which can improve the processing efficiency of the second data and thereby the execution speed of the first processing task.
Optionally, in some possible implementations, the first node 100 and the second node 200 each store the data to be processed corresponding to the first processing task. The data migration apparatus 130 may then instruct the first node 100 to divide the data to be processed into first data and second data and load the first data into the memory 110, and instruct the second node 200 to divide the data to be processed in the same way and load the second data into the memory 210. It will be appreciated that if the data to be processed corresponding to the first processing task further includes third data, the data migration apparatus 130 may further instruct a third node (not shown in fig. 2) to divide the data to be processed into the first data, the second data, and the third data, and load the third data into the memory of the third node. The above actions of instructing the second node (and other nodes) to split the data to be processed and load it into memory may be performed by the determining module 131 of the data migration apparatus 130, or by the sending module 132.
After instructing the first node 100 to load the first data into the memory 110 and the second node 200 to load the second data into the memory 210, the data migration apparatus 130 may generate the routing correspondence. The routing correspondence is used to indicate the correspondence between the first node 100 and the first data, and between the second node 200 and the second data. That is, according to the routing correspondence, it may be determined that the first data is loaded into the memory of the first node 100 and that the second data is loaded into the memory of the second node 200. Optionally, the routing correspondence is stored in the storage module 134 of the data migration apparatus 130.
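A minimal Go sketch of this split-and-record step follows, under assumed types; the NodeInfo type and the chunk-index-to-address map standing in for the routing correspondence are illustrative only.

```go
package migration

import "fmt"

// NodeInfo describes a candidate node; FreeBytes is the remaining storage
// space of that node's memory.
type NodeInfo struct {
	Addr      string
	FreeBytes int64
}

// splitAndRoute assigns consecutive byte ranges of the data to be processed
// to nodes with enough free memory, and returns the routing correspondence
// (chunk index -> node address). The caller would then send each node a
// first notification message telling it to load its range into memory.
func splitAndRoute(totalBytes int64, nodes []NodeInfo) (map[int]string, error) {
	routes := make(map[int]string)
	var offset int64
	chunk := 0
	for _, n := range nodes {
		if offset >= totalBytes {
			break // everything has been placed
		}
		if n.FreeBytes <= 0 {
			continue // this node's memory has no room left
		}
		size := n.FreeBytes
		if remaining := totalBytes - offset; size > remaining {
			size = remaining
		}
		routes[chunk] = n.Addr // record: this chunk lives in n's memory
		offset += size
		chunk++
	}
	if offset < totalBytes {
		return nil, fmt.Errorf("node memories too small: placed %d of %d bytes",
			offset, totalBytes)
	}
	return routes, nil
}
```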
S302: in response to the first data processing being completed, the determining module 131 determines the second node 200.
As the first processing task is executed, the first data stored in the memory 110 is progressively processed. Once the first data stored in the memory 110 has been fully processed, the first processing task cannot continue to execute on the first node 100. In that case, the determining module 131 may determine the second node 200, so that the first processing task can be transferred to the second node 200 and continue to execute there.
In some possible implementations, the determining module 131 determines the second node 200 based on the routing correspondence. Specifically, after the first data is processed, the determining module 131 may determine the second data according to the first processing task and the first data. The determining module 131 then looks up the node corresponding to the second data in the routing correspondence, thereby determining the second node. That is, the determining module 131 determines the data that the first processing task needs to process next (i.e. the second data), and then, according to the routing correspondence, determines from the plurality of nodes of the distributed processing system the node in which the second data is stored (i.e. the second node 200), so that the first processing task can be transferred to that node and continue to process the second data.
S303: the sending module 132 sends the calculated metadata to the memory 210.
After the second node 200 is determined, the sending module 132 may send the computing metadata to the memory 210. The computing metadata may be acquired by the acquisition module 133 and passed to the sending module 132. The computing metadata is specifically used for transferring the first processing task.
Thus, after the computing metadata is written to the memory 210, the processor 220 of the second node 200 may restore and rebuild the first processing task from the computing metadata stored in the memory 210, thereby continuing its execution. Restoring and rebuilding the first processing task means that the first processing task is re-established on the second node 200 and continues to execute from the progress reached on the first node 100.
Optionally, the computing metadata may include state metadata and intermediate data. The state metadata describes the execution state of the first processing task after the first data has been processed, and the intermediate data is the data obtained from processing the first data. Accordingly, the processor 220 may determine from the state metadata how far the first processing task has been executed, and thereby continue processing the intermediate data and the second data from the progress reached on the first node 100, continuing execution of the first processing task.
In actual use, the first processing task may comprise one or more processes, each of which may be used to implement all or part of the functionality of the first processing task. One or more threads may be invoked to implement the functionality of a process during execution of the process by a processor. One or more instructions may be executed in sequence during execution of the thread by the processor. Each instruction is for processing data to be processed one or more times.
That is, the execution state of the first processing task may be the execution state of a process of the first processing task, of a thread within such a process, or of an instruction within such a thread. Accordingly, in embodiments of the present application, state metadata may be used to describe the execution state of a process, a thread, or an instruction: the state metadata in the computing metadata may indicate the execution state of a process of the first processing task, of a thread in a process of the first processing task, or of an instruction in a thread of a process of the first processing task.
Specifically, assuming that the execution of the first processing task requires completion of 10 processes, state metadata may be used to describe how far the above 10 processes are respectively executed; and/or, assuming that the execution of the first processing task requires completion of the first process, the execution of the first process requires completion of 5 threads, the state metadata may be used to describe how far five threads in the first process are executed, respectively; and/or, assuming that execution of the first processing task requires completion of the first process, execution of the first process requires completion of the first thread, and execution of the first thread requires completion of the first instruction, the state metadata may be used to describe an execution state of the first instruction.
The data migration apparatus 130 may obtain the computing metadata before sending it to the memory 210. Optionally, if the state metadata in the computing metadata is used to describe the execution state of each process of the first processing task on the first node 100, the data migration apparatus 130 may obtain the state metadata by freezing threads. Specifically, when the first processing task can no longer continue to execute, the data migration apparatus 130 may freeze all threads corresponding to the first processing task in the processor 120 and obtain the state metadata and the intermediate data from the memory 110. Optionally, the data migration apparatus may freeze all threads corresponding to the first processing task through the CRIU technique so as to obtain the computing metadata. In this way, the computing metadata obtained by freezing the threads can describe the execution of every thread of the first processing task. Since a process may include multiple threads, the computing metadata obtained in this way comprehensively describes the execution of those threads, and thus of the process. When the first processing task is restored and rebuilt, the execution state of the process can therefore be determined from the combined execution states of its threads, so that the first processing task is restored and rebuilt on a per-process basis.
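As a hedged illustration of the freezing step, the following Go sketch shells out to the CRIU command-line tool; the PID and image directory are hypothetical examples, and an implementation could instead use CRIU's RPC interface.

```go
package main

import (
	"fmt"
	"os/exec"
)

// freezeTask checkpoints the process tree of the first processing task with
// the CRIU command-line tool. "criu dump" freezes every thread of the target
// process and writes its state (the state metadata) into imageDir;
// --leave-stopped keeps the frozen threads from running on while the
// computing metadata is collected and sent to the second node.
func freezeTask(pid int, imageDir string) error {
	cmd := exec.Command("criu", "dump",
		"-t", fmt.Sprint(pid), // root PID of the task's process
		"-D", imageDir,        // directory for the CRIU image files
		"--leave-stopped")     // freeze, but do not kill, the process
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("criu dump failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// The PID and image directory are hypothetical examples.
	if err := freezeTask(12345, "/tmp/task-images"); err != nil {
		fmt.Println(err)
	}
}
```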
Optionally, in some possible implementations, intermediate data and/or state metadata may also be stored in a cache or register of the processor 120, and then the data migration device 130 may obtain the computing metadata from the memory 110 and the cache or register of the processor 120.
After the computing metadata is obtained, it may be sent to the memory 210 through the sending module 132. Optionally, if the first node 100 and the second node 200 are connected through the transmission control protocol (TCP), the sending module 132 may send the computing metadata to the second node 200 over a TCP connection. Upon receiving the computing metadata, the second node 200 may load it into the memory 210.
Alternatively, if the first node 100 includes an RDMA network card, the sending module 132 may directly access the memory 210 of the second node 200 and write the computing metadata into the memory 210. Because with protocols such as RDMA the local processor may be unaware of data being written into the local memory, in this implementation the sending module 132 may send a second notification message to the second node 200 after the computing metadata has been completely written into the memory 210, so that the processor 220 determines, according to the second notification message, that the computing metadata corresponding to the first processing task has been loaded into the memory 210.
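The sequence of a one-sided write followed by an explicit notification could look like the following Go sketch; the RDMAWriter interface and the Notification format are assumptions, since the actual RDMA verbs layer is out of scope here.

```go
package migration

import (
	"encoding/json"
	"net"
)

// RDMAWriter abstracts a one-sided RDMA write; a real implementation would
// sit on top of an RDMA verbs library, which is out of scope for this sketch.
type RDMAWriter interface {
	// Write places buf directly into the remote node's registered memory
	// region without involving the remote CPU.
	Write(remoteAddr uint64, buf []byte) error
}

// Notification is an assumed payload for the second notification message.
type Notification struct {
	TaskID string `json:"task_id"`
	Bytes  int    `json:"bytes"` // how much computing metadata was written
}

// migrateMetadata writes the computing metadata into the second node's
// memory with a one-sided RDMA write, then tells the second node's processor
// about it over TCP, since a one-sided write leaves the remote CPU unaware.
func migrateMetadata(w RDMAWriter, remoteMem uint64, meta []byte,
	taskID, secondNodeTCP string) error {
	if err := w.Write(remoteMem, meta); err != nil {
		return err
	}
	conn, err := net.Dial("tcp", secondNodeTCP) // second notification message
	if err != nil {
		return err
	}
	defer conn.Close()
	return json.NewEncoder(conn).Encode(Notification{
		TaskID: taskID,
		Bytes:  len(meta),
	})
}
```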
S304: the processor 220 resumes the first processing task based on the calculated metadata and processes the second data in the memory 210.
After the computing metadata is loaded into the second node 200, the processor 220 may restore the first processing task based on the computing metadata, thereby continuing to execute it from the progress reached on the first node 100. As can be seen from the foregoing description, the first processing task processes the first data and the second data in sequence, so the processor 220 continuing to execute the first processing task amounts to processing the second data in the memory 210 based on the computing metadata.
In this way, although the data to be processed corresponding to the first processing task is stored partly in the first node and partly in the second node, the first data and the second data do not need to be gathered on the first node: the first data already loaded into the memory of the first node is processed on the first node, and the data required to continue executing the first processing task is transferred to the second node, so that the second node continues to execute the first processing task and processes the second data in its memory. The data to be processed stays in place while the computing power (namely, the computing metadata) is migrated; there is no need to write all the data to be processed into the hard disk of the first node, nor to repeatedly replace the data stored in the memory. Thus, the data processing efficiency of the distributed processing system is improved.
It should be noted that the division of units in the data migration apparatus 130 and the description of their functions in the embodiments of the present application are only examples. For instance, in other embodiments, the determining module 131 may be configured to perform any step of the data migration method, and likewise the sending module 132 may be configured to perform any step of the data migration method; the steps that the determining module 131 and the sending module 132 are responsible for implementing may be specified as needed, with the two modules implementing different steps of the data migration method so as to realize the functions of the data migration apparatus 130.
In the embodiment shown in fig. 3, the data migration apparatus 130 (including the determining module 131 and the sending module 132) related to the data migration process may be software configured on a computing device or a computing device cluster, and by running the software on the computing device or the computing device cluster, the computing device or the computing device cluster may implement the functions of the data migration apparatus. The data migration apparatus involved in the data migration process is described in detail below based on the hardware device implementation angle.
Fig. 4 shows a schematic structural diagram of a computing device on which the data migration apparatus may be deployed. The computing device may be a computing device in a cloud environment (such as a server), a computing device in an edge environment, or a terminal device, and may be specifically configured to implement the functions of the determining module 131 and the sending module 132 in the embodiment shown in fig. 3.
As shown in fig. 4, the computing device 400 includes a processor 410, a memory 420, a communication interface 430, and a bus 440. The processor 410, the memory 420, and the communication interface 430 communicate via the bus 440. The bus 440 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. Buses may be divided into address buses, data buses, control buses, and so on. For ease of illustration, only one thick line is shown in fig. 4, but this does not mean there is only one bus or one type of bus. The communication interface 430 is used for communicating with the outside, for example, receiving the first processing task and its corresponding data to be processed, and sending the processing result corresponding to the first processing task.
The processor 410 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), or one or more integrated circuits. The processor 410 may also be an integrated circuit chip with signal processing capability. In implementation, the functions of the units of the data migration apparatus may be accomplished by integrated logic circuits of hardware in the processor 410 or by instructions in the form of software. The processor 410 may also be a general-purpose processor, a digital signal processor (DSP), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logical blocks disclosed in the embodiments of the present application. The methods disclosed in the embodiments of the present application may be directly embodied in a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 420; the processor 410 reads the information in the memory 420 and, in combination with its hardware, performs some or all of the functions of the data migration apparatus.
The memory 420 may include volatile memory, such as random access memory (RAM). The memory 420 may also include non-volatile memory, such as read-only memory (ROM), flash memory, an HDD, or an SSD.
The memory 420 has stored therein executable code that the processor 410 executes to perform the methods performed by the data migration apparatus described previously.
Specifically, when the embodiment shown in fig. 3 is implemented and the determining module 131 and the sending module 132 described in that embodiment are implemented by software, the software or program code required to perform the functions of the determining module 131 and the sending module 132 in fig. 3 is stored in the memory 420; the interaction of the data migration apparatus 130 with other devices is implemented through the communication interface 430, and the processor executes the instructions in the memory 420 to implement the method performed by the data migration apparatus.
Fig. 5 illustrates a schematic diagram of a computing device cluster. Wherein the computing device cluster 50 shown in fig. 5 includes a plurality of computing devices, the above-described data migration apparatus may be distributed and deployed on one or more computing devices in the computing device cluster 50. As shown in fig. 5, the computing device cluster 50 includes a plurality of computing devices 500, each computing device 500 including a memory 520, a processor 510, a communication interface 530, and a bus 540, wherein the memory 520, the processor 510, and the communication interface 530 are communicatively connected to each other through the bus 540.
The processor 510 may be a CPU, GPU, ASIC, or one or more integrated circuits. The processor 510 may also be an integrated circuit chip with signal processing capability. In implementation, some of the functions of the data migration apparatus may be accomplished by integrated logic circuits of hardware in the processor 510 or by instructions in the form of software. The processor 510 may also be a DSP, an FPGA, a general-purpose processor, another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform some of the methods, steps, and logical blocks disclosed in the embodiments of the present application. The steps of the methods disclosed in the embodiments of the present application may be directly embodied in a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 520; in each computing device 500, the processor 510 reads the information in the memory 520 and, in combination with its hardware, can perform part of the functions of the data migration apparatus.
The memory 520 may include ROM, RAM, static storage, dynamic storage, or a hard disk (e.g., an SSD or HDD). The memory 520 may store program code, for example some or all of the program code for implementing the determining module 131 and some or all of the program code for implementing the sending module 132. For each computing device 500, when the program code stored in the memory 520 is executed by the processor 510, the processor 510 performs, via the communication interface 530, a portion of the methods performed by the data migration apparatus; for example, some of the computing devices 500 may be used to perform the methods performed by the determining module 131 described above, and others may be used to perform the methods performed by the sending module 132 described above. The memory 520 may also store data, such as intermediate or result data generated during execution by the processor 510, for example the computing metadata described above.
The communication interface 530 in each computing device 500 is used to communicate externally, for example to interact with other computing devices 500.
The bus 540 may be a peripheral component interconnect standard bus, an extended industry standard architecture bus, or the like. For ease of illustration, the bus 540 within each computing device 500 in fig. 5 is represented by a single thick line, but this does not mean that there is only one bus or only one type of bus.
Communication paths are established between the plurality of computing devices 500 through a communication network to realize the functions of the data migration apparatus. Any of the computing devices may be a computing device in a cloud environment (e.g., a server), a computing device in an edge environment, or a terminal device.
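By way of illustration only, the following minimal Python sketch mimics the role split described above: one "device" hosts the determining function and another hosts the sending function, exchanging the computing metadata over a local socket. The port, message format, and role assignment are assumptions introduced for the sketch, not details of this application.

```python
# Illustrative sketch only: two cluster roles run as threads within one
# process for brevity. Port and message fields are hypothetical.
import json
import socket
import threading
import time

def sending_device(port: int) -> None:
    # Stands in for a computing device 500 hosting the sending module:
    # it receives the computing metadata for forwarding to the second node.
    with socket.create_server(("127.0.0.1", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            meta = json.loads(conn.recv(4096).decode())
            print("sending device received:", meta)

def determining_device(port: int) -> None:
    # Stands in for a computing device 500 hosting the determining module:
    # it decides the second node and hands the metadata over.
    meta = {"task": "task-1", "second_node": "node-2", "state": {"step": 42}}
    with socket.create_connection(("127.0.0.1", port)) as conn:
        conn.sendall(json.dumps(meta).encode())

server = threading.Thread(target=sending_device, args=(9500,))
server.start()
time.sleep(0.2)                 # let the listener bind before connecting
determining_device(9500)
server.join()
```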
Furthermore, the present application also provides a computer-readable storage medium having stored therein instructions which, when executed on one or more computing devices, cause the one or more computing devices to perform the methods performed by the respective units of the data migration apparatus of the above embodiments.
Further, embodiments of the present application provide a computer program product that, when executed by one or more computing devices, performs any of the foregoing data migration methods. The computer program product may be a software installation package, which may be downloaded and executed on a computer whenever any of the foregoing data migration methods needs to be used.
It should be further noted that the above-described apparatus embodiments are merely illustrative. The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, in the drawings of the apparatus embodiments provided in this application, the connection relationships between units indicate that they are communicatively connected, which may be specifically implemented as one or more communication buses or signal lines.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present application may be implemented by software plus the necessary general-purpose hardware, or by dedicated hardware including application-specific integrated circuits, dedicated CPUs, dedicated memories, dedicated components, and the like. In general, any function performed by a computer program can also be implemented by corresponding hardware, and the specific hardware structure used to implement the same function can vary: an analog circuit, a digital circuit, or a dedicated circuit, for example. For the present application, however, a software implementation is the preferred embodiment in most cases. Based on such understanding, the technical solution of the present application, or the part contributing to the prior art, may be embodied in the form of a software product stored in a readable storage medium, such as a floppy disk, a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk, and includes several instructions for causing a computer device (which may be a personal computer, a training device, or a network device) to perform the methods described in the embodiments of the present application.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When software is used for implementation, the embodiments may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, training device, or data center to another website, computer, training device, or data center by wired means (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device, such as a training device or a data center, integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)).
Claims (20)
1. A method of data migration, the method comprising:
in response to completion of processing of first data in a memory of a first node, determining a second node according to a first processing task, wherein second data is stored in a memory of the second node, and the second data is processed after the first data in the first processing task;
and sending computing metadata to the second node, wherein the second node is configured to continue executing the first processing task according to the computing metadata.
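Read as pseudocode rather than as a limitation of the claim, the two steps of claim 1 could be sketched as follows; the helper callables route_for and send and the metadata layout are hypothetical names introduced only for illustration.

```python
# Minimal sketch of the two steps of claim 1; all names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ComputingMetadata:
    intermediate_data: bytes = b""   # result of processing the first data
    state_metadata: dict = field(default_factory=dict)  # task execution state

def migrate(task_id: str, metadata: ComputingMetadata, route_for, send) -> None:
    # Step 1: processing of the first data in this node's memory has
    # completed, so determine the second node from the first processing
    # task; the second node's memory already holds the second data.
    second_node = route_for(task_id)
    # Step 2: send the computing metadata to the second node, which
    # continues executing the first processing task from that state.
    send(second_node, metadata)

# Usage with stand-in callables:
migrate("task-1", ComputingMetadata(b"partial", {"step": 42}),
        route_for=lambda t: "node-2",
        send=lambda node, m: print(f"metadata for {node}: {m}"))
```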
2. The method according to claim 1, wherein the computing metadata includes intermediate data and/or state metadata, the intermediate data being data obtained after the first processing task finishes processing the first data, and the state metadata being used for describing an execution state of the first processing task.
3. The method of claim 2, wherein the state metadata is used to describe an execution state of each process of the first processing task on the first node, or the state metadata is used to describe an execution state of each thread of a first process of the first processing task on the first node, or the state metadata is used to describe an execution state of each instruction of a first thread of the first process of the first processing task on the first node.
4. The method according to claim 3, wherein the state metadata is used to describe the execution state of each process of the first processing task on the first node, and wherein, prior to sending the computing metadata to the second node, the method further comprises:
freezing a plurality of threads corresponding to the first processing task on the first node; and
obtaining the computing metadata based on the frozen plurality of threads.
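The order required by claim 4 (freeze first, then capture) can be sketched as follows. Python threads cannot be frozen preemptively, so a cooperative stop flag stands in for a real OS-level freeze (e.g., CRIU-style checkpointing); all names are hypothetical.

```python
# Sketch of the freeze-then-capture order in claim 4.
import threading

stop = threading.Event()        # the "freeze" signal
lock = threading.Lock()
states = {}                     # per-thread execution state

def worker(name: str) -> None:
    step = 0
    while not stop.is_set():    # stand-in for processing the first data
        step += 1
    with lock:                  # once frozen, expose the resume point
        states[name] = {"resume_at_step": step}

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(4)]
for t in threads:
    t.start()
stop.set()                      # freeze the plurality of threads first...
for t in threads:
    t.join()
computing_metadata = {"state_metadata": states}   # ...then obtain the metadata
print(computing_metadata)
```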
5. The method of any of claims 1-4, wherein determining the second node according to the first processing task comprises:
determining the second data according to the first processing task and the first data;
and determining the second node according to a route correspondence, wherein the route correspondence is used for indicating that the second data is stored in the memory of the second node.
6. The method of claim 5, wherein prior to determining the second node, the method further comprises:
sending a first notification message to the second node, wherein the first notification message is used for instructing the second node to store the second data into the memory of the second node;
and generating the route correspondence according to the first notification message.
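Claims 5 and 6 together describe a lookup structure; one minimal way to realize it is a mapping from a data identifier to the node whose memory holds that data, as in the following sketch. The identifier scheme and the notify callable are illustrative assumptions.

```python
# Sketch of claims 5-6: the route correspondence as a plain mapping.
route_correspondence = {}

def on_first_notification(data_id: str, second_node: str, notify) -> None:
    # Claim 6: instruct the second node to store the second data in its
    # memory, then record the correspondence generated by that message.
    notify(second_node, {"type": "first_notification", "data_id": data_id})
    route_correspondence[data_id] = second_node

def determine_second_node(data_id: str) -> str:
    # Claim 5: the route correspondence indicates which node's memory
    # holds the second data.
    return route_correspondence[data_id]

on_first_notification("d2", "node-2", lambda n, m: print(f"to {n}: {m}"))
print(determine_second_node("d2"))   # -> node-2
```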
7. The method of claim 6, wherein prior to sending the first notification message to the second node, the method further comprises:
acquiring the size of the data to be processed of the first processing task; and
splitting the data to be processed into the first data and the second data in response to the size of the data to be processed being larger than the memory space of the first node.
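A sketch of the split in claim 7 follows; filling the first node's memory and spilling the remainder to the second data is one assumed strategy, since the claim does not fix how the split is made.

```python
# Sketch of claim 7: split when the data exceeds the first node's memory.
def split_if_needed(to_process: bytes, first_node_mem: int):
    if len(to_process) <= first_node_mem:
        return to_process, b""            # fits: no second data needed
    # First data fills the first node's memory; the remainder becomes
    # the second data, stored in the second node's memory.
    return to_process[:first_node_mem], to_process[first_node_mem:]

first_data, second_data = split_if_needed(b"0123456789", 6)
print(first_data, second_data)            # b'012345' b'6789'
```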
8. The method of any of claims 1-7, wherein after sending the computing metadata to the second node, the method further comprises:
and sending a second notification message to the second node, wherein the second notification message is used for instructing the second node to continue executing the first processing task according to the computing metadata.
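From the second node's side, the sequence of claims 1 and 8 could look like the following sketch: the computing metadata arrives first, and the second notification message triggers continued execution. The message fields are illustrative assumptions.

```python
# Sketch of claim 8 on the receiving (second) node. Names hypothetical.
def resume_task(second_data: bytes, state: dict) -> None:
    print(f"resuming at step {state.get('resume_at_step', 0)} "
          f"over {len(second_data)} bytes of second data")

def handle_message(msg: dict, local_memory: dict) -> None:
    if msg["type"] == "computing_metadata":
        local_memory["metadata"] = msg["payload"]       # stored on arrival
    elif msg["type"] == "second_notification":
        state = local_memory["metadata"]["state_metadata"]
        # Continue the first processing task on the second data already
        # held in this node's memory, starting from the saved state.
        resume_task(local_memory["second_data"], state)

mem = {"second_data": b"6789"}
handle_message({"type": "computing_metadata",
                "payload": {"state_metadata": {"resume_at_step": 42}}}, mem)
handle_message({"type": "second_notification"}, mem)
```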
9. A data migration apparatus, the apparatus comprising:
a determining module, configured to determine a second node according to a first processing task in response to completion of processing of first data in a memory of a first node, wherein second data is stored in a memory of the second node, and the second data is processed after the first data in the first processing task;
and a sending module, configured to send computing metadata to the second node, wherein the second node is configured to continue executing the first processing task according to the computing metadata.
10. The apparatus according to claim 9, wherein the computing metadata includes intermediate data and/or state metadata, the intermediate data being data obtained after the first processing task has processed the first data, and the state metadata being used to describe an execution state of the first processing task.
11. The apparatus of claim 10, wherein the state metadata is used to describe an execution state of each process of the first processing task on the first node, or the state metadata is used to describe an execution state of each thread of a first process of the first processing task on the first node, or the state metadata is used to describe an execution state of each instruction of a first thread of the first process of the first processing task on the first node.
12. The apparatus of claim 11, wherein the state metadata is used to describe the execution state of each process of the first processing task on the first node, and wherein
the determining module is further configured to freeze a plurality of threads corresponding to the first processing task on the first node, and to obtain the computing metadata based on the frozen plurality of threads.
13. The apparatus according to any one of claims 9 to 12, wherein
the determining module is specifically configured to determine the second data according to the first processing task and the first data, and to determine the second node according to a route correspondence, wherein the route correspondence is used for indicating that the second data is stored in the memory of the second node.
14. The apparatus of claim 13, wherein
the sending module is further configured to send a first notification message to the second node, wherein the first notification message is used for instructing the second node to store the second data into the memory of the second node; and to generate the route correspondence according to the first notification message.
15. The apparatus of claim 14, wherein
the determining module is further configured to acquire the size of the data to be processed of the first processing task, and to split the data to be processed into the first data and the second data in response to the size of the data to be processed being larger than the memory space of the first node.
16. The apparatus according to any one of claims 9 to 15, wherein
the sending module is further configured to send a second notification message to the second node, wherein the second notification message is used for instructing the second node to continue executing the first processing task according to the computing metadata.
17. A computing device cluster, comprising at least one computing device, each computing device comprising a processor and a memory, wherein:
the memory is used for storing instructions;
the processor is configured to execute the instructions to cause the computing device cluster to perform the method of any of claims 1 to 8.
18. A chip, comprising a memory and a processor, wherein the memory is used for storing instructions or program code, and the processor is used for calling and executing the instructions or program code from the memory to perform the method of any of claims 1 to 8.
19. A computer readable storage medium having instructions stored therein which, when run on a computing device, cause the computing device to perform the method of any of claims 1 to 8.
20. A computer program product containing instructions which, when run on a computing device, cause the computing device to perform the method of any of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211128342.2A | 2022-09-16 | 2022-09-16 | Data migration method and device and related equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117762320A (en) | 2024-03-26 |
Family
ID=90322308
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211128342.2A | Data migration method and device and related equipment | 2022-09-16 | 2022-09-16 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117762320A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||