CN114448897A - Target device migration method and device - Google Patents

Target device migration method and device

Info

Publication number
CN114448897A
Authority
CN
China
Prior art keywords
server
network load
preset threshold
servers
load rate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111643792.0A
Other languages
Chinese (zh)
Other versions
CN114448897B
Inventor
李贵斌
吴学含
薛强
李家伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi Cloud Technology Co Ltd
Original Assignee
Tianyi Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi Cloud Technology Co Ltd filed Critical Tianyi Cloud Technology Co Ltd
Priority to CN202111643792.0A
Publication of CN114448897A
Application granted
Publication of CN114448897B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/12 Avoiding congestion; Recovering from congestion
    • H04L 47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/14 Network analysis or design
    • H04L 41/147 Network analysis or design for predicting network behaviour


Abstract

The invention provides a target device migration method and apparatus. The method comprises: acquiring the network load rates of N servers at the Kth moment, where K and N are positive integers; when a first server is determined to exist according to the network load rates of the N servers, determining a second server according to the network load rates of the N servers, where the first server is a server whose network load rate at the Kth moment is greater than or equal to a first preset threshold and whose predicted network load rate at the (K+1)th moment is greater than or equal to the first preset threshold, and the second server is a server whose network load rate at the Kth moment is less than a second preset threshold; and migrating at least one target in the first server to the second server. The method achieves network load balancing among servers, avoids meaningless target device migration, and improves the accuracy of load balancing.

Description

Target device migration method and device
Technical Field
The invention relates to the technical field of computers, in particular to a target device migration method and device.
Background
With the increasing requirements on block storage technology in cloud storage services, the data read-write performance of the storage service has become a focus of attention in the field of distributed block storage. Distributed block storage services based on the Internet Small Computer System Interface (iSCSI) standard storage protocol typically deploy multiple nodes, each providing several iSCSI targets. The cluster performance of distributed block storage is mainly limited by the network card load of each node. For example, when a distributed block storage cluster provides a storage service, if the network card load of a certain node is high, the performance of all targets on that node is reduced, and the node becomes a performance bottleneck of the cluster.
In summary, how to implement network card load balancing between nodes is a significant concern.
Disclosure of Invention
The invention provides a target device migration method and a target device migration device, which are used for realizing network card load balance among nodes.
In a first aspect, the present invention provides a target migration method, including: acquiring the network load rates of N servers at the Kth moment, where K and N are positive integers; when a first server is determined to exist according to the network load rates of the N servers, determining a second server according to the network load rates of the N servers, where the first server is a server whose network load rate at the Kth moment is greater than or equal to a first preset threshold and whose predicted network load rate at the (K+1)th moment is greater than or equal to the first preset threshold, and the second server is a server whose network load rate at the Kth moment is less than a second preset threshold, the second preset threshold being less than or equal to the first preset threshold; and migrating at least one target in the first server to the second server.
By adopting the method, the network load balance among the servers can be realized, the meaningless target device migration can be avoided, and the accuracy of the load balance is improved.
In a possible design, the second server is the server whose network load rate at the Kth moment is less than the second preset threshold and whose predicted network load rate at the (K+1)th moment is the lowest; or, the second server is a server whose network load rate at the Kth moment is less than the second preset threshold and whose predicted network load rate at the (K+1)th moment is less than a third preset threshold, where the third preset threshold is less than or equal to the second preset threshold.
By adopting this design, the second server is ensured to be a server with a low network load rate at both the Kth moment and the (K+1)th moment.
In one possible design, the method further comprises: determining M candidate servers according to the network load rates of the N servers, where a candidate server is a server whose network load rate is greater than or equal to the first preset threshold, and M is a positive integer; determining the predicted network load rates of the M candidate servers at the (K+1)th moment according to the network load rates of the M candidate servers at the Kth moment and their respective network load rate estimation models; and taking the servers among the M candidate servers whose predicted network load rate is greater than or equal to the first preset threshold as the first server.
By adopting the design, meaningless target device migration can be avoided, and the accuracy of load balancing is improved.
In one possible design, the network load rate estimation model is determined according to a discrete Kalman algorithm.
In one possible design, when migrating at least one target in the first server to the second server, creating the at least one target in the second server and creating an association relationship between the at least one target and the second server; deleting the at least one target in the first server.
In a second aspect, the present invention provides a target migration apparatus, comprising:
the acquisition unit is configured to acquire the network load rates of N servers at the Kth moment, where K and N are positive integers; the processing unit is configured to determine a second server according to the network load rates of the N servers when a first server is determined to exist according to the network load rates of the N servers, where the first server is a server whose network load rate at the Kth moment is greater than or equal to a first preset threshold and whose predicted network load rate at the (K+1)th moment is greater than or equal to the first preset threshold, and the second server is a server whose network load rate at the Kth moment is less than a second preset threshold, the second preset threshold being less than or equal to the first preset threshold; and to migrate at least one target in the first server to the second server.
In a possible design, the second server is the server whose network load rate at the Kth moment is less than the second preset threshold and whose predicted network load rate at the (K+1)th moment is the lowest; or, the second server is a server whose network load rate at the Kth moment is less than the second preset threshold and whose predicted network load rate at the (K+1)th moment is less than a third preset threshold, where the third preset threshold is less than or equal to the second preset threshold.
In a possible design, the processing unit is further configured to: determine M candidate servers according to the network load rates of the N servers, where a candidate server is a server whose network load rate is greater than or equal to the first preset threshold, and M is a positive integer; determine the predicted network load rates of the M candidate servers at the (K+1)th moment according to the network load rates of the M candidate servers at the Kth moment and their respective network load rate estimation models; and take the servers among the M candidate servers whose predicted network load rate is greater than or equal to the first preset threshold as the first server.
In one possible design, the network load rate estimation model is determined according to a discrete Kalman algorithm.
In one possible design, the processing unit is configured to, when migrating at least one target in the first server to the second server, create the at least one target in the second server and create an association relationship between the at least one target and the second server; deleting the at least one target in the first server.
In a third aspect, an embodiment of the present invention provides a computer-readable storage medium, which stores computer instructions that, when executed on a computer, cause the computer to perform the above method.
In addition, for technical effects brought by any one implementation manner of the second aspect and the third aspect, reference may be made to technical effects brought by different implementation manners of the first aspect, and details are not described here.
Drawings
Fig. 1 is a schematic structural diagram of a distributed block storage system according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating an overview of a target migration method according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating a calculation of a network load factor prediction value according to an embodiment of the present invention;
fig. 4A is a timing diagram of migration of a source node target device according to an embodiment of the present invention;
fig. 4B is a flowchart of source node target device migration according to the embodiment of the present invention;
FIG. 5 is a schematic diagram of an apparatus according to an embodiment of the present invention;
fig. 6 is a second schematic structural diagram of an apparatus according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The application scenario described in the embodiment of the present invention is for more clearly illustrating the technical solution of the embodiment of the present invention, and does not form a limitation on the technical solution provided in the embodiment of the present invention, and it can be known by a person skilled in the art that with the occurrence of a new application scenario, the technical solution provided in the embodiment of the present invention is also applicable to similar technical problems. In the description of the present invention, the term "plurality" means two or more unless otherwise specified.
For implementing network card load balancing across nodes, the currently common solutions are as follows:
1. use dedicated hardware devices to perform network card load balancing; however, this requires purchasing additional hardware, which increases cost and also introduces a single point of failure;
2. configure multi-NIC bonding by means of kernel network tools to achieve network card load balancing; however, multi-NIC bonding can only achieve load balancing within a node, not among different nodes, and is therefore unsuitable for a distributed block storage cluster scenario;
3. developers have also proposed network card load balancing algorithms at the software level. However, existing software load balancing algorithms simply monitor whether the network card load exceeds a threshold and, if so, migrate the storage service traffic to an idle node. As a result, any network fluctuation triggers a migration of service traffic, which wastes system resources.
The target device migration method provided by the invention can overcome the problems, can realize network card load balance among all nodes, and provides guarantee for improving data transmission performance between the client application and the distributed block storage system. A system to which the method is applied is shown in fig. 1. The system is a distributed block storage system and comprises an application layer, a scheduling layer, a block storage layer and a storage array from top to bottom.
Wherein the application layer may deploy a plurality of client applications.
The scheduling layer can deploy a real-time monitoring module and a Kalman filtering scheduling module (hereinafter referred to as a scheduling module). The real-time monitoring module may be configured to collect network load data of each server and provide the network load data to the scheduling module, and the scheduling module is configured to implement the method described in fig. 2. The real-time monitoring module can be an independent module or a module inside the scheduling module. The network load data may be a network load rate or an actual network bandwidth. When the network load data is the actual network bandwidth, the scheduling module determines the network load rate according to the actual network bandwidth and the maximum value of the network bandwidth, that is, the network load rate is the actual network bandwidth/the maximum value of the network bandwidth.
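As a minimal illustrative sketch (function and variable names are assumptions, not from the patent), the conversion just described — network load rate = actual network bandwidth / maximum network bandwidth — can be written as:

```python
# Hypothetical helper computing the load rate exactly as defined above:
# actual network bandwidth divided by the maximum network bandwidth.
def network_load_rate(actual_bandwidth: float, max_bandwidth: float) -> float:
    if max_bandwidth <= 0:
        raise ValueError("max_bandwidth must be positive")
    return actual_bandwidth / max_bandwidth

# Example: a 10 Gbit/s network card currently carrying 7.5 Gbit/s
rate = network_load_rate(7.5e9, 10e9)  # 0.75
```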
The block storage layer deploys n (n ≥ 2) servers, which may also be called block storage target servers; each server can provide multiple targets for client applications to log in to and use. A server may also be referred to as a node, which is not limited in this application. Each target supports parsing and encapsulating the standard iSCSI storage protocol, processes I/O requests of client applications, and provides the block storage service; it is also the minimum unit of load-balancing scheduling.
The storage array includes a plurality of disks (disks).
With the above system, a typical workflow of the method may be: firstly, a client application and a target device provided by a server establish a data I/O path, an I/O request firstly reaches a block storage layer, and data access is completed in a storage array after protocol analysis of the target device.
By adopting the method provided by the invention, the network load rate of each server at the Kth moment is obtained; when a high-load server is determined to exist according to those network load rates, it is further determined whether that server will still be highly loaded at the (K+1)th moment. If so, an idle server is determined, and a target in the high-load server is migrated to the idle server. This solves the network card load balancing problem while ensuring service continuity.
The method is suitable for efficient access to block devices in cloud storage services, and is particularly applicable to distributed block storage services based on the iSCSI protocol. Network card load balancing is widely needed in block storage systems, and the invention can be directly deployed on any distributed cluster that supports the iSCSI standard storage protocol. The method optimizes the network card load balancing result across nodes in the cluster, ensures service continuity, and fully utilizes the network resources of the cluster to improve the performance of the block storage service.
As shown in fig. 2, the method includes:
step 200: and the scheduling module acquires the network load rates of N servers at the Kth moment, wherein K and N are positive integers.
In combination with the above, the scheduling module may obtain, from the real-time monitoring module, the network load rates of the N servers at the Kth moment; or the scheduling module may obtain the actual network bandwidths of the N servers at the Kth moment from the real-time monitoring module and then determine the corresponding network load rates according to each server's maximum network bandwidth.
Step 210: and the scheduling module determines a second server according to the network load rates of the N servers when determining that the first server exists according to the network load rates of the N servers.
The first server is a server whose network load rate at the Kth moment is greater than or equal to a first preset threshold and whose predicted network load rate at the (K+1)th moment is greater than or equal to the first preset threshold; the second server is a server whose network load rate at the Kth moment is less than a second preset threshold, where the second preset threshold is less than or equal to the first preset threshold. The first server may be referred to as a high-load server, and the second server as an idle server.
In addition, the first server may also be a server whose network load rate at the Kth moment is greater than or equal to the first preset threshold and whose predicted network load rate at the (K+1)th moment is greater than or equal to a fourth preset threshold, where the fourth preset threshold may be different from the first preset threshold.
For example, the scheduling module may first determine whether there is a server with a network load rate greater than or equal to a first preset threshold, i.e., a candidate server, according to the network load rates of the N servers, where the number of candidate servers may be one or more, for example, the number of candidate servers is M, and M is a positive integer.
The scheduling module may determine the predicted network load rates of the M candidate servers at the (K+1)th moment according to the network load rates of the M candidate servers at the Kth moment and their respective network load rate estimation models, and take the servers among the M candidate servers whose predicted network load rate is greater than or equal to the first preset threshold as the first server.
By adopting this method, the first server can be ensured to be a genuinely high-load server, avoiding the situation where a momentary spike in network load rate triggers target migration and thereby wastes system resources.
For example, assume N is 10 and the first preset threshold is the same as the second preset threshold. The scheduling module may first determine, according to the network load rates of the 10 servers at the Kth moment, the servers whose network load rates are greater than or equal to the first preset threshold — for example, 3 such servers, with the remaining 7 servers below the threshold. Then, according to the network load rates of those 3 servers at the Kth moment and their respective network load rate estimation models, the module determines their predicted network load rates at the (K+1)th moment and takes the servers among the 3 whose predicted network load rate is greater than or equal to the first preset threshold as the first server.
After determining the first server, the scheduling module may determine a second server. If the first server does not exist, the scheduling module need not determine a second server. Illustratively, the second server is the server with the lowest network load rate.
In addition, the scheduling module may further determine the predicted network load rates at the (K+1)th moment of the servers whose network load rates are less than the second preset threshold, and determine the second server according to these predicted values. In one possible design, the second server is the server whose network load rate is less than the second preset threshold and whose predicted network load rate at the (K+1)th moment is the lowest.
In another possible design, the second server is a server whose network load rate is less than the second preset threshold and whose predicted network load rate at the (K+1)th moment is less than a third preset threshold, where the third preset threshold is less than or equal to the second preset threshold.
It is understood that if the number of the first servers is S1, and S1 is a positive integer greater than or equal to 2, the number of the second servers may be S2, S2 ≦ S1, and S2 is a positive integer.
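As an illustrative sketch (server names, thresholds, and the `predict` callable are hypothetical; `predict` stands in for the Kalman-based estimator described below), the selection of the first and second servers can be expressed as:

```python
# Illustrative selection logic, not the patent's exact algorithm:
# a first server is overloaded now AND predicted overloaded at K+1;
# the second server is idle now with the lowest predicted load at K+1.
def select_servers(loads, predict, t1, t2):
    """loads: {server: load rate at moment K}; predict(s): predicted load at K+1."""
    candidates = [s for s, r in loads.items() if r >= t1]   # the M candidate servers
    first = [s for s in candidates if predict(s) >= t1]     # confirmed high-load servers
    idle = [s for s, r in loads.items() if r < t2]          # below the second threshold
    second = min(idle, key=predict) if idle else None       # lowest predicted load
    return first, second

loads = {"srv1": 0.92, "srv2": 0.40, "srv3": 0.95, "srv4": 0.20}
pred = {"srv1": 0.90, "srv2": 0.35, "srv3": 0.70, "srv4": 0.15}.get
first, second = select_servers(loads, pred, t1=0.85, t2=0.50)
# first == ["srv1"] (srv3 drops back below the threshold at K+1), second == "srv4"
```

Note how srv3, although overloaded at moment K, is excluded because its predicted load falls below the threshold — the "meaningless migration" the method is designed to avoid.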
The network load rate estimation model may be determined according to a discrete Kalman algorithm or in other manners, which is not limited in the present application.
The following illustrates the process of determining a network load rate estimation model using the discrete Kalman algorithm:
The network card load rate estimation model of a single server established by the invention is shown in formula (1):

$$b_s[k+1] = b_s[k] + T_s\,\omega_s[k] + \frac{T_s^2}{2}\,\dot{\omega}_s[k], \qquad \omega_s[k+1] = \omega_s[k] + T_s\,\dot{\omega}_s[k] \tag{1}$$

where $b_s$ is the network bandwidth of the server, $\omega_s$ is the first derivative of the network bandwidth, $\dot{\omega}_s$ is the second derivative of the network bandwidth, and $T_s$ is the sampling period. Assume the prediction horizon is $N_p$; then the predicted sequence from step $k+1$ to step $k+N_p$ can be described by formula (2):

$$B_s[k] = E_{bs}\,b_s[k] + F_{bs}\,\omega_s[k] + G_{bs}\,\dot{\omega}_s[k] \tag{2}$$

where $E_{bs} = [1\ \cdots\ 1]^T$, $F_{bs} = [T_s\ \ 2T_s\ \cdots\ N_pT_s]^T$, and $G_{bs} = \big[\tfrac{T_s^2}{2}\ \ \tfrac{(2T_s)^2}{2}\ \cdots\ \tfrac{(N_pT_s)^2}{2}\big]^T$.
Further, the network card load rate estimation model can be described by formulas (3) and (4), where $b$ is the network bandwidth state vector, $u$ is the second derivative of the network bandwidth in matrix (input) form, and $\gamma_s$ is the monitored server network load rate; $b_g = [b_{sg}\ \ 0]^T$ is the target network bandwidth state vector, where $b_{sg}$ is a preset network bandwidth threshold, i.e. the product of the first preset threshold and the maximum value of the network bandwidth.

$$b[k+1] = [b_s[k+1]\ \ \omega_s[k+1]]^T = A\,b[k] + B\,u[k] \tag{3}$$

$$\gamma_s[k] = C_b\,b[k] \tag{4}$$

Combining formula (3) and formula (4), one obtains

$$A = \begin{bmatrix} 1 & T_s \\ 0 & 1 \end{bmatrix}, \qquad B = \begin{bmatrix} T_s^2/2 \\ T_s \end{bmatrix}, \qquad C_b = \frac{1}{K}\,[1\ \ 0]$$

where $[1\ \ 0]$ is a matrix and $K$ is the maximum network bandwidth.
The network bandwidth state vector $b$ is defined as the state variable of the estimated discrete-time process, and $\gamma_s$ is taken as the observed output variable, i.e. the predicted value of the network load rate.
The Kalman filter time update equations are shown in formula (5) and formula (6):

$$\hat{b}_{k\mid k-1} = A\,\hat{b}_{k-1\mid k-1} + B\,u_{k-1} \tag{5}$$

$$P_{k\mid k-1} = A\,P_{k-1\mid k-1}\,A^T + Q \tag{6}$$

where $Q$ is the process noise covariance matrix, usually taken as an empirical value. Formulas (5) and (6) describe advancing the network bandwidth state vector estimate and the covariance estimate from time $k-1$ to time $k$.
Letting $\Phi_k = A$ and $B_k = B$, the measurement update is described by formula (7) and formula (8):

$$\hat{b}_{k\mid k} = \hat{b}_{k\mid k-1} + K_k\,\big(\gamma_k - H_k\,\hat{b}_{k\mid k-1}\big) \tag{7}$$

$$P_{k\mid k} = P_{k\mid k-1} - K_k\,H_k\,P_{k\mid k-1} \tag{8}$$

where $\gamma_k$ is the collected network load rate, $H_k = C_b$, and $K_k$ is the Kalman gain, which is calculated by formula (9):

$$K_k = P_{k\mid k-1}\,H_k^T\,\big(H_k\,P_{k\mid k-1}\,H_k^T + R\big)^{-1} \tag{9}$$

where $R$ is the measurement noise covariance matrix, usually an empirical value.
Solving formula (7) yields the optimal estimate of the network bandwidth state vector at time $k$; the network bandwidth state vector estimate at time $k+1$ is then obtained through formula (10), and the predicted network load rate at time $k+1$ is calculated by combining it with formula (4):

$$\hat{b}_{k+1\mid k} = A\,\hat{b}_{k\mid k} + B\,u_k \tag{10}$$

To predict the network load rate, the initial values at time zero of the Kalman filter time update equations (formulas (5) and (6)) and the measurement update equation (formula (9)) must first be given, including the network bandwidth state vector $b_0$, the covariance $P_0$, and the values of $Q$ and $R$. Then, from $\hat{b}_{k-1\mid k-1}$ and $P_{k-1\mid k-1}$ at time $k-1$, formulas (5) and (6) give the network bandwidth state vector estimate $\hat{b}_{k\mid k-1}$ and the covariance estimate $P_{k\mid k-1}$ at time $k$; the predicted network load rate at time $k$ is determined from $\hat{b}_{k\mid k-1}$.
Next, the Kalman gain $K_k$ is calculated according to formula (9); from $K_k$, the result of formula (6) (i.e., $P_{k\mid k-1}$), and formula (8), $P_{k\mid k}$ is obtained. Combining the collected server network load rate $\gamma_k$ with the Kalman gain $K_k$ in formula (7) then yields the optimal estimate $\hat{b}_{k\mid k}$ of the network bandwidth state vector at time $k$.
By updating the covariance estimate $P_{k\mid k}$ at time $k$, the algorithm enters the next cycle and runs recursively. The specific calculation flow is shown in fig. 3.
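As an illustrative sketch (not the patent's implementation), one cycle of the recursion in formulas (5)–(10) can be written as follows, assuming the two-dimensional bandwidth state model with matrices A, B, and C_b; the values of `Ts`, `K_max`, `Q`, and `R` are assumed placeholders:

```python
import numpy as np

Ts, K_max = 1.0, 10e9                       # sampling period and maximum bandwidth (assumed values)
A = np.array([[1.0, Ts], [0.0, 1.0]])       # state transition matrix from formula (3)
B = np.array([[Ts**2 / 2], [Ts]])           # input matrix (second derivative of bandwidth)
H = np.array([[1.0 / K_max, 0.0]])          # C_b from formula (4): state -> load rate
Q = np.eye(2) * 1e-4                        # process noise covariance (empirical)
R = np.array([[1e-4]])                      # measurement noise covariance (empirical)

def kalman_step(b_est, P, gamma_k, u_k=0.0):
    """One cycle: time update (5)-(6), gain (9), measurement update (7)-(8), prediction (10)."""
    b_pred = A @ b_est + B * u_k                                      # formula (5)
    P_pred = A @ P @ A.T + Q                                          # formula (6)
    K_gain = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)       # formula (9)
    b_new = b_pred + K_gain @ (np.array([[gamma_k]]) - H @ b_pred)    # formula (7)
    P_new = P_pred - K_gain @ H @ P_pred                              # formula (8)
    load_pred_next = (H @ (A @ b_new + B * u_k)).item()               # formulas (10) and (4)
    return b_new, P_new, load_pred_next

b0, P0 = np.array([[7.5e9], [0.0]]), np.eye(2)    # initial state vector b0 and covariance P0
b1, P1, next_load = kalman_step(b0, P0, gamma_k=0.75)
```

Each call advances the estimate one sampling period; feeding the updated `b1`, `P1` back in realizes the recursive operation described above.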
Step 220: the scheduling module migrates at least one target in the first server to the second server.
Illustratively, the scheduling module first creates the at least one target in the second server and creates an association of the at least one target with the second server, and then deletes the at least one target in the first server.
Fig. 4A and 4B show a specific process of target thermal migration.
Step 401: in the initial state, a client is logged in to target 1 (target1) on the source (src) node; when the scheduling module determines that the src node is a first server, the migration scheduling process is triggered.
Step 402: the scheduling module determines the destination (dst) node for the migration; the dst node corresponds to a second server.
Step 403: the scheduling module sends an instruction to the dst node to create target 1.
Step 404: the dst node creates a target1 identical to the src node according to the parameters carried by the instruction.
Step 405: the scheduling module sends a redirection instruction to the src node.
Step 406: the src node sets the redirection information of target1 to the dst node and records it in the database of the scheduling module.
Step 407: after the redirection information is successfully set, the src node deletes target1 from itself and, at the same time, actively disconnects from the client.
Step 408: the client discovers the connection is broken and attempts to send a request to the src node to reconnect target 1.
Step 409: the src node, upon receipt, redirects the client request to the dst node according to the redirection information set in step 406.
Step 410: and the client reestablishes connection with the dst node, continues data I/O and finishes target thermal migration.
After the redirection instruction of the scheduling module reaches the source node, the source node parses the instruction and records the information of the source node and the destination node in the database of the scheduling module. After the database transaction is committed and returned, the source node forcibly deletes the information of the migrated target, and the database persists the scheduling log containing the redirection information to disk. After these two steps are completed, the source node replies success to the scheduling module. In this way, target migration is achieved by setting redirection information at the source node so that client requests are redirected to the new node.
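The create-redirect-delete sequence of steps 401-410 can be sketched with a self-contained toy model (all class and method names are hypothetical, not the patent's implementation):

```python
class Node:
    """Toy block-storage node holding iSCSI targets and redirection info."""
    def __init__(self, name):
        self.name = name
        self.targets = {}     # target_id -> parameters
        self.redirects = {}   # target_id -> name of the destination node

def migrate_target(src, dst, target_id):
    params = src.targets[target_id]
    dst.targets[target_id] = params       # steps 403-404: create an identical target on dst
    src.redirects[target_id] = dst.name   # steps 405-406: set redirection information on src
    del src.targets[target_id]            # step 407: delete the target; the client connection drops

def reconnect(target_id, src, nodes):
    # steps 408-410: the client's reconnect attempt reaches src, which
    # redirects it to dst according to the stored redirection information
    return nodes[src.redirects[target_id]]

src, dst = Node("src"), Node("dst")
src.targets["target1"] = {"iqn": "iqn.example:target1"}   # step 401: client uses target1 on src
migrate_target(src, dst, "target1")
new_node = reconnect("target1", src, {"src": src, "dst": dst})
# target1 now exists only on dst, and the reconnect lands the client there
```

The key design choice mirrored here is that the client never needs the new node's address in advance: the old node itself answers the reconnect with a redirect, keeping the migration transparent.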
By adopting this method, the scheduling module acquires and evaluates the network load rate reported by each server; when a first server is determined to exist, the scheduling module starts the scheduling process to migrate at least one target in the first server to a second server. This achieves network load balancing among servers, avoids meaningless target migration, and improves the accuracy of load balancing. The method is suitable for a distributed cluster environment and has good elastic load balancing capability.
In addition, compared with the prior art, the invention has the following technical effects:
1. creatively establishing a state space equation describing the network load condition of the server node, and carrying out prediction estimation on the network load rate of the server based on a Kalman algorithm;
2. extra load balancing hardware equipment is not needed, load balancing is achieved through a pure software algorithm, and cost is reduced;
3. the migration of the target device in the network card load balancing process is completely transparent to the client, namely, the client cannot perceive the migration process, and the continuity of the client service is ensured.
The division of units in the embodiments of the present invention is schematic and is only a logical function division; there may be other division manners in actual implementation. In addition, each functional unit in the embodiments of the present invention may be integrated into one processor or may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
An embodiment of the present invention further provides an apparatus 500, as shown in fig. 5, including: a processing module 510 and a transceiver module 520.
The transceiver module 520 may include a receiving unit and a transmitting unit. The processing module 510 is configured to control and manage the actions of the apparatus 500, and the transceiver module 520 is configured to support communication between the apparatus 500 and other apparatuses. Optionally, the apparatus 500 may further comprise a storage unit for storing program code and data of the apparatus 500.
Alternatively, the modules in the apparatus 500 may be implemented by software.
Alternatively, the processing module 510 may be a processor or a controller, such as a general-purpose Central Processing Unit (CPU), a general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure of the embodiments of the application. The processor may also be a combination of devices implementing a computing function, e.g., one or more microprocessors, or a DSP combined with a microprocessor. The transceiver module 520 may be a communication interface, a transceiver, or a transceiver circuit; "communication interface" is used here in a general sense, and in a specific implementation it may include a plurality of interfaces. The storage unit may be a memory.
The transceiver module 520 is configured to obtain network load rates of N servers at a Kth time, where K and N are positive integers;
a processing module 510 is configured to determine a second server according to the network load rates of the N servers when it is determined that a first server exists according to those network load rates, where the first server is a server whose network load rate at the Kth time is greater than or equal to a first preset threshold and whose predicted network load rate at the (K+1)th time is greater than or equal to the first preset threshold, and the second server is a server whose network load rate at the Kth time is less than a second preset threshold, the second preset threshold being less than or equal to the first preset threshold; and to migrate at least one target in the first server to the second server.
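The selection rule carried out by the processing module (first server over the threshold now and predicted to stay over it at K+1; second server under the lower threshold) can be sketched as follows. The threshold values and all function and variable names are illustrative assumptions:

```python
# Sketch of the first/second-server selection rule described above.
# Threshold values and names are illustrative, not from the patent.

def pick_migration_pair(loads_k, preds_k1, t1=0.8, t2=0.5):
    """loads_k: {server: network load rate at time K};
    preds_k1: {server: predicted network load rate at time K+1};
    t1, t2: first/second preset thresholds, with t2 <= t1.
    Returns (first_server, second_server), or None if no migration
    is needed or no suitable second server exists."""
    # First server: over the first threshold now AND predicted to stay
    # over it at K+1 (this filters out transient load spikes).
    first = next((s for s, load in loads_k.items()
                  if load >= t1 and preds_k1[s] >= t1), None)
    if first is None:
        return None
    # Second server: under the second threshold at time K; among those,
    # take the lowest predicted load at K+1 (one option in claim 2).
    candidates = [s for s, load in loads_k.items() if load < t2]
    if not candidates:
        return None
    second = min(candidates, key=lambda s: preds_k1[s])
    return first, second

loads = {"srv-a": 0.90, "srv-b": 0.30, "srv-c": 0.40}
preds = {"srv-a": 0.85, "srv-b": 0.20, "srv-c": 0.35}
pair = pick_migration_pair(loads, preds)  # -> ("srv-a", "srv-b")
```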
Another apparatus 600 is provided in the embodiment of the present invention, as shown in fig. 6, including:
a communication interface 601, a memory 602, and a processor 603;
The apparatus 600 communicates with other devices through the communication interface 601, for example to receive and send messages; the memory 602 is configured to store program instructions; and the processor 603 is configured to call the program instructions stored in the memory 602 and execute the method according to the obtained program.
The communication interface 601 is used for acquiring the network load rates of N servers at the Kth moment, wherein K and N are positive integers;
the processor 603 invokes program instructions stored in the memory 602 to perform: when a first server is determined to exist according to the network load rates of the N servers, determining a second server according to the network load rates of the N servers, wherein the first server is a server of which the network load rate at the Kth moment is greater than or equal to a first preset threshold and the network load rate predicted value at the K +1 th moment is greater than or equal to the first preset threshold, and the second server is a server of which the network load rate at the Kth moment is less than a second preset threshold, wherein the second preset threshold is less than or equal to the first preset threshold; migrating at least one target in the first server to the second server.
In the embodiment of the present invention, the specific connection medium among the communication interface 601, the memory 602, and the processor 603 is not limited; it may be, for example, a bus, and the bus may be divided into an address bus, a data bus, a control bus, and the like.
In the embodiments of the present invention, the processor may be a general purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in the processor.
In the embodiment of the present invention, the memory may be a nonvolatile memory, such as a Hard Disk Drive (HDD) or a Solid-State Drive (SSD), or a volatile memory, such as a Random-Access Memory (RAM). The memory may also be, but is not limited to, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory in the embodiments of the present invention may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
Embodiments of the present invention also provide a computer-readable storage medium, which includes program code for causing a computer to perform the steps of the method provided above in the embodiments of the present invention when the program code runs on the computer.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A target migration method, comprising:
acquiring network load rates of N servers at the Kth moment, wherein K and N are positive integers;
when a first server is determined to exist according to the network load rates of the N servers, determining a second server according to the network load rates of the N servers, wherein the first server is a server of which the network load rate at the Kth moment is greater than or equal to a first preset threshold and the network load rate predicted value at the K +1 th moment is greater than or equal to the first preset threshold, and the second server is a server of which the network load rate at the Kth moment is less than a second preset threshold, wherein the second preset threshold is less than or equal to the first preset threshold;
migrating at least one target in the first server to the second server.
2. The method according to claim 1, wherein the second server is a server whose network load rate at the Kth time is smaller than the second preset threshold and whose predicted value of the network load rate at the (K+1)th time is the lowest;
or, the second server is a server whose network load rate at the Kth time is smaller than the second preset threshold and whose predicted value of the network load rate at the (K+1)th time is smaller than a third preset threshold, where the third preset threshold is smaller than or equal to the second preset threshold.
3. The method of claim 1 or 2, further comprising:
determining M candidate servers according to the network load rates of the N servers, wherein the candidate servers are servers with network load rates larger than or equal to the first preset threshold value, and M is a positive integer;
determining network load rate predicted values respectively corresponding to the M candidate servers at the (K+1)th moment according to the network load rates at the Kth moment and the network load rate estimation models respectively corresponding to the M candidate servers;
and taking the server of which the network load rate predicted value is greater than or equal to the first preset threshold value in the M candidate servers as the first server.
4. The method of claim 3, wherein the network load rate estimation model is determined according to a discrete Kalman algorithm.
5. The method of claim 1 or 2, wherein migrating at least one target in the first server to the second server comprises:
creating the at least one target in the second server, and creating an association relationship between the at least one target and the second server;
deleting the at least one target in the first server.
6. An object migration apparatus, comprising:
the acquisition unit is used for acquiring the network load rates of N servers at the Kth moment, and K and N are positive integers;
the processing unit is configured to determine a second server according to the network load rates of the N servers when it is determined that a first server exists according to the network load rates of the N servers, wherein the first server is a server whose network load rate at the Kth moment is greater than or equal to a first preset threshold and whose predicted value of the network load rate at the (K+1)th moment is greater than or equal to the first preset threshold, and the second server is a server whose network load rate at the Kth moment is less than a second preset threshold, wherein the second preset threshold is less than or equal to the first preset threshold; and to migrate at least one target in the first server to the second server.
7. The apparatus according to claim 6, wherein the second server is a server whose network load rate at the Kth time is smaller than the second preset threshold and whose predicted value of the network load rate at the (K+1)th time is the lowest;
or, the second server is a server whose network load rate at the Kth time is smaller than the second preset threshold and whose predicted value of the network load rate at the (K+1)th time is smaller than a third preset threshold, where the third preset threshold is smaller than or equal to the second preset threshold.
8. The apparatus according to claim 6 or 7, wherein the processing unit is further configured to determine M candidate servers according to the network load rates of the N servers, where the candidate servers are servers whose network load rates are greater than or equal to the first preset threshold, and M is a positive integer; determine network load rate predicted values respectively corresponding to the M candidate servers at the (K+1)th moment according to the network load rates at the Kth moment and the network load rate estimation models respectively corresponding to the M candidate servers; and take, as the first server, a server among the M candidate servers whose network load rate predicted value is greater than or equal to the first preset threshold.
9. The apparatus of claim 8, wherein the network load rate estimation model is determined according to a discrete Kalman algorithm.
10. The apparatus according to claim 6 or 7, wherein the processing unit is configured to, when migrating at least one target in the first server to the second server, create the at least one target in the second server and create an association relationship between the at least one target and the second server; deleting the at least one target in the first server.
CN202111643792.0A 2021-12-29 2021-12-29 Target migration method and device Active CN114448897B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111643792.0A CN114448897B (en) 2021-12-29 2021-12-29 Target migration method and device


Publications (2)

Publication Number Publication Date
CN114448897A true CN114448897A (en) 2022-05-06
CN114448897B CN114448897B (en) 2024-01-02

Family

ID=81366041

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111643792.0A Active CN114448897B (en) 2021-12-29 2021-12-29 Target migration method and device

Country Status (1)

Country Link
CN (1) CN114448897B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102480502A * 2010-11-26 2012-05-30 Lenovo (Beijing) Ltd I/O load equilibrium method and I/O server
CN103218261A * 2013-03-12 2013-07-24 Zhejiang University Dynamic migrating method of virtual machine based on performance prediction
US20150036504A1 * 2013-07-31 2015-02-05 Oracle International Corporation Methods, systems and computer readable media for predicting overload conditions using load information
US20150195173A1 * 2014-01-09 2015-07-09 International Business Machines Corporation Physical Resource Management
CN106790381A * 2016-11-21 2017-05-31 Zhejiang Sci-Tech University Dynamic feedback load balancing method based on weighted least connections
CN107562512A * 2016-07-01 2018-01-09 Huawei Technologies Co Ltd Method, apparatus and system for migrating a virtual machine
US20190136919A1 * 2016-04-26 2019-05-09 Wpt Power Corporation Rapid Onset Overload Prediction and Protection
CN111381928A * 2018-12-28 2020-07-07 ZTE Corporation Virtual machine migration method, cloud computing management platform and storage medium
CN113591322A * 2021-08-11 2021-11-02 Guangxi University Low-voltage transformer area line loss rate prediction method based on extreme gradient boosting decision tree
WO2021237826A1 * 2020-05-28 2021-12-02 Wangsu Science and Technology Co Ltd Traffic scheduling method, system and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HANEN CHIHI; KHALED GHEDIRA: "Self-configuration model based neural prediction and agent technology for cloud infrastructure", IEEE *
夏榆杭; 滕欢; 冯超: "Research on power grid dispatching relationship transfer based on an improved BP network", Electrical Applications (电气应用), no. 17 *

Also Published As

Publication number Publication date
CN114448897B (en) 2024-01-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant