CN112527470B - Model training method and device for predicting performance index and readable storage medium


Info

Publication number
CN112527470B
CN112527470B (application CN202110036748.7A)
Authority
CN
China
Prior art keywords
gate layer
model
virtual device
time
migration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110036748.7A
Other languages
Chinese (zh)
Other versions
CN112527470A (en)
Inventor
臧云峰
安柯
徐蓉
周麟辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Yovole Computer Network Co ltd
Shanghai Youfu Zhishu Yunchuang Digital Technology Co ltd
Original Assignee
Shanghai Yovole Computer Network Co ltd
Shanghai Youfu Zhishu Yunchuang Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Yovole Computer Network Co ltd, Shanghai Youfu Zhishu Yunchuang Digital Technology Co ltd filed Critical Shanghai Yovole Computer Network Co ltd
Priority to CN202110036748.7A priority Critical patent/CN112527470B/en
Publication of CN112527470A publication Critical patent/CN112527470A/en
Application granted granted Critical
Publication of CN112527470B publication Critical patent/CN112527470B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/4557Distribution of virtual machine instances; Migration and load balancing
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention discloses a model training method and device for predicting performance indexes, and a readable storage medium. The training method comprises the following steps: collecting the performance indexes of the virtual device to be migrated over N periods; obtaining an input/output data set for training according to the performance indexes; inputting the training input/output data set into a preset first model for training; and, when training reaches a first preset condition, ending training and obtaining the optimal performance parameters of the model. The preset first model is a model based on a time-recurrent neural network comprising a forgetting gate layer, an input gate layer and an output gate layer: the forgetting gate layer determines the information to be discarded, the input gate layer determines the information to be updated, and the final state is updated through the output of the output gate layer. The invention can shorten the interruption time in the existing virtual device migration process.

Description

Model training method and device for predicting performance index and readable storage medium
Technical Field
This application is a divisional application of the Chinese application filed on May 27, 2020 with application number 202010460191.5, entitled "Virtual device optimal idle time migration method, device and readable storage medium".
The present invention relates to the field of virtualization technologies, and in particular, to a model training method and apparatus for predicting performance indicators, and a readable storage medium.
Background
Migration of a system refers to moving the operating system and applications from a source host to a destination host and enabling them to operate normally on the destination host. Before virtual devices existed, migration between physical machines relied on system backup and restore techniques: the state of the operating system and applications was backed up on the source host in real time, the storage medium was then connected to the target host, and finally the system was restored on the target host. With the development of virtual device technology, system migration has become more flexible and diversified. Virtual device migration provides a simple approach to server virtualization, and the migration modes of a virtual device include static migration (offline migration) and dynamic migration (online migration). Static migration moves the virtual device from one physical machine to another while the virtual device is shut down or suspended; the migration process in this mode requires stopping the operation of the virtual device, so from the user's point of view there is a definite period of downtime during which services on the virtual device are unavailable. Dynamic migration moves a virtual device system from one physical host to another while ensuring normal operation of the services on the virtual device; the process has no obvious impact on end users, so an administrator can perform offline maintenance or upgrades on the physical server without affecting normal use. Although live migration, compared with static migration, can ensure the availability of virtual device services during migration, it is still difficult to avoid some downtime in the migration process.
Disclosure of Invention
The invention aims to provide a model training method and device for predicting performance indexes, and a readable storage medium, so as to further shorten the interruption time in the existing virtual device migration process.
In order to solve the technical problems, the invention is realized by the following technical scheme:
the invention provides a model training method for predicting performance indexes, which at least comprises the following steps:
s1, collecting performance indexes of virtual equipment to be migrated in N periods;
s2, acquiring an input and output data set for training according to the performance index;
s3, inputting the training input/output data set into a preset first model for training;
s4, finishing training and obtaining optimal performance parameters of the model when the training reaches a first preset condition;
the preset first model is a model based on a time-recurrent neural network comprising a forgetting gate layer, an input gate layer and an output gate layer; the forgetting gate layer determines the information to be discarded, the input gate layer determines the information to be updated, and the final state is updated through the output of the output gate layer. Each gate layer in the preset first model outputs a number between 0 and 1 that describes how much of the data information may pass through that gate layer, where 0 indicates that no data information passes through and 1 indicates that all data information passes through, so that each gate layer screens the data information to be passed.
In one embodiment of the invention, the method further comprises preprocessing the performance indexes in the historical N periods, wherein the preprocessing comprises performing a stationarity test on the data and, if the data are not stationary, performing stationarization; after the data are stationary, performing a white noise test on the data to obtain the input/output data set for training; the stationarity test uses a unit root test method or a time sequence diagram drawn from the performance indexes; the white noise test uses the LB (Ljung-Box) statistic method on the differenced data.
In one embodiment of the present invention, the output data of the forgetting gate layer is obtained by the following formula:
ft = σ(Wf·[ht-1, xt] + bf)
wherein ft is the output value of the forgetting gate layer, σ is the sigmoid function, Wf is the weight matrix of the forgetting gate layer, t is the current time, ht-1 is the performance index corresponding to time node t-1, xt is the currently observed performance index, and bf is a bias term; the forgetting gate layer is used for determining how much information of the previous moment can be retained to the current moment.
In one embodiment of the present invention, the preset first model creates an updated value through the input gate layer: the input gate layer determines the values to be updated, a candidate vector is created, and the two vectors are combined to create the updated state value, according to the formulas:
it = σ(Wi·[ht-1, xt] + bi)
Ĉt = tanh(WC·[ht-1, xt] + bC)
Ct = ft*Ct-1 + it*Ĉt
wherein it is the output value of the input gate layer, Wi is the weight matrix of the input gate layer, bi is a bias term, Ĉt is the candidate vector, tanh is the tanh function, WC is a weight (vector) parameter, bC is an adjustable parameter, Ct is the updated state value corresponding to time t of the model, and Ct-1 is the historical state value corresponding to time t-1 of the model;
the preset first model determines the value to be output through the output gate layer: the updated state value corresponding to time t of the model is normalized to between -1 and 1 through the tanh function and multiplied by the output value of the output gate layer to obtain the output value of the preset first model, according to the formulas:
Ot = σ(WO·[ht-1, xt] + bO)
ht = Ot*tanh(Ct)
wherein Ot is the output value of the output gate layer, WO is the weight matrix of the output gate layer, bO is a bias term, and ht is the output value of the model.
The invention provides an optimal idle time migration method of virtual equipment, which at least comprises the following steps:
obtaining predicted performance indexes in future M periods of the virtual equipment to be migrated by using a model for predicting the performance indexes;
obtaining the optimal idle migration time of the virtual equipment to be migrated according to the predicted performance index;
setting a timing scheduling task according to the optimal idle migration time;
According to the timing scheduling task, completing automatic migration of the virtual equipment to be migrated;
the model for predicting the performance index is obtained by training according to the model training method for predicting the performance index.
In one embodiment of the present invention, the method for automatic migration at least includes the following steps:
r1, providing a source host and a target host;
r2, configuring a first virtual device in the source host, configuring a second virtual device in the target host, and checking a virtual device migration environment;
r3, configuring a memory access tracking recorder outside the first virtual device of the source host, wherein the memory access tracking recorder is used for monitoring the use condition of the memory blocks in the first virtual device;
r4, dividing the memory into idle memory blocks and active memory blocks by adopting a memory classification block division algorithm according to the use condition of the memory blocks, and copying all the idle memory blocks from the first virtual device to the second virtual device;
r5. copying the remaining active memory blocks from the first virtual device to the second virtual device, and completing the automatic migration of the first virtual device.
In one embodiment of the invention, the method further comprises:
configuring a virtual device on each of a target host and a source host;
preparing a migration environment, the migration environment comprising: whether the target host and the source host are in a storage system with network sharing, whether the types of central processing units of the target host and the source host are the same, whether the operating system version of the target host accords with the migration environment of the source virtual device, whether the name of the second virtual device is the same as that of the first virtual device, whether the memory size of the target host is the same as that of the source host, and whether the network transmission speed is suitable for live migration (hot migration) of the virtual device.
In one embodiment of the invention, the method further comprises:
the source host sends a specified file packet to the server, then sends a file packet whose size is an integer multiple of that of the specified file packet to the server, and records the times required by the two sending actions as a first sending time and a second sending time respectively; the network speed is acquired according to the first sending time and the second sending time, and the size of the memory block is determined according to the network speed.
The invention provides a model training device for predicting performance indexes, which comprises:
The data acquisition module is used for acquiring performance indexes of the virtual equipment to be migrated in the history N periods;
the input/output data set acquisition module is used for acquiring an input/output data set for training according to the performance index;
the training module is used for inputting the training input/output data set into a preset first model for training; ending training and obtaining optimal performance parameters of the model when the training reaches a first preset condition;
the preset first model is a model based on a time-recurrent neural network comprising a forgetting gate layer, an input gate layer and an output gate layer; the forgetting gate layer determines the information to be discarded, the input gate layer determines the information to be updated, and the final state is updated through the output of the output gate layer. Each gate layer in the preset first model outputs a number between 0 and 1 that describes how much of the data information may pass through that gate layer, where 0 indicates that no data information passes through and 1 indicates that all data information passes through, so that each gate layer screens the data information to be passed.
The present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a model training method for predicting performance indicators as set forth in any one of the above or a virtual device optimal idle time migration method as set forth in any one of the above.
According to the method, the most idle time of the source host is first predicted automatically from its historical usage; the period in which the user uses it the least is taken as the optimal idle migration time, and the virtual device is migrated during this optimal idle migration time, so the user's use is ensured to the greatest extent and the influence of the migration process on the user is avoided. After the optimal idle time of the source host is predicted, the system can be set so that the source host automatically starts the migration when the system time reaches the predicted optimal idle migration time. In the automatic migration process, the usage of the memory blocks is monitored by a memory access tracking recorder. According to this usage, the memory is divided into two parts of the same size, marked respectively as an idle memory block and an active memory block; the idle memory block is copied from the first virtual device of the source host to the second virtual device of the target host, while the active memory block is again divided into an idle memory block and an active memory block of the same size. The steps of copying the idle memory block and dividing the active memory block are repeated until the size of the remaining active memory block is less than or equal to the minimum memory block threshold. The active memory blocks in the active memory block set are then sorted by their most recent access frequency and copied from the first virtual device to the second virtual device according to the sorting result; when the last access time of all remaining active memory blocks is within the minimum threshold time, copying stops and the migration process of the source host is completed. The interruption of the virtual device on the source host is thereby kept to a minimum, so the user's use is essentially unaffected.
Of course, it is not necessary for any one product to practice the invention to achieve all of the advantages set forth above at the same time.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an application of virtual device data migration according to the present invention;
FIG. 2 is a schematic diagram of another application of virtual device data migration according to the present invention;
FIG. 3 is a flow chart of a method for virtual device optimal idle time migration method according to the present invention;
FIG. 4 is a graph of performance indicators and a graph of predicted performance indicators in FIG. 1;
FIG. 5 is a flow chart of the method of automatic migration of FIG. 1;
FIG. 6 is a diagram illustrating virtual device data migration in accordance with the present invention;
FIGS. 7-11 are schematic diagrams illustrating the method of automatic migration in FIG. 5;
FIG. 12 is a block diagram of a virtual device migration apparatus according to the present invention;
FIG. 13 is a block diagram of the data processing module of FIG. 12;
FIG. 14 is a block diagram of a virtual device migration apparatus according to the present invention;
Fig. 15 is a block diagram of the data processing module of fig. 14.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1 to 2, the first scenario in which migration of a virtual device is applied is maintenance, fault repair and upgrading of the hardware system of a physical machine: the virtual devices running on the physical machine cannot simply be shut down, because users are running applications on them. In addition, when the software system of the physical machine is upgraded or patched, the virtual devices running on it must not be affected, so the virtual devices need to be migrated to another physical machine before the upgrade or patch. Furthermore, when the load on a physical machine is too heavy, some virtual devices need to be moved off it to free up resources. In a cross-domain environment, physical machines in some domains carry many virtual devices while those in other domains carry few, so resource balancing is needed. The migration process of a virtual device tends to require an interruption time, during which the user's use is also affected.
Referring to fig. 3 to 4, the present invention provides an optimal idle time migration method for a virtual device. The migration method is as follows: according to the historical usage of the source host, the most idle time period of the source host 1 is automatically predicted, and the period in which the user uses it the least is taken as the optimal idle migration time; the virtual device is then migrated during the optimal idle migration time, which ensures the user's use to the greatest extent and avoids the influence of the migration process on the user. After the optimal idle time of the source host 1 is predicted, the system can be set so that the virtual device on the source host 1 automatically starts to migrate when the system time reaches the predicted optimal idle migration time.
Referring to fig. 5, when migration starts, the memory of the virtual device on the source host 1 is first partitioned, and a memory access tracking recorder is arranged outside the virtual device in the source host 1 to monitor memory usage. According to this usage, the memory is divided into two parts of the same size, marked respectively as an idle memory block and an active memory block. The idle memory block is then copied from the virtual device of the source host 1 to the virtual device of the target host 2, while the active memory block is again divided into an idle memory block and an active memory block of the same size. The steps of copying the idle memory block and dividing the active memory block are repeated until the size of the remaining active memory block is less than or equal to a minimum memory block threshold value, for example the size of a system memory page, and all idle memory blocks have been copied. The active memory blocks in the active memory block set are then sorted by their most recent access frequency, and the least active memory blocks are copied in turn, according to the sorting result, from the virtual device of the source host 1 to the virtual device of the target host 2; when the last access time of all remaining active memory blocks is within the minimum threshold time, copying stops and the source host 1 is suspended. Because the user may still perform some operations on the source host 1 during the migration of the virtual device, the contents of memory may change; at this time, the memory addresses changed after having been copied are obtained from the memory access tracking recorder, divided into memory blocks according to the minimum memory block threshold value, and merged into the active memory block set. The remaining active memory blocks are synchronized to the virtual device of the target host 2, the service is switched to the target host 2, and the source host 1 is stopped, completing the migration of the virtual device. At the same time, the interruption time of the source host 1 is kept to a minimum, so the user completes the migration of the virtual device of the source host 1 with essentially no effect on use.
Referring to fig. 3 and 4, the optimal idle time migration method of the present invention at least includes the following steps:
s1, collecting performance indexes of virtual equipment to be migrated in N periods;
s2, acquiring an input and output data set for training according to the performance index;
s3, inputting the training input/output data set into a preset first model for training;
s4, finishing training and obtaining optimal performance parameters of the model when the training reaches a first preset condition;
s5, obtaining predicted performance indexes of the virtual equipment to be migrated in the future M periods by using the model with the optimal performance parameters;
s6, obtaining the optimal idle migration time of the virtual equipment to be migrated according to the prediction performance index;
s7, setting a timing scheduling task according to the optimal idle migration time;
s8, according to the timing scheduling task, completing automatic migration of the virtual equipment to be migrated.
Referring to fig. 3, in step S1, performance indexes of the virtual device to be migrated in N periods are collected by a collection module, where the performance indexes include one or more of: the central processing unit occupancy rate of the virtual device to be migrated, the memory occupancy rate of the virtual device to be migrated, the input/output interface utilization rate, network traffic, network delay time, the number of concurrent connections, the number of packets sent, and the like.
Referring to fig. 3, in step S2, the performance indexes in the historical N periods are preprocessed, where the preprocessing comprises a stationarity test and a white noise test on the data, to obtain the training input/output data set. Specifically, in an embodiment of the present invention, the data may be subjected to a stationarity test and, if not stationary, to a stationarization process; after the data are stationary, a white noise test is performed on them. The stationarity test can use a unit root test method or a time sequence chart drawn from the performance indexes. The white noise test can use the LB (Ljung-Box) statistic method on the differenced data. A specific implementation is as follows: select performance index data X1, X2, X3, ..., Xn-1, Xn; perform first-order differencing to obtain X2-X1, X3-X2, ..., Xn-Xn-1; perform white noise detection on the stationarized data using the lbqtest function in MATLAB, whose output h indicates the test result (in MATLAB, h = 1 means the null hypothesis of no autocorrelation is rejected).
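A minimal Python sketch of this preprocessing, assuming a recent statsmodels as a stand-in for MATLAB's lbqtest and with an illustrative significance level and lag choice, might look as follows:

    import numpy as np
    from statsmodels.tsa.stattools import adfuller
    from statsmodels.stats.diagnostic import acorr_ljungbox

    def build_training_series(x, alpha=0.05):
        """x: 1-D array of one performance index sampled over N periods."""
        x = np.asarray(x, dtype=float)
        # Unit-root (ADF) stationarity test: the null hypothesis is "the series
        # has a unit root" (non-stationary); a large p-value -> difference once.
        if adfuller(x)[1] >= alpha:
            x = np.diff(x)
        # Ljung-Box white-noise test: the null hypothesis is "no autocorrelation";
        # a small p-value means the differenced series still carries structure.
        p_lb = acorr_ljungbox(x, lags=[10])["lb_pvalue"].iloc[0]
        return x, bool(p_lb < alpha)   # (training series, has-structure flag)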
Referring to fig. 3, in step S3, the training input/output data set is input into the preset first model for training. The preset first model can be a model based on a time-recurrent neural network, which comprises a forgetting gate layer, an input gate layer and an output gate layer: the forgetting gate layer determines the information to be discarded, the input gate layer determines the information to be updated, and the final state is updated through the output of the output gate layer. The preset first model has the capability of deleting and adding information; each gate layer outputs a number between 0 and 1 that describes how much of the data information may pass through that gate layer, where 0 indicates that no data information passes through and 1 indicates that all data information passes through, so the data information to be passed can be screened by each gate layer. The training of the preset first model on the input/output data set specifically comprises the following steps. First, the performance index of the previous time node and the currently observed performance index are input into the forgetting gate layer, which determines the data information to be discarded and outputs a value between 0 and 1, where 1 represents complete retention and 0 represents complete deletion. The output data of the forgetting gate layer is obtained as follows:
ft = σ(Wf·[ht-1, xt] + bf)
wherein ft is the output value of the forgetting gate layer, σ is the sigmoid function, Wf is the weight matrix of the forgetting gate layer, t is the current time, ht-1 is the performance index corresponding to time node t-1, xt is the currently observed performance index, and bf is a bias term.
By the above equation, it can be determined how much information at the previous time can be retained at the current time.
Then, the updated value is created through the input gate layer. The process is divided into two steps: the input gate layer determines the values to be updated and a candidate vector is created; the two vectors are then combined to create the updated value.
it = σ(Wi·[ht-1, xt] + bi)
Ĉt = tanh(WC·[ht-1, xt] + bC)
Ct = ft*Ct-1 + it*Ĉt
wherein it is the output value of the input gate layer, Wi is the weight matrix of the input gate layer, bi is a bias term, Ĉt is the candidate vector, tanh is the tanh function, WC is a weight (vector) parameter, bC is an adjustable parameter, Ct is the updated state value corresponding to time t of the model, and Ct-1 is the historical state value corresponding to time t-1 of the model.
Finally, the value to be output is determined through the output gate layer: the updated state value corresponding to time t of the model is normalized to between -1 and 1 through the tanh function and multiplied by the output value of the output gate layer, thereby obtaining the required output value of the model.
Ot = σ(WO·[ht-1, xt] + bO)
ht = Ot*tanh(Ct)
wherein Ot is the output value of the output gate layer, WO is the weight matrix of the output gate layer, bO is a bias term, and ht is the output value of the model.
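The gate equations above can be illustrated by a minimal NumPy sketch of a single time step of such a cell (a standard LSTM-style cell); the weight shapes, the concatenated-input convention and the function names are assumptions used only for illustration:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def cell_step(x_t, h_prev, C_prev, W_f, b_f, W_i, b_i, W_C, b_C, W_o, b_o):
        z = np.concatenate([h_prev, x_t])     # [ht-1, xt]
        f_t = sigmoid(W_f @ z + b_f)          # forgetting gate: what to discard
        i_t = sigmoid(W_i @ z + b_i)          # input gate: what to update
        C_tilde = np.tanh(W_C @ z + b_C)      # candidate vector
        C_t = f_t * C_prev + i_t * C_tilde    # updated state value Ct
        o_t = sigmoid(W_o @ z + b_o)          # output gate
        h_t = o_t * np.tanh(C_t)              # model output ht at time t
        return h_t, C_t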
Referring to fig. 3, in step S4, when the training process reaches the first preset condition, training is ended and the best performance parameters of the model are obtained; the best performance parameters may correspond to the state of the source host 1 with the lowest utilization rate, such as the lowest central processing unit occupancy rate of the virtual device to be migrated, the lowest memory occupancy rate of the virtual device to be migrated, the lowest input/output interface utilization rate, and the like. The first preset condition is the optimal number of complete passes over the input/output data set needed to obtain a good neural network model: if the number of training passes is too small, the training process may underfit, that is, learn insufficiently from the input data; if it is too large, the training process may overfit, that is, fit the "noise" in the input data instead of the signal. The updated effective performance parameters are considered to be the optimal performance parameters only when the preset appropriate number of training passes is reached. Training to the first preset condition removes the need to manually set the number of complete passes over the preset first model, and can be regarded as a regularization method that avoids overfitting during training, thereby preventing non-convergence caused by overfitting and an excessive learning rate.
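One way such a "first preset condition" can be realized is validation-based early stopping, sketched below; train_one_epoch, validate and get_params are hypothetical helpers supplied by the caller, and the patience and epoch limits are illustrative values, not values from the patent:

    def train_until_condition(model, train_data, val_data,
                              train_one_epoch, validate, get_params,
                              max_epochs=200, patience=10):
        best_loss, best_params, stale = float("inf"), None, 0
        for _ in range(max_epochs):
            train_one_epoch(model, train_data)   # one complete pass over the data
            loss = validate(model, val_data)     # validation loss for this pass
            if loss < best_loss:
                best_loss, best_params, stale = loss, get_params(model), 0
            else:
                stale += 1
                if stale >= patience:            # "first preset condition" reached
                    break
        return best_params                       # the optimal performance parameters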
Referring to fig. 3 and fig. 4, in some embodiments, in step S5 and step S6, the model with the optimal performance parameters is used to obtain the predicted performance indexes of the virtual device to be migrated in M future periods, and the optimal idle migration time of the virtual device to be migrated is obtained according to the predicted performance indexes. According to the predicted performance indexes, the virtual device of the source host 1 can be considered to be in the most idle state at certain time points, that is, the user's utilization of the virtual device of the source host 1 at those time points is estimated from the historical data to be the lowest; migrating the virtual device at those time points ensures the user's use to the greatest extent and reduces the influence of the migration process on the user.
Referring to fig. 3, in step S7 and step S8, a timing task is set according to the optimal idle migration time, and the timing task is timed according to the predicted optimal idle migration time, and when the system time reaches the optimal idle migration time, automatic migration of the virtual device of the source host 1 is started.
Referring to fig. 4, in an embodiment of the present invention, according to the historical data of the virtual device to be migrated, the data of the indicators such as CPU, memory, IO, etc. of the virtual device in the future time may be predicted by a preset first model algorithm on the big data analysis platform, so as to obtain a predicted graph of the data migration time of the virtual device, as shown in fig. 4, and the automatic migration of the virtual device to be migrated is started at the optimal idle migration time point therein.
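A hedged sketch of how steps S5 and S6 might be realized is shown below: the trained model is rolled forward for M future periods and the period with the lowest predicted (weighted) load is chosen as the optimal idle migration time; predict_next and the weighting scheme are assumptions for illustration only:

    import numpy as np

    def optimal_idle_period(history, m, predict_next, weights=None):
        """history: list of recent indicator vectors (CPU, memory, IO, ...)."""
        window = list(history)
        predictions = []
        for _ in range(m):
            y = predict_next(window)        # predicted indicator vector for the next period
            predictions.append(y)
            window = window[1:] + [y]       # slide the window forward with the prediction
        load = np.array([np.average(np.asarray(p, dtype=float), weights=weights)
                         for p in predictions])
        return int(np.argmin(load))         # index of the most idle of the M future periods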
Referring to fig. 5, the method for virtual device optimal idle time migration provided by the present invention at least includes the following steps:
r1, providing a source host 1 and a target host 2;
r2, configuring a first virtual device 11 in the source host 1, configuring a second virtual device 21 in the target host 2, and checking the migration environment of the virtual devices in advance;
r3, configuring a memory access tracking recorder outside the first virtual device 11 of the source host 1, wherein the memory access tracking recorder is used for monitoring the use condition of a memory block in the first virtual device 11;
r4, according to the use condition of the memory blocks, dividing the memory into idle memory blocks 13 and active memory blocks 14 by adopting a memory classification block division algorithm, and copying all the idle memory blocks 13 from the first virtual device 11 to the second virtual device 21;
and R5, copying the remaining active memory block 14 from the first virtual device 11 to the second virtual device 21, and completing the automatic migration of the first virtual device 11.
Referring to fig. 6, in step R2, a virtual device system is first installed on the provided source host 1. The virtualization software may be, for example, VMware Workstation Pro, an industry-standard tool for running multiple operating systems as virtual machines (VMs) on a single Linux or Windows PC. An OpenStack cloud platform may be installed on the virtual device; OpenStack is a free and open-source project initiated jointly by NASA and Rackspace and licensed under the Apache license. The virtual device and the host system network are configured so that they can communicate. More specifically, the memory of the virtual device is required to be larger than 4 GB and the hard disk larger than 30 GB to meet the requirements of the OpenStack environment; the virtual device network selects a bridged network so the host machine can access the virtual device (a NAT network is only one-way). In this embodiment, the host machine uses a wifi connection, so the wifi network card must be selected; the system is installed and the network is configured afterwards. For example, if the host network card information is checked with ifconfig and the host ip is found to be 192.168.199.128, the virtual device must be in the 192.168.199.xxx network segment for the two to communicate. In this embodiment, the ip of the virtual device is checked; if its network segment is the same as the host's, connectivity can be tested with ping, and if the virtual device can also ping the external network, the network is proved to be interconnected.
Referring to fig. 5, in this embodiment, a virtual device is configured on each of the target host 2 and the source host 1 before the copying step of step R2 is performed; the migration of the virtual device is essentially the process of migrating the memory of the virtual device on the source host 1 to the virtual device on the target host 2. The preparation before migration further comprises preparing the migration environment, because whether the migration of the virtual device can be completed successfully, and how long it takes, are directly influenced by the migration environment; the migration environment is therefore ensured to be suitable before the virtual device is migrated. The checked items can comprise: whether the target host 2 and the source host 1 are in a network-shared storage system, whether the central processing unit types of the target host 2 and the source host 1 are the same, whether the operating system version of the target host 2 conforms to the migration environment of the source virtual device, whether the name of the second virtual device 21 is the same as the name of the first virtual device 11, whether the memory size of the target host 2 is the same as the memory size of the source host 1, whether the network transmission speed is suitable for live migration (hot migration) of the virtual device, and the like.
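For illustration, a pre-migration environment check mirroring the list above might look like the sketch below; the host descriptors and field names are assumptions rather than an interface defined by the patent:

    def check_migration_environment(src, dst):
        """src/dst: dicts describing the source and target hosts (assumed fields)."""
        checks = {
            "shared_network_storage": src["storage_pool"] == dst["storage_pool"],
            "same_cpu_type":          src["cpu_model"] == dst["cpu_model"],
            "os_version_compatible":  dst["os_version"] in src["compatible_target_os"],
            "same_vm_name":           src["vm_name"] == dst["vm_name"],
            "same_memory_size":       src["memory_mb"] == dst["memory_mb"],
            "network_speed_ok":       dst["net_speed_mbps"] >= src["min_migration_mbps"],
        }
        return all(checks.values()), checks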
Referring to fig. 5, specifically, for example, in step R2, the size of the memory block to be transferred may be determined through a network speed test. The network speed test in this embodiment may be triggered by the user in a specified manner, for example by entering a test link or scanning a specified two-dimensional code, or it may be entered by default before other functions are performed, for example before the user browses a web page. The background of the source host 1 sends a specified file packet to the server and then sends a file packet whose size is an integer multiple of the specified file packet, recording the times required by the two sending actions as a first sending time and a second sending time respectively. The specified file packet may be, for example, a picture or a test document stored in the electronic terminal. An HTTP request procedure is divided into a domain name system (Domain Name System, DT) request, a transmission control protocol (Transmission Control Protocol, CT) connection, a round-trip time (RT) delay, and traffic time. The round-trip delay is the total time from when the sending end starts sending data until it receives an acknowledgement from the receiving end (the receiving end sends the acknowledgement immediately after receiving the data). Therefore, the first sending time and the second sending time recorded for sending the file packets to the server each include the total duration spent on the domain name system request, the transmission control protocol connection, the round-trip delay and the traffic time. The integer multiple of the size of the specified file packet can be set as required; for example, a file packet twice the size of the specified file packet may be used. Further, the size of the larger file packet does not exceed the limit of the maximum transmission unit (Maximum Transmission Unit, MTU), that is, the maximum packet size (in bytes) that can be passed over the corresponding layer of a communication protocol; in this embodiment, it is the maximum packet size that can be passed on the corresponding layer of the communication protocol used to send the file packet. The current network speed of the source host 1 is then calculated from the first sending time and the second sending time. As long as the size of the file packet sent to the server is not greater than the MTU, the DT request, the CT connection and the round-trip delay RT in the time taken by each sending action can be considered approximately the same; therefore, the difference between the first sending time and the second sending time is the difference between the traffic times of the two sending actions, and the traffic time is determined by the size of the sent file packet and the bandwidth.
Referring to fig. 5, in one example, the method for obtaining the current network speed of the source host 1 according to the first sending time and the second sending time may include:
the first sending time is: T1 = DT + CT + RT + PS/BW;
the second sending time is: T2 = DT + CT + RT + n·PS/BW;
T2 − T1 = (n − 1)·PS/BW;
BW = (n − 1)·PS/(T2 − T1);
network speed S = 125 × BW;
where DT represents the domain name system request time, CT represents the transmission control protocol connection time, RT is the round-trip delay, PS is the size of the specified file packet sent, BW represents the bandwidth, and n is the integer multiple used for the larger file packet. It follows that the bandwidth BW can be calculated from the first sending time and the second sending time, so that the network speed from the source host 1 to the target host 2 can be obtained.
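The formulas above translate directly into a small helper; the sketch below assumes times in seconds and the packet size in megabits, with the factor 125 converting Mbit/s to KB/s as in the text:

    def estimate_network_speed(t1, t2, ps, n):
        """t1, t2: first/second sending times (s); ps: specified packet size (Mbit); n: size multiple."""
        bw = (n - 1) * ps / (t2 - t1)   # BW = (n-1)·PS / (T2 - T1), in Mbit/s
        return 125 * bw                 # S = 125 × BW, in KB/s (1 Mbit/s = 125 KB/s)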
Referring to fig. 5, in step R3, a memory access tracking recorder is configured on a source host 1 and is configured to monitor usage of the memory blocks of the first virtual device in the source host 1, where the usage of the memory blocks monitored by the memory access tracking recorder may include: recording the accessed memory block, recording the access frequency of the memory block, and recording the last access time and the copy state of the memory block.
Referring to fig. 5 and fig. 7 to 11, in step R4, according to the usage of the memory block, the memory block is divided into two parts with the same size according to a memory classification and partitioning algorithm, and the divided memory blocks are marked as an idle memory block 13 and an active memory block 14, where in this embodiment, the usage of the memory block may be monitored by using an LRU algorithm, and in other embodiments, other algorithms may be used to monitor the usage of the memory block, for example, an algorithm such as FIFO, LFU, NMRU. In this embodiment, the use condition of the memory block is marked, for example, the accessed memory block is marked, the access frequency of the memory block is marked, the last access time and the copy state of the memory block are marked, and the physical address of the memory block is associated with the marked memory block through the hash chain table.
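As an illustration of such a recorder, the sketch below keeps a hash map from a block's physical address to its access count, last access time and copy state, which a classification algorithm can query; the class and field names are assumptions:

    import time

    class MemoryAccessTracker:
        def __init__(self):
            self.records = {}   # physical address -> access metadata

        def on_access(self, phys_addr):
            rec = self.records.setdefault(
                phys_addr, {"access_count": 0, "last_access": 0.0, "copied": False})
            rec["access_count"] += 1
            rec["last_access"] = time.time()

        def last_access(self, phys_addr):
            rec = self.records.get(phys_addr)
            return rec["last_access"] if rec else 0.0

        def idle_since(self, phys_addr, threshold_s):
            # True if the block has not been accessed within the threshold time.
            return (time.time() - self.last_access(phys_addr)) > threshold_s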
Referring to fig. 5 and fig. 7 to fig. 12, in step R4, the method specifically includes the following steps: H1. screening the memory blocks which are not used for the longest time recently according to the monitoring result of the memory access tracking recorder, marking the memory blocks which are not accessed within a preset threshold time as idle memory blocks 13, and marking the rest memory blocks as active memory blocks 14; H2. copying the idle memory block 13 from the first virtual device to the second virtual device 21; H3. acquiring the size of the active memory block 14; H4. if the size of the active memory block 14 is greater than the minimum memory block threshold, dividing the active memory block 14 into two memory blocks with the same size, and marking the memory block as an idle memory block 13 if the memory block is not accessed within a preset threshold time according to the monitoring result of the memory access tracking recorder, otherwise marking the memory block as the active memory block 14; H5. repeating the steps H2 to H4 until the size of the remaining active memory blocks 14 is less than or equal to a minimum memory block threshold; H6. and copying all the idle memory blocks 13.
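Steps H1 to H6 can be sketched as the following worklist loop (illustrative only); the block object, split_in_half, copy_to_target and the tracker interface from the previous sketch are assumptions:

    def classify_and_copy(memory, tracker, copy_to_target,
                          idle_threshold_s, min_block_size):
        active_set, worklist = [], [memory]
        while worklist:
            block = worklist.pop()
            if block.size <= min_block_size:
                active_set.append(block)           # small enough: defer to the final stage
                continue
            for half in block.split_in_half():     # two equal-size halves
                if tracker.idle_since(half.phys_addr, idle_threshold_s):
                    copy_to_target(half)           # idle memory block: copy immediately
                else:
                    worklist.append(half)          # active memory block: split further
        return active_set                          # remaining active memory blocks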
Referring to fig. 5 and 13, in step R5, the method specifically includes the following steps: F1. if the size of the active memory block 14 is less than or equal to a minimum memory block threshold; F2. sorting the active memory blocks 14 in the set of active memory blocks 14 according to the frequency of recent accesses; F3. copying the active memory blocks 14 from the first virtual device to the second virtual device 21 according to the sorting result until the last access time of all the active memory blocks 14 is within a minimum threshold time, and stopping copying; F4. suspending the source host 1; F5. after suspending the source host 1, the user may still perform some operations on the source host 1, so that the content in the memory changes, at this time, the memory address changed after having been copied is obtained from the memory access tracking recorder, the memory blocks are divided according to the minimum memory block threshold value, and the memory blocks are combined into the active memory block 14 set; F6. synchronizing the remaining active memory blocks 14 to the second virtual device 21; F7. and switching the service to the target host 2, stopping the source host 1, and completing the automatic migration of the first virtual device 11.
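Steps F1 to F7 might then be sketched as below, again with assumed helpers (pause_vm, dirty_blocks_since, switch_service): the remaining active blocks are copied least recently used first, copying stops once every remaining block has been accessed within the minimum threshold time, and the pages dirtied during copying are synchronized before the service switches to the target host:

    import time

    def final_copy_stage(active_set, tracker, copy_to_target, min_threshold_s,
                         pause_vm, dirty_blocks_since, switch_service):
        # Least active first (oldest last access time first).
        ordered = sorted(active_set, key=lambda b: tracker.last_access(b.phys_addr))
        copy_started = time.time()
        for block in ordered:
            if time.time() - tracker.last_access(block.phys_addr) <= min_threshold_s:
                break                                  # everything left is "hot": stop copying
            copy_to_target(block)
        pause_vm()                                     # suspend the source virtual device
        for block in dirty_blocks_since(copy_started): # blocks changed after being copied
            copy_to_target(block)
        switch_service()                               # switch service to the target host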
Referring to fig. 14, the present invention further provides a low-interrupt virtual device migration apparatus 3, which includes: the system comprises a data acquisition module 4, a data processing module 5, a time scheduling module 6, a data transmission module 7, a memory monitoring module 8, a memory cutting module 9 and a memory transmission module 10.
Referring to fig. 14 and 15, the data acquisition module 4 is configured to acquire performance indexes of the virtual device to be migrated in N periods. The data processing module 5 is connected with the data acquisition module 4 and is used for processing the performance indexes in the N periods obtained by the data acquisition module 4 to obtain the optimal idle migration time. The data processing module 5 comprises the following components connected in sequence: an input/output data set forming unit 51, a model training unit 53, an optimal performance parameter storage unit 55, a predicted performance index unit 56, and an optimal idle migration time extracting unit 57.
Referring to fig. 14 and 15, the input/output data set forming unit 51 is configured to process the performance indexes of the virtual device to be migrated in N periods obtained by the data acquisition module 4; the performance indexes in the historical N periods are preprocessed in the input/output data set forming unit 51, where the preprocessing comprises a stationarity test and a white noise test on the data, so as to obtain the input/output data set for training. A preset first model is provided in the model training unit 53, and the input/output data set may be trained by the preset first model. When the training process reaches the first preset condition, the training of the preset first model is terminated, and the optimal performance parameters obtained after training are saved in the optimal performance parameter storage unit 55. The predicted performance index unit 56 obtains the predicted performance indexes of the virtual device to be migrated in the future M periods through the optimal performance parameters in the optimal performance parameter storage unit 55; the optimal idle migration time extracting unit 57 obtains the optimal idle migration time of the virtual device to be migrated according to the predicted performance indexes of the virtual device to be migrated in the future M periods.
Referring to fig. 14, the time scheduling module 6 is connected to the data processing module 5, and may set a timing scheduling task according to the optimal idle migration time of the virtual device to be migrated obtained by the optimal idle migration time extracting unit 57 in the data processing module 5, and when the system time reaches the optimal idle migration time, start automatic migration of the virtual device of the source host 1.
Referring to fig. 14, the data transmission module 7 includes a memory monitoring module 8, a memory slicing module 9 and a memory transmission module 10. The memory cutting module 9 may block the memory of the source host 1 according to the need. The memory monitoring module 8 is connected with the memory cutting module 9, and a memory access tracking recorder is arranged in the memory monitoring module 8 and can record the block, the access frequency, the last access time and the copy state of the accessed memory address. The memory transmission module 10 is connected with the memory cutting module 9, and the memory transmission module 10 transmits the memory according to a preset rule.
Referring to fig. 14, specifically, the memory cutting module 9 divides the memory into an idle memory block 13 and an active memory block 14 of the same size. During migration, the memory transmission module 10 copies the idle memory block 13 to the virtual device of the target host 2, while the memory cutting module 9 again divides the remaining active memory block 14 into an idle memory block 13 and an active memory block 14 of the same size according to the access frequency recorded by the memory monitoring module 8. The steps of the memory transmission module 10 copying the idle memory blocks 13 to the target host 2 and the memory cutting module 9 dividing the active memory blocks 14 are repeated until the size of the remaining active memory block 14 is less than or equal to the minimum memory block threshold, for example the size of a system memory page, and the memory transmission module 10 has copied all the idle memory blocks 13. The active memory blocks in the active memory block set are then sorted according to their most recent access time, and the least active memory blocks are copied in turn according to the sorting result: the active memory blocks 14 are copied from the virtual device of the source host 1 to the virtual device of the target host 2 until the last access time of all the active memory blocks 14 is within the minimum threshold time, at which point copying stops and the source host 1 is suspended. Because the user may still perform operations on the source host 1 during the migration of the virtual device, the memory may change; at this time, the memory monitoring module 8 obtains the memory addresses changed after having been copied, the memory cutting module 9 divides them into memory blocks according to the minimum memory block threshold and merges them into the set of active memory blocks 14, and the memory transmission module 10 synchronizes the remaining active memory blocks 14 to the virtual device of the target host 2. The service is switched to the target host 2 and the source host 1 is stopped, completing the migration process of the virtual device; at the same time, the interruption time of the source host 1 is kept to a minimum, so the user completes the migration of the virtual device of the source host 1 with essentially no effect on use.
The invention also provides a virtual equipment migration device, which comprises: memory and a processor. The memory is used for storing a computer program, and the processor is used for realizing the optimal idle time migration method of the virtual equipment when the computer program is executed.
The present invention also provides a computer-readable storage medium having a computer program stored thereon, such as a nonvolatile memory, such as an optical disk, a hard disk, or a flash memory. The computer program, when executed by a processor, implements an optimal idle time migration method for virtual devices according to the present invention.
The above disclosed alternative embodiments of the invention are merely intended to help illustrate the invention. The preferred embodiments are not exhaustive or to limit the invention to the precise form disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best understand and utilize the invention. The invention is limited only by the claims and the full scope and equivalents thereof.

Claims (5)

1. An optimal idle time migration method for a virtual device is characterized by at least comprising the following steps:
obtaining predicted performance indexes in future M periods of the virtual equipment to be migrated by using a model for predicting the performance indexes; obtaining the optimal idle migration time of the virtual equipment to be migrated according to the predicted performance index;
setting a timing scheduling task according to the optimal idle migration time;
according to the timing scheduling task, completing automatic migration of the virtual equipment to be migrated;
the method for predicting the model of the performance index at least comprises the following steps:
s1, collecting performance indexes of virtual equipment to be migrated in N periods;
s2, acquiring an input and output data set for training according to the performance index;
s3, inputting the training input/output data set into a preset first model for training;
s4, finishing training and obtaining optimal performance parameters of the model when training reaches a first preset condition, and taking the trained preset first model as a model of the predicted performance index;
the performance index comprises: the method comprises the steps that any one or more of the occupancy rate of a central processing unit of the virtual equipment to be migrated, the occupancy rate of a memory of the virtual equipment to be migrated, the utilization rate of input and output interfaces, network traffic, network delay time, concurrent connection number and packet sending number are adopted;
the preset first model is a model based on a time-recurrent neural network, the model comprising a forgetting gate layer, an input gate layer and an output gate layer, wherein the forgetting gate layer determines the information to be discarded and the input gate layer determines the information to be updated; the final state is updated through the output of the output gate layer; each gate layer in the preset first model outputs a number between 0 and 1 that describes how much of the data information may pass through that gate layer, wherein 0 indicates that no data information passes through and 1 indicates that all data information passes through, so that each gate layer screens the data information to be passed;
the method also comprises the step of preprocessing the performance indexes in the N periods of the history, wherein the preprocessing comprises the step of performing stationarity test on data, and if the data is not stationary, performing stationarity processing; after the data is stable, performing white noise test on the data to obtain an input and output data set for training; the stability test utilizes a unit root test method or a time sequence diagram for drawing performance indexes to test stability; the white noise test utilizes an LB statistic method to test white noise of the data after difference;
the automatic migration method comprises at least the following steps:
R1, providing a source host and a target host;
R2, configuring a first virtual device in the source host, configuring a second virtual device in the target host, and checking the virtual device migration environment;
R3, configuring a memory access tracking recorder outside the first virtual device of the source host, the memory access tracking recorder being used for monitoring the use of memory blocks in the first virtual device;
R4, dividing the memory into idle memory blocks and active memory blocks by a memory classification block division algorithm according to the use of the memory blocks, and copying all idle memory blocks from the first virtual device to the second virtual device;
R5, copying the remaining active memory blocks from the first virtual device to the second virtual device while dividing the active memory blocks again into idle memory blocks and active memory blocks of the same size; repeating the steps of copying idle memory blocks and dividing active memory blocks until the size of the remaining active memory blocks is smaller than or equal to a minimum memory block threshold; sorting the active memory blocks in the active memory block set by most recent access frequency; copying the active memory blocks from the first virtual device to the second virtual device according to the sorting result; stopping copying when the last access time of all active memory blocks falls within the minimum threshold time; and performing interrupt migration to complete the automatic migration of the first virtual device;
the method further comprises the steps of:
configuring a virtual device on each of the target host and the source host;
preparing a migration environment, which comprises checking: whether the target host and the source host are in a network-shared storage system, whether the central processing unit types of the target host and the source host are the same, whether the operating system version of the target host conforms to the migration environment of the source virtual device, whether the name of the second virtual device is the same as that of the first virtual device, whether the memory size of the target host is the same as that of the source host, and whether the network transmission speed is suitable for live migration of the virtual device;
the method further comprises the steps of:
the source host sends a specified file packet to a server, then sends a file packet whose size is an integer multiple of the specified file packet to the server, and records the times required by the two sending actions as a first sending time and a second sending time respectively; the network speed is obtained from the first sending time and the second sending time, and the size of the memory block is determined according to the network speed.
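As an informal illustration (not part of the claims) of how predicted performance indexes for M future periods might be turned into an optimal idle migration time and a timed scheduling task, the following minimal Python sketch assumes a simple additive idleness score, a fixed period length, and a placeholder migrate_fn; none of these details are prescribed by the patent.

```python
import sched
import time

# Hypothetical composite "idleness" score: lower predicted CPU/memory/network
# load means a better window for migration.
def idleness_score(pred):
    return pred["cpu"] + pred["mem"] + pred["net"]

def pick_optimal_idle_period(predictions):
    """predictions: one dict per future period, as produced by the trained model."""
    scores = [idleness_score(p) for p in predictions]
    return scores.index(min(scores))  # index of the most idle future period

def schedule_migration(predictions, period_seconds, migrate_fn):
    best = pick_optimal_idle_period(predictions)
    delay = best * period_seconds  # seconds until the optimal idle window starts
    scheduler = sched.scheduler(time.time, time.sleep)
    scheduler.enter(delay, 1, migrate_fn)  # the timed scheduling task
    scheduler.run()

# Example usage with made-up predictions for M = 4 future periods.
if __name__ == "__main__":
    preds = [
        {"cpu": 0.71, "mem": 0.60, "net": 0.55},
        {"cpu": 0.22, "mem": 0.31, "net": 0.12},
        {"cpu": 0.45, "mem": 0.40, "net": 0.33},
        {"cpu": 0.80, "mem": 0.75, "net": 0.64},
    ]
    schedule_migration(preds, period_seconds=300,
                       migrate_fn=lambda: print("start live migration"))
```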
2. The virtual device optimal idle time migration method of claim 1, wherein the output data of the forgetting gate layer is obtained by the following formula:
f_t = σ(W_f · [h_{t-1}, x_t] + b_f)
where f_t is the output value of the forgetting gate layer, σ is the sigmoid function, W_f is the weight matrix of the forgetting gate layer, t is the current time, h_{t-1} is the performance index corresponding to time node t-1, x_t is the currently observed performance index, and b_f is a bias term; the forgetting gate layer is used for determining how much information from the previous moment is retained at the current moment.
3. The virtual device optimal idle time migration method of claim 1, wherein
an updated value is created through the input gate layer of the preset first model: the input gate layer determines the value to be updated, a candidate vector is created, and the two vectors are combined to create the updated value; the updated value is created according to the formulas:
i_t = σ(W_i · [h_{t-1}, x_t] + b_i)
C̃_t = tanh(W_c · [h_{t-1}, x_t] + b_c)
C_t = f_t * C_{t-1} + i_t * C̃_t
where i_t is the output value after passing through the input gate layer, W_i is the weight matrix of the input gate layer, b_i is a bias term, C̃_t is the candidate vector, tanh is the tanh function, W_c is a vector parameter, b_c is an adjustment parameter, C_t is the updated state value corresponding to model time t, and C_{t-1} is the historical state value corresponding to model time t-1;
the preset first model determines the value to be output through the output gate layer: the updated state value corresponding to model time t is normalized to between -1 and 1 by the tanh function and multiplied by the output value of the output gate layer to obtain the output value of the preset first model, according to the formulas:
o_t = σ(W_o · [h_{t-1}, x_t] + b_o)
h_t = o_t * tanh(C_t)
where o_t is the output value after passing through the output gate layer, W_o is the weight matrix of the output gate layer, b_o is a bias term, and h_t is the output value of the model.
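For readers who want the gate formulas of claims 2 and 3 in executable form, the following minimal NumPy sketch runs one step of such a gated cell. The weight shapes, random initialization, and hidden size are illustrative assumptions only, not values prescribed by the patent.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One cell step; W and b hold per-gate parameters keyed by f, i, c, o."""
    z = np.concatenate([h_prev, x_t])     # [h_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ z + b["f"])    # forgetting gate layer
    i_t = sigmoid(W["i"] @ z + b["i"])    # input gate layer
    c_hat = np.tanh(W["c"] @ z + b["c"])  # candidate vector
    c_t = f_t * c_prev + i_t * c_hat      # updated state value C_t
    o_t = sigmoid(W["o"] @ z + b["o"])    # output gate layer
    h_t = o_t * np.tanh(c_t)              # model output h_t
    return h_t, c_t

# Illustrative dimensions: 7 performance indexes in, hidden size 16.
n_in, n_hid = 7, 16
rng = np.random.default_rng(0)
W = {k: rng.normal(scale=0.1, size=(n_hid, n_hid + n_in)) for k in "fico"}
b = {k: np.zeros(n_hid) for k in "fico"}
h, c = np.zeros(n_hid), np.zeros(n_hid)
x = rng.random(n_in)                      # one observed performance index vector
h, c = lstm_step(x, h, c, W, b)
```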
4. A model training apparatus for predicting performance indexes, which adopts the virtual device optimal idle time migration method according to any one of claims 1 to 3, comprising:
a data acquisition module, used for acquiring performance indexes of the virtual device to be migrated over N historical periods;
an input/output data set acquisition module, used for obtaining an input/output data set for training according to the performance indexes; and a training module, used for inputting the training input/output data set into a preset first model for training; when training reaches a first preset condition, the training is finished, the optimal performance parameters of the model are obtained, and the trained preset first model is taken as the model for predicting the performance indexes;
the performance indexes comprise any one or more of: the central processing unit occupancy rate of the virtual device to be migrated, the memory occupancy rate of the virtual device to be migrated, the input/output interface utilization rate, network traffic, network delay time, number of concurrent connections, and number of packets sent;
the preset first model is a model based on a recurrent neural network; the model comprises a forgetting gate layer, an input gate layer, and an output gate layer, wherein the forgetting gate layer determines which information is to be discarded and the input gate layer determines which information is to be updated; the final state is updated through the output of the output gate layer; each gate layer in the preset first model outputs a number between 0 and 1 which describes how much data information can pass through that gate layer, where 0 means that no data information passes and 1 means that all data information passes, so that each gate layer screens the data information allowed to pass through;
the apparatus further comprises a preprocessing module for preprocessing the performance indexes of the N historical periods, the preprocessing comprising performing a stationarity test on the data and, if the data are not stationary, performing stationarity processing; after the data are stationary, performing a white noise test on the data to obtain the input/output data set for training; the stationarity test checks stationarity by a unit root test method or by plotting a time sequence diagram of the performance indexes; the white noise test uses the LB statistic method to test the differenced data for white noise.
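The preprocessing recited in claims 1 and 4 (stationarity test, differencing, white noise test with the LB statistic) could be sketched roughly as follows, assuming the statsmodels implementations of the ADF unit root test and the Ljung-Box test; the significance level, the differencing limit, and the sliding-window dataset construction are assumptions added for illustration.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.stats.diagnostic import acorr_ljungbox

def preprocess(series, alpha=0.05, max_diff=2):
    """Difference until the ADF unit root test treats the series as stationary,
    then use the Ljung-Box (LB) statistic to confirm the data are not white noise."""
    data = np.asarray(series, dtype=float)
    for _ in range(max_diff):
        if adfuller(data)[1] < alpha:      # p-value below alpha: treat as stationary
            break
        data = np.diff(data)               # stationarity processing by differencing
    lb = acorr_ljungbox(data, lags=[10])
    if lb["lb_pvalue"].iloc[0] > alpha:    # cannot reject white noise: nothing to model
        raise ValueError("differenced series looks like white noise")
    return data

def make_dataset(data, window):
    """Build (input, output) pairs with a sliding window over the stationary series."""
    X = np.array([data[i:i + window] for i in range(len(data) - window)])
    y = data[window:]
    return X, y
```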
5. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the virtual device optimal idle time migration method according to any one of claims 1-3.
CN202110036748.7A 2020-05-27 2020-05-27 Model training method and device for predicting performance index and readable storage medium Active CN112527470B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110036748.7A CN112527470B (en) 2020-05-27 2020-05-27 Model training method and device for predicting performance index and readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110036748.7A CN112527470B (en) 2020-05-27 2020-05-27 Model training method and device for predicting performance index and readable storage medium
CN202010460191.5A CN111611055B (en) 2020-05-27 2020-05-27 Virtual equipment optimal idle time migration method and device and readable storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202010460191.5A Division CN111611055B (en) 2020-05-27 2020-05-27 Virtual equipment optimal idle time migration method and device and readable storage medium

Publications (2)

Publication Number Publication Date
CN112527470A CN112527470A (en) 2021-03-19
CN112527470B true CN112527470B (en) 2023-05-26

Family

ID=72200670

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202010460191.5A Active CN111611055B (en) 2020-05-27 2020-05-27 Virtual equipment optimal idle time migration method and device and readable storage medium
CN202110036748.7A Active CN112527470B (en) 2020-05-27 2020-05-27 Model training method and device for predicting performance index and readable storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202010460191.5A Active CN111611055B (en) 2020-05-27 2020-05-27 Virtual equipment optimal idle time migration method and device and readable storage medium

Country Status (1)

Country Link
CN (2) CN111611055B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112667516B (en) * 2021-01-06 2023-07-28 南京万得资讯科技有限公司 An Zhuo Duoji type automatic test system and method
CN114867065B (en) * 2022-05-18 2024-09-13 中国联合网络通信集团有限公司 Base station calculation load balancing method, equipment and storage medium
CN117453148B (en) * 2023-12-22 2024-04-02 柏科数据技术(深圳)股份有限公司 Data balancing method, device, terminal and storage medium based on neural network
CN117453149B (en) * 2023-12-22 2024-04-09 柏科数据技术(深圳)股份有限公司 Data balancing method, device, terminal and storage medium of distributed storage system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105843748A (en) * 2015-01-15 2016-08-10 华为技术有限公司 Method and device for processing memory page in memory
CN110059858A (en) * 2019-03-15 2019-07-26 深圳壹账通智能科技有限公司 Server resource prediction technique, device, computer equipment and storage medium

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9384062B2 (en) * 2008-12-27 2016-07-05 Vmware, Inc. Artificial neural network for balancing workload by migrating computing tasks across hosts
CN103577249B (en) * 2013-11-13 2017-06-16 中国科学院计算技术研究所 The online moving method of virtual machine and system
CN105446790B (en) * 2014-07-15 2019-10-18 华为技术有限公司 A kind of virtual machine migration method and device
CN104156255B (en) * 2014-07-31 2017-10-17 华为技术有限公司 A kind of virtual machine migration method, virtual machine (vm) migration device and source physical host
US20180246751A1 (en) * 2015-09-25 2018-08-30 Intel Corporation Techniques to select virtual machines for migration
US9336042B1 (en) * 2015-11-19 2016-05-10 International Business Machines Corporation Performing virtual machine live migration within a threshold time by adding available network path in multipath network
CN106502799A (en) * 2016-12-30 2017-03-15 南京大学 A kind of host load prediction method based on long memory network in short-term
US10509667B1 (en) * 2017-01-19 2019-12-17 Tintri By Ddn, Inc. Modeling space consumption of a migrated VM
CN106933650B (en) * 2017-03-03 2020-08-04 北方工业大学 Load management method and system of cloud application system
CN108932149B (en) * 2017-05-22 2023-11-17 中兴通讯股份有限公司 Data transmission method and device
CN109034400B (en) * 2018-05-29 2021-10-15 国网天津市电力公司 Transformer substation abnormal measurement data prediction platform system
CN110928634B (en) * 2018-09-19 2023-04-07 阿里巴巴集团控股有限公司 Data processing method, device and equipment
CN109542585B (en) * 2018-11-14 2020-06-16 山东大学 Virtual machine workload prediction method supporting irregular time intervals
CN110008079A (en) * 2018-12-25 2019-07-12 阿里巴巴集团控股有限公司 Monitor control index method for detecting abnormality, model training method, device and equipment
CN110806918A (en) * 2019-09-24 2020-02-18 梁伟 Virtual machine operation method and device based on deep learning neural network
CN110795213B (en) * 2019-12-12 2022-06-07 东北大学 Active memory prediction migration method in virtual machine migration process

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105843748A (en) * 2015-01-15 2016-08-10 华为技术有限公司 Method and device for processing memory page in memory
CN110059858A (en) * 2019-03-15 2019-07-26 深圳壹账通智能科技有限公司 Server resource prediction technique, device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN111611055A (en) 2020-09-01
CN112527470A (en) 2021-03-19
CN111611055B (en) 2020-12-18

Similar Documents

Publication Publication Date Title
CN112527470B (en) Model training method and device for predicting performance index and readable storage medium
US8863138B2 (en) Application service performance in cloud computing
Bellavista et al. Differentiated service/data migration for edge services leveraging container characteristics
JP5458308B2 (en) Virtual computer system, virtual computer system monitoring method, and network device
US9256464B2 (en) Method and apparatus to replicate stateful virtual machines between clouds
CN107615792B (en) Management method and system for MTC event
EP3206335B1 (en) Virtual network function instance migration method, device and system
KR20160049006A (en) Method, apparatus, and system for managing migration of virtual machine
US10860375B1 (en) Singleton coordination in an actor-based system
Singh et al. A review on migration techniques and challenges in live virtual machine migration
CN110134490B (en) Virtual machine dynamic migration method, device and storage medium
US20140282530A1 (en) Refreshing memory topology in virtual machine operating systems
CN104281484A (en) VM (virtual machine) migration method and device
CN112148430A (en) Method for online safe migration of virtual machine with virtual network function
CN104239120A (en) State information synchronization method, state information synchronization device and state information synchronization system for virtual machine
CN115292003A (en) Server failure recovery method and device, electronic equipment and storage medium
GB2564863A (en) Containerized application platform
Umesh et al. Dynamic software aging detection-based fault tolerant software rejuvenation model for virtualized environment
US11582168B2 (en) Fenced clone applications
EP4006725A1 (en) Virtual machine migration processing and strategy generation method, apparatus and device, and storage medium
JP2019028869A (en) Packet processing function migration system, server, packet processing function migration method, and program
Altahat et al. Analysis and comparison of live virtual machine migration methods
CN108932149B (en) Data transmission method and device
US20190108060A1 (en) Mobile resource scheduler
US9348672B1 (en) Singleton coordination in an actor-based system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant