CN113326140A - Process migration method and device, computing equipment and storage medium - Google Patents



Publication number
CN113326140A
CN113326140A (application CN202110738697.2A)
Authority
CN
China
Prior art keywords
load
processor
processes
active
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110738697.2A
Other languages
Chinese (zh)
Inventor
叶中玉
周鹏
余昇锦
胡翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Uniontech Software Technology Co Ltd
Original Assignee
Uniontech Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Uniontech Software Technology Co Ltd filed Critical Uniontech Software Technology Co Ltd
Priority to CN202110738697.2A priority Critical patent/CN113326140A/en
Publication of CN113326140A publication Critical patent/CN113326140A/en
Priority to PCT/CN2021/124293 priority patent/WO2023273015A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load

Abstract

The invention discloses a process migration method, an apparatus, a computing device, and a storage medium. The process migration method is executed in a computing device and comprises the following steps: classifying each process on a processor as an active process or an inactive process based on its real load; judging whether an active process is the only active process on the processor; if it is not the only active process, determining the task hierarchical load of each process in turn; and when the task hierarchical load of a process satisfies a preset condition, migrating the process to another processor.

Description

Process migration method and device, computing equipment and storage medium
Technical Field
The invention relates to the field of internet, in particular to a process migration method, a process migration device, a computing device and a storage medium.
Background
In a multi-core SMP (symmetric multiprocessing) system, reasonable task scheduling is an important precondition for realizing the potential of the multi-core system. Current multi-core scheduling maintains one run queue on each processor. In addition, a process in a runnable state can be moved to the run queue of another processor to achieve load balance among the processors and to avoid the situation in which some processors are busy while others are idle.
The current load-balancing implementation uses task_h_load; that is, it considers the load contribution of the current process to its current processor and judges whether a task meets the migration condition according to the size of this task hierarchical load. However, the existing process migration method has the following problem: in a full-load scenario, a high-load process may be migrated across processors, or even across memory nodes, under the influence of a low-load background process, causing serious cache invalidation and impairing the normal performance of the high-load process.
Disclosure of Invention
In view of the above, the present invention has been made to provide a process migration method, apparatus, computing device and storage medium that overcome or at least partially address the above-mentioned problems.
According to an aspect of the present invention, there is provided a process migration method, executed in a computing device, the method comprising: classifying each process on a processor as an active process or an inactive process based on its real load; judging whether an active process is the only active process on the processor; if it is not the only active process, determining the task hierarchical load of each process in turn; and when the task hierarchical load of a process satisfies a preset condition, migrating the process to another processor.
Optionally, in the process migration method according to the present invention, if the process is not the only active process, the step of determining the task hierarchical loads of the processes in turn comprises: polling the linked list in which the processes on the processor are stored to obtain the order in which the processes were added; and determining the task hierarchical load of each process in turn, from the most recently added process to the earliest added one.
Optionally, in the process migration method according to the present invention, after the step of classifying each process as an active process or an inactive process based on its real load, the method further comprises the step of counting the number of active processes on the processor.
Optionally, in the process migration method according to the present invention, the step of calculating the real load comprises: acquiring time information for each process in the working state and in the non-working state, respectively; and calculating the real load of each process based on this time information.
Optionally, in the process migration method according to the present invention, the step of calculating the task hierarchical load comprises: acquiring the number of processors over which the process's group is distributed; acquiring the process load of the process within the group; and taking the ratio of the process load to the number of processors as the task hierarchical load of the process.
Optionally, in the process migration method according to the present invention, the step of classifying each process as an active process or an inactive process based on its real load comprises: if the real load of the process is greater than a preset load threshold, determining the process to be an active process; otherwise, determining it to be an inactive process.
Optionally, in the process migration method according to the present invention, the step of migrating the process to another processor when its task hierarchical load satisfies a preset condition comprises: judging whether the task hierarchical load of the process is smaller than half of the load imbalance value; and if so, migrating the process to another processor.
According to still another aspect of the present invention, there is provided a process migration apparatus, comprising: a process state determining module adapted to classify each process as an active process or an inactive process based on its real load on the processor; a judging module adapted to judge whether an active process is the only active process on the processor; a process task hierarchical load determining module adapted to determine the task hierarchical loads of the processes in turn; and a process migration module adapted to migrate a process to another processor.
According to yet another aspect of the invention, there is provided a computing device comprising: at least one processor; and a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor, the program instructions comprising instructions for performing the above-described method.
According to yet another aspect of the present invention, there is provided a readable storage medium storing program instructions which, when read and executed by a computing device, cause the computing device to perform the above-described method.
According to the solution of the present invention, the dual influence of a process's real load and its task hierarchical load is considered together: when there is only one process with a high real load on a processor, low-load processes newly added to that processor are migrated preferentially, the high-real-load process is prevented from being migrated, and the normal performance of the high-load process is thereby guaranteed.
According to the solution of the present invention, in a full-thread use case the influence of background processes on the main process is reduced: the main process is not migrated across processors, or even across memory nodes, because of background activity. At this point the utilization of the cache memory is highest and the running performance of the program is best.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 illustrates a schematic diagram of a principle 100 of process migration;
FIG. 2 illustrates a flow diagram of a process migration method 200 in the prior art;
FIG. 3 shows a schematic diagram of a computing device 300, according to one embodiment of the invention;
FIG. 4 shows a flow diagram of a process migration method 400 according to one embodiment of the invention.
FIG. 5 is a block diagram illustrating a process migration apparatus 500 according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
An important part of load balancing is selecting a suitable process from a processor's run queue for migration. Specifically, the load of a processor's run queue is the sum of the loads of all processes on that queue, and the load of a single process is related to its actual running time: broadly, the longer its continuous running time, the higher its load. The goal of load balancing is therefore to utilize the processor's computing resources as fully as possible, allowing each process sufficient processor time. To achieve this goal, a suitable process (generally, a process with a smaller load more easily meets the migration condition) must be selected from a busy processor, that is, one with a larger total load and more processes in its run queue, and migrated to a relatively idle processor.
As shown in fig. 1, which illustrates a schematic diagram of a principle 100 of process migration, a process 1, a process 2, and a process 3 run on a processor 0, and a process 4 runs on a processor 1. Compared with processor 1, processor 0 is a busy processor; at this time, for example, process 3 on processor 0 can be migrated to processor 1 to implement load balancing of the system.
FIG. 2 shows a flow diagram of one prior-art process migration method 200. After process migration begins, each process on the busy processor is first polled in order. It is then judged whether each process's task hierarchical load meets the migration requirement. Finally, the processes meeting the requirement are migrated from the busy processor to an idle processor. This migration method uses the task hierarchical load; that is, it considers the load contribution of the current process to the current processor and judges whether a task meets the migration condition according to the size of its task hierarchical load.
In a scenario where group scheduling is enabled, the more tasks run within the same group, the smaller the average task weight on each processor becomes relative to the standard value, because a task group's weight stays at its default value unless actively adjusted. The task hierarchical load of a task is then task_h_load = task_load / number of cpus, and at this time the hierarchical load of a background process may be greater than or equal to that of a work process.
In one specific example, after group scheduling is turned on, a designated program runs on all threads.
Process 1 (a continuously running user process):
it is located in task group A, and the 10 processes in the group are distributed over 10 processors;
its process load is task_load = 1000;
its task hierarchical load is task_h_load = 1000/10 = 100.
Process 2 (a periodically running background process):
it is located in task group B, and the 1 process in the group runs on 1 processor;
its process load is task_load = 120;
its task hierarchical load is task_h_load = 120/1 = 120.
At this time, the hierarchical load of the background process (120) is greater than that of the work process (100). As mentioned above, a process with a smaller load more easily satisfies the migration condition, and when several tasks on one processor all satisfy it, migration proceeds in the order in which the tasks were added to the run queue: the process added to the run queue first is migrated preferentially, so the continuously running user process is typically the one migrated. In a full-load scenario, a high-load process is thus migrated across processors, or even across memory nodes, under the influence of a low-load background process, which may cause serious cache invalidation and impair the normal performance of the high-load process.
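The figures in this example can be checked directly (a sketch; `task_h_load` here names an illustrative helper echoing the kernel symbol mentioned above, not the kernel function itself):

```python
def task_h_load(task_load, cpus_in_group):
    # Under full load, the task hierarchical load reduces to the
    # group-relative process load divided by the number of processors
    # the group is spread over (see the derivation later in the text).
    return task_load / cpus_in_group

work = task_h_load(1000, 10)        # user process: 1000/10 = 100
background = task_h_load(120, 1)    # background process: 120/1 = 120
assert background > work            # the background process out-weighs the worker
```

Because the smaller hierarchical load belongs to the heavily loaded user process, it is the user process that satisfies the prior-art migration condition first, which is exactly the problem the invention addresses.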
In order to solve the problems in the prior art, the technical scheme of the invention is provided. One embodiment of the present invention provides a process migration method that may be performed in a computing device. In particular, FIG. 3 shows a block diagram of a computing device 300, according to one embodiment of the invention. As shown in FIG. 3, in a basic configuration 302, a computing device 300 typically includes a system memory 306 and one or more processors 304. A memory bus 308 may be used for communication between the processor 304 and the system memory 306.
Depending on the desired configuration, the processor 304 may be any type of processor, including but not limited to: a microprocessor (μP), a microcontroller (μC), a Digital Signal Processor (DSP), or any combination thereof. The processor 304 may include one or more levels of cache, such as a level one cache 310 and a level two cache 312, a processor core 314, and registers 316. The example processor core 314 may include an Arithmetic Logic Unit (ALU), a Floating Point Unit (FPU), a digital signal processing core (DSP core), or any combination thereof. The example memory controller 318 may be used with the processor 304, or in some implementations the memory controller 318 may be an internal part of the processor 304.
Depending on the desired configuration, system memory 306 may be any type of memory, including but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. The physical memory in the computing device is usually volatile RAM, and data on disk must be loaded into physical memory before the processor 304 can read it. System memory 306 may include an operating system 320, one or more applications 322, and program data 324. The application 322 is in fact a set of program instructions that direct the processor 304 to perform corresponding operations. In some embodiments, the application 322 may be arranged to be executed, together with the program data 324, by the one or more processors 304 on the operating system. Operating system 320 may be, for example, Linux, Windows, etc., and includes program instructions for handling basic system services and performing hardware-dependent tasks. The application 322 includes program instructions for implementing various user-desired functions; the application 322 may be, for example but not limited to, a browser, an instant messenger, a software development tool (e.g., an integrated development environment IDE, a compiler, etc.), and the like. When the application 322 is installed into the computing device 300, a driver module may be added to the operating system 320.
When the computing device 300 is started, the processor 304 reads program instructions of the operating system 320 from the memory 306 and executes the program instructions. The applications 322 run on top of the operating system 320, utilizing the operating system 320 and interfaces provided by the underlying hardware to implement various user-desired functions. When the user launches the application 322, the application 322 is loaded into the memory 306, and the processor 304 reads and executes the program instructions of the application 322 from the memory 306.
The computing device 300 may also include an interface bus 340 that facilitates communication from various interface devices (e.g., output devices 342, peripheral interfaces 344, and communication devices 346) to the basic configuration 302 via the bus/interface controller 330. The example output devices 342 include a graphics processing unit 348 and an audio processing unit 350. They may be configured to facilitate communications with various external devices, such as a display or speakers, via one or more a/V ports 352. Example peripheral interfaces 344 may include a serial interface controller 354 and a parallel interface controller 356, which may be configured to facilitate communication with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device) or other peripherals (e.g., printer, scanner, etc.) via one or more I/O ports 358. An example communication device 346 can include a network controller 360, which can be arranged to facilitate communications with one or more other computing devices 362 over a network communication link via one or more communication ports 364.
A network communication link may be one example of a communication medium. Communication media may typically be embodied by computer readable instructions, data structures, program modules, and may include any information delivery media, such as carrier waves or other transport mechanisms, in a modulated data signal. A "modulated data signal" may be a signal that has one or more of its data set or its changes made in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired network or private-wired network, and various wireless media such as acoustic, Radio Frequency (RF), microwave, Infrared (IR), or other wireless media. The term computer readable media as used herein may include both storage media and communication media.
The computing device 300 also includes a storage interface bus 334 connected to the bus/interface controller 330. The storage interface bus 334 is coupled to a storage device 332, and the storage device 332 is adapted to store data. Example storage devices 332 may include removable storage 336 (e.g., CD, DVD, usb disk, removable hard disk, etc.) and non-removable storage 338 (e.g., hard disk drive HDD, etc.).
In a computing device 300 according to the invention, the application 322 includes a plurality of program instructions that perform the method 400.
FIG. 4 shows a flow diagram of a process migration method 400 according to one embodiment of the invention. The method 400 is suitable for execution in a computing device, such as the computing device 300 described above.
As shown in fig. 4, the method 400 implements process migration. It begins with step S402, in which each process is classified as an active process or an inactive process based on its real load on the processor.
It should be noted that, according to this embodiment of the present invention, before step S402 is executed it has already been determined that a load imbalance exists in the computing device and that processes on the processor need to be migrated; the imbalance determination may be made based on the foregoing content or on an existing load-balancing policy, and is not repeated here.
It should be further noted that the process migration method provided in this embodiment is applicable to a scenario in which each process runs at or near full load in the processor, that is, the occupancy rate of the processor is near 100%.
Preferably, the true load of the process can be calculated by the following steps.
Step S422, respectively obtaining the time information of each process in the working state and in the non-working state.
Illustratively, a process's running time is intermittent: after the process runs for a period, the processor stops running it for a period and switches to other processes, then runs the process again when those processes stop. The time the process spends in the working state is its running time.
In step S424, the real load of each process is calculated based on the time information.
The process load is an accumulation over multiple run intervals, and the load contributed by an earlier run interval is attenuated by a coefficient related to the time the process has since spent in the non-working state.
The real load is calculated from the process's time in each run interval and the attenuation coefficient of each interval. For example, if the process has run for 3 intervals before the current time, its real load is (first interval × first attenuation coefficient) + (second interval × second attenuation coefficient) + (third interval × third attenuation coefficient).
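This decay-weighted accumulation can be sketched as follows (a minimal illustration; the interval lengths and attenuation coefficients are made-up example values, and the function name is not from the patent):

```python
def real_load(intervals):
    """Sum of run_time x attenuation over the process's run intervals.

    intervals: list of (run_time, attenuation_coefficient) pairs,
    oldest interval first; older intervals carry smaller coefficients
    because they have decayed for longer.
    """
    return sum(run_time * coeff for run_time, coeff in intervals)

# Three run intervals of 10 time units each, progressively less decayed:
load = real_load([(10, 0.25), (10, 0.5), (10, 1.0)])  # 2.5 + 5 + 10 = 17.5
```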
The obtained real load of the process is then compared with a preset load threshold: when the real load of the process is greater than the threshold, the process is judged to be an active process; otherwise it is judged to be an inactive process. The load threshold may be set by a person skilled in the art or according to the attributes of the computing device; this embodiment does not limit it.
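The threshold comparison can be sketched as follows (the threshold value, data shape, and names are all hypothetical, since the patent leaves the threshold to the implementer):

```python
# LOAD_THRESHOLD is a made-up tuning value; the patent leaves its
# choice to the implementer or to the computing device's attributes.
LOAD_THRESHOLD = 50

def classify(processes, threshold=LOAD_THRESHOLD):
    """Split processes into (active, inactive) lists by real load."""
    active = [p for p in processes if p["real_load"] > threshold]
    inactive = [p for p in processes if p["real_load"] <= threshold]
    return active, inactive
```

For example, a continuously running worker with real load 1000 lands in the active list, while a background process with real load 10 lands in the inactive list.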
In step S404, it is judged whether an active process is the only active process on the processor. In step S402 the status of each process on the processor (active or inactive) was already determined from its real load, so it can be judged directly whether the target process is the only active one.
It should be noted that when a process is the only active process on the processor, it is skipped directly; in other words, migration of that process is abandoned.
Of course, to facilitate determining the number of active processes in the processor, the number of active processes in the processor may be counted before executing step S404.
In step S406, if the process is not the only active process, the task hierarchical loads of the processes are determined in turn: polling determines the task hierarchical load of each process on the processor, starting with the process most recently added to the run queue.
Specifically, the linked list storing the processes on the processor is polled to obtain the order in which the processes were added, and the task hierarchical load of each process is determined in turn, from the most recently added process to the earliest added one.
When a process joins the run queue, it is placed at the head of the designated linked list; therefore, polling from the head of the linked list visits the processes according to the time they joined the run queue, i.e., polling the linked list reveals the order in which the processes were added.
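The head-insertion and head-first polling described above can be sketched with a deque standing in for the kernel's linked list (all names here are illustrative, not from the patent):

```python
from collections import deque

run_queue = deque()             # index 0 plays the role of the list head

def enqueue(pid):
    run_queue.appendleft(pid)   # a newly added process goes to the head

def polling_order():
    # Polling from the head visits the most recently added process
    # first, matching the back-to-front order used in step S406.
    return list(run_queue)

for pid in (1, 2, 3):           # processes join the queue in this order
    enqueue(pid)
```

After enqueuing processes 1, 2, 3 in that order, `polling_order()` yields them most-recent-first, so the last-added (typically low-load background) process is examined for migration before the long-running one.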
In some embodiments, the steps for calculating the task hierarchical load of each process are as follows: acquiring the number of processors over which the process's group is distributed; acquiring the process load of the process within the group; and taking the ratio of the process load to the number of processors as the task hierarchical load of the process.
It should be noted that this task hierarchical load calculation applies to a scenario in which each process in the group runs at or near full load on its processor.
In one specific example, the conventional calculation formula for task hierarchical load is as follows:
task hierarchical load = (real load of the process) × (relative top-level load of the upper-level queue) / (load of the upper-level queue);
therefore, (task hierarchical load) / (task real load) = (relative top-level load of the upper-level queue) / (load of the upper-level queue) = (weight of the group's scheduling entity on the current cpu) / 1024 = (current process load) / (total load of all processes in the group);
so when every process runs at full load, the loads are all the same, and (current process load) / (total load of all processes in the group) is approximately 1 / (total number of processors).
In other words, when the processors in each group run at or near full load, the load of each process in the group is the same, and the task hierarchical load of each process is the process load/the number of processors in the corresponding group.
In one specific example, a process is located in task group 0, the group's processes are distributed over a total of 5 processors, and the processes in task group 0 run at full load. The load of the process is 10, so the task hierarchical load of the process is 10/5 = 2.
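A quick numeric check of this full-load approximation, using the made-up loads from the example:

```python
# Under full load, every process in the group carries the same load,
# so current_load / total_group_load collapses to 1 / num_processors.
loads = [10] * 5                 # 5 equally loaded processes, one per CPU
share = loads[0] / sum(loads)    # 10/50 = 0.2
assert share == 1 / len(loads)   # equals 1/5, as the derivation predicts
h_load = loads[0] * share        # 10 * 1/5 = 2, matching the example
```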
In step S408, when the task hierarchical load of the process satisfies a preset condition, the process is migrated to another processor.
Specifically, the load imbalance value is calculated from the current load of the computing device, and it is judged whether the task hierarchical load of the process is less than half of the load imbalance value. If so, the process is migrated to another processor.
It should be noted that the load of the computing device is not a fixed value; it changes as the processes run, so the load imbalance value also changes over time. In general, the smaller the task hierarchical load, the more easily the migration condition is satisfied.
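The decision logic of steps S404 through S408 can be gathered into one sketch (the data shapes and names are illustrative, not from the patent):

```python
def should_migrate(proc, n_active_on_cpu, imbalance):
    """Decide whether a process may be migrated off its processor.

    proc: dict with 'active' (bool, from step S402) and 'h_load'
          (its task hierarchical load, from step S406).
    n_active_on_cpu: number of active processes on the source processor.
    imbalance: current load imbalance value of the computing device.
    """
    if proc["active"] and n_active_on_cpu == 1:
        return False                         # sole active process: skip it (S404)
    return proc["h_load"] < imbalance / 2    # preset condition (S408)
```

With this check, a lone high-load worker is never a migration candidate, while a newly added low-load process whose hierarchical load falls below half the imbalance value is migrated instead.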
FIG. 5 is a block diagram illustrating a process migration apparatus 500 according to an embodiment of the present invention.
As shown in fig. 5, the apparatus 500 comprises: a state determination module adapted to classify the processes on a processor as active or inactive based on their real loads; a judging module adapted to judge whether an active process is the only active process on the processor; a task hierarchical load determining module adapted to determine the task hierarchical loads of the processes in turn; and a migration module adapted to migrate a process to another processor.
It should be noted that the principle and the working flow of the process migration apparatus provided in this embodiment are similar to those of the process migration method described above, and reference may be made to the description of the process migration method described above for relevant points, which is not described herein again.
In this embodiment, on top of the standard load-balancing migration criteria, the dual influence of the task real load and the task hierarchical load is considered together: when there is only one process with a high real load on a processor, a low-load process newly added to the processor is migrated preferentially. In a full-thread use case, the influence of background processes on the main process is reduced, the main process is not migrated across processors, or even across memory nodes, because of background activity, cache utilization is highest, and program performance is best.
The various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as removable hard drives, USB disks, floppy disks, CD-ROMs, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Wherein the memory is configured to store program code; the processor is configured to perform the method of the invention according to instructions in said program code stored in the memory.
By way of example, and not limitation, readable media may comprise readable storage media and communication media. Readable storage media store information such as computer readable instructions, data structures, program modules or other data. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. Combinations of any of the above are also included within the scope of readable media.
In the description provided herein, algorithms and displays are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with examples of this invention. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose preferred embodiments of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the various inventive aspects. However, this manner of disclosure should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the described embodiments are described herein as a method or combination of method elements that can be performed by a processor of a computer system or by other means of performing the described functions. A processor having the necessary instructions for carrying out the method or method elements thus forms a means for carrying out the method or method elements. Further, the elements of the apparatus embodiments described herein are examples of the following apparatus: the apparatus is used to implement the functions performed by the elements for the purpose of carrying out the invention.
As used herein, unless otherwise specified, the use of the ordinal adjectives "first", "second", "third", etc. to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, whether temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. This disclosure is illustrative rather than restrictive; the scope of the invention is defined by the appended claims.

Claims (10)

1. A process migration method to be executed in a computing device having a plurality of processors resident therein, the method comprising:
dividing each process into an active process or an inactive process based on the real load of each process in a processor;
judging whether the active process is the only active process in the processor;
if the active process is not the only active process in the processor, sequentially determining the task hierarchical load of each process;
and when the task hierarchical load of the process meets a preset condition, migrating the process to other processors.
2. The method of claim 1, wherein the step of sequentially determining the task hierarchical load of each process if the active process is not the only active process comprises:
polling a linked list in which processes in the processor are stored to obtain the adding sequence of the processes;
and sequentially determining the task hierarchical load of each process according to the order in which the processes were added, from last added to first added.
3. The method of claim 1, wherein after the step of dividing each process into an active process or an inactive process based on the real load of each process in the processor, the method further comprises:
and counting the number of active processes in the processor.
4. The method of claim 1, wherein the calculating of the real load comprises:
respectively acquiring time information of each process in a working state and a non-working state;
and calculating the real load of each process based on the time information.
5. The method of claim 1, wherein the task hierarchical load calculating step comprises:
acquiring the number of processors of a process in a corresponding group;
acquiring the process load of the process in the corresponding group;
and taking the ratio of the process load to the number of the processors as the task hierarchical load of the process.
6. The method of claim 1, wherein the step of dividing processes into active processes or inactive processes based on the real load of the processes in the processor comprises:
if the real load of the process is larger than a preset load threshold value, determining the process as an active process;
otherwise, the process is determined to be an inactive process.
7. The method of claim 2, wherein the step of migrating the process to the other processor when the task hierarchical load of the process satisfies a preset condition comprises:
calculating a load imbalance value according to the current load value of the computing device;
judging whether the task layering load of the process is smaller than half of the load unbalance value;
and if so, migrating the process to other processors.
8. A process migration apparatus comprising:
the state determining module is suitable for dividing each process in the processor into an active process or an inactive process based on the real load of each process;
the judging module is suitable for judging whether the active process is the only active process in the processor;
the task hierarchical load determining module is suitable for sequentially determining task hierarchical loads of the processes; and
a migration module adapted to migrate a process to another processor when the task hierarchical load of the process satisfies a preset condition.
9. A computing device, comprising:
at least one processor; and
a memory storing program instructions configured for execution by the at least one processor, the program instructions comprising instructions for performing the method of any of claims 1-7.
10. A readable storage medium storing program instructions that, when read and executed by a computing device, cause the computing device to perform the method of any of claims 1-7.
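Claims 4, 5, and 7 each describe a small calculation. The sketch below shows one possible reading of them; the claims do not give exact formulas, so the fraction-of-time interpretation of real load and the load-difference interpretation of the imbalance value are assumptions introduced here.

```python
def real_load(working_time: float, non_working_time: float) -> float:
    """Claim 4: real load computed from a process's time in the working
    and non-working states. Interpreted here as the working-time fraction."""
    total = working_time + non_working_time
    return working_time / total if total else 0.0


def task_hierarchical_load(group_load: float, group_cpu_count: int) -> float:
    """Claim 5: ratio of the group's process load to the number of
    processors the group occupies."""
    return group_load / group_cpu_count


def should_migrate(task_load: float, src_load: float, dst_load: float) -> bool:
    """Claim 7: migrate when the task hierarchical load is smaller than
    half of the load imbalance value (the imbalance is taken here as the
    difference between the two processor loads, an assumption)."""
    imbalance = abs(src_load - dst_load)
    return task_load < imbalance / 2
```

For example, a process that spent 3 ms working and 1 ms idle has a real load of 0.75; a group load of 4.0 spread over 4 processors gives a task hierarchical load of 1.0.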
CN202110738697.2A 2021-06-30 2021-06-30 Process migration method and device, computing equipment and storage medium Pending CN113326140A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110738697.2A CN113326140A (en) 2021-06-30 2021-06-30 Process migration method and device, computing equipment and storage medium
PCT/CN2021/124293 WO2023273015A1 (en) 2021-06-30 2021-10-18 Process migration method and apparatus, computing device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110738697.2A CN113326140A (en) 2021-06-30 2021-06-30 Process migration method and device, computing equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113326140A true CN113326140A (en) 2021-08-31

Family

ID=77425256

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110738697.2A Pending CN113326140A (en) 2021-06-30 2021-06-30 Process migration method and device, computing equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113326140A (en)
WO (1) WO2023273015A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114942791A (en) * 2022-05-26 2022-08-26 统信软件技术有限公司 Process awakening method and device, computing device and readable storage medium
WO2023273015A1 (en) * 2021-06-30 2023-01-05 统信软件技术有限公司 Process migration method and apparatus, computing device, and storage medium
CN115857418A (en) * 2023-02-28 2023-03-28 深圳华龙讯达信息技术股份有限公司 Programmable logic control system based on coupling design

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116542707A (en) * 2023-03-14 2023-08-04 读书郎教育科技有限公司 Dynamic user layering method and system based on behavior data
CN117290075B (en) * 2023-11-23 2024-02-27 苏州元脑智能科技有限公司 Process migration method, system, device, communication equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102834807B (en) * 2011-04-18 2015-09-09 华为技术有限公司 The method and apparatus of multicomputer system load balancing
CN103729248B (en) * 2012-10-16 2017-12-15 华为技术有限公司 A kind of method and apparatus of determination based on cache perception task to be migrated
CN107196865B (en) * 2017-06-08 2020-07-24 中国民航大学 Load-aware adaptive threshold overload migration method
CN109766180B (en) * 2017-11-09 2023-01-17 阿里巴巴集团控股有限公司 Load balancing method and device, storage medium, computing equipment and computing system
US11216314B2 (en) * 2018-11-02 2022-01-04 EMC IP Holding Company LLC Dynamic reallocation of resources in accelerator-as-a-service computing environment
JP7234704B2 (en) * 2019-03-11 2023-03-08 富士通株式会社 Information processing device and information processing program
CN113326140A (en) * 2021-06-30 2021-08-31 统信软件技术有限公司 Process migration method and device, computing equipment and storage medium


Also Published As

Publication number Publication date
WO2023273015A1 (en) 2023-01-05

Similar Documents

Publication Publication Date Title
CN113326140A (en) Process migration method and device, computing equipment and storage medium
JP5583837B2 (en) Computer-implemented method, system and computer program for starting a task in a computer system
JP5040773B2 (en) Memory buffer allocation device and program
TWI494850B (en) Providing an asymmetric multicore processor system transparently to an operating system
US8898435B2 (en) Optimizing system throughput by automatically altering thread co-execution based on operating system directives
EP2437168B1 (en) Method and device for balancing load of multiprocessor system
US9098337B2 (en) Scheduling virtual central processing units of virtual machines among physical processing units
JP5305664B2 (en) Method, program and apparatus for trading resources between partitions of a data processing system
Jeon et al. TPC: Target-driven parallelism combining prediction and correction to reduce tail latency in interactive services
WO2012028213A1 (en) Re-scheduling workload in a hybrid computing environment
CN113553164B (en) Process migration method, computing device and storage medium
US20220414503A1 (en) Slo-aware artificial intelligence inference scheduler for heterogeneous processors in edge platforms
CN114461404B (en) Process migration method, computing device and readable storage medium
Aldhalaan et al. Analytic performance modeling and optimization of live VM migration
US20090320022A1 (en) File System Object Node Management
US8862786B2 (en) Program execution with improved power efficiency
JP5136658B2 (en) Virtual computer allocation method, allocation program, and information processing apparatus having virtual computer environment
CN112114967B (en) GPU resource reservation method based on service priority
CN114265677A (en) Scheduling method and device for load balancing and computing equipment
CN113515388A (en) Process scheduling method and device, computing equipment and readable storage medium
Wu et al. A selective mirrored task based fault tolerance mechanism for big data application using cloud
Chhabra et al. Qualitative Parametric Comparison of Load Balancing Algorithms in Distributed Computing Environment
Hu et al. An improved heterogeneous dynamic list schedule algorithm
CN113918527B (en) Scheduling method and device based on file cache and computing equipment
CN115373862B (en) Dynamic resource scheduling method, system and storage medium based on data center

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination