CN107800779B - Method and system for optimizing load balance - Google Patents

Method and system for optimizing load balance

Info

Publication number
CN107800779B
CN107800779B (application CN201710927691.3A)
Authority
CN
China
Prior art keywords
container
load balancing
identification information
task file
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710927691.3A
Other languages
Chinese (zh)
Other versions
CN107800779A (en)
Inventor
李国超
刘海锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201710927691.3A priority Critical patent/CN107800779B/en
Publication of CN107800779A publication Critical patent/CN107800779A/en
Application granted granted Critical
Publication of CN107800779B publication Critical patent/CN107800779B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083 Techniques for rebalancing the load in a distributed system
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0803 Configuration setting
    • H04L41/0823 Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present disclosure provides a method for optimizing load balancing, comprising: after the first container is started, adding identification information of a load balancing process in the first container to a first task file of a control group so as to associate the load balancing process with the first task file, wherein the load balancing process is used for distributing connection requests and/or data requests aiming at a host machine of the load balancing process to corresponding servers in a distributed architecture in a balanced manner; and after the second container is started, adding identification information of the management process in the second container into a second task file of the control group to associate the management process with the second task file, wherein the control group is used for isolating hardware resources used by different processes by associating the different processes with different task files. The present disclosure also provides a system for optimizing load balancing, a computer system and a computer readable storage medium.

Description

Method and system for optimizing load balance
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and a system for optimizing load balancing, a computer system, and a computer-readable storage medium.
Background
For a complete load balancing system (LoadBalance System), the host on which the system runs must also run a number of management processes, and these management processes exchange data with the load balancing (LoadBalance) process.
In the course of implementing the inventive concept, the inventors found at least the following defect in the related art: when the host runs under high load, the management processes on the host may compete with the LoadBalance process for hardware resources, causing performance jitter in the LoadBalance process.
Disclosure of Invention
In view of this, the present disclosure provides a method and a system for optimizing load balancing, which are capable of isolating a load balancing process and a management process in different containers to achieve the purpose of preventing the load balancing process and the management process from competing for hardware resources, thereby achieving the technical effect of avoiding performance jitter of the load balancing process.
One aspect of the present disclosure provides a method of optimizing load balancing, comprising: after a first container is started, adding identification information of a load balancing process in the first container to a first task file of a control group (CGroup), so as to associate the load balancing process with the first task file, wherein the load balancing process is used for distributing, in a balanced manner, connection requests and/or data requests directed at its host machine to corresponding servers in a distributed architecture; and after a second container is started, adding identification information of a management process in the second container to a second task file of the control group, so as to associate the management process with the second task file, wherein the control group is used for isolating hardware resources used by different processes by associating different processes with different task files.
According to an embodiment of the present disclosure, after the first container is started, the identification information of the load balancing process in the first container is added, by a Docker engine, to the first task file of the control group so as to associate the load balancing process with the first task file; and after the second container is started, the identification information of the management process in the second container is added, by the Docker engine, to the second task file of the control group so as to associate the management process with the second task file.
According to an embodiment of the present disclosure, adding, by the Docker engine, the identification information of the load balancing process in the first container to the first task file of the control group after the first container is started includes: after the Docker engine starts the first container, generating, by the Docker engine, a directory named with the identification information of the first container under each resource directory of the control group's related directory; storing the load balancing process in the directory named with the identification information of the first container; and writing, by the Docker engine, the identification information corresponding to the load balancing process stored in that directory into the first task file of the control group.
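The per-resource-directory flow described above can be sketched as follows. This is a minimal illustration rather than the patented implementation: it mimics a cgroup-v1 style layout inside a temporary directory instead of the real `/sys/fs/cgroup` (so it runs without root), and the container ID `c1f2e3` and PID `12345` are hypothetical values.

```python
import os
import tempfile

# Resource directories under the cgroup root; a real Docker host has more.
CONTROLLERS = ["cpu", "memory", "blkio"]

def register_container_process(cgroup_root, container_id, pid):
    """Under each resource directory, create a directory named after the
    container's identification information, then write the process's
    identification information into that directory's tasks file."""
    tasks_paths = []
    for controller in CONTROLLERS:
        container_dir = os.path.join(cgroup_root, controller, container_id)
        os.makedirs(container_dir, exist_ok=True)
        tasks_path = os.path.join(container_dir, "tasks")
        with open(tasks_path, "a") as tasks:
            tasks.write(f"{pid}\n")
        tasks_paths.append(tasks_path)
    return tasks_paths

# Temporary stand-in for /sys/fs/cgroup; container ID and PID are hypothetical.
root = tempfile.mkdtemp()
paths = register_container_process(root, "c1f2e3", 12345)
print(len(paths))  # → 3, one tasks file per resource controller
```

On a real Linux host, writing a PID into such a tasks file is what moves the process into the cgroup, after which the kernel enforces that group's resource limits on it.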
According to an embodiment of the present disclosure, after the second container is started and the identification information of the management process in the second container is added to the second task file of the control group, the method further includes: controlling the load balancing process and the management process to share a network protocol stack.
According to an embodiment of the present disclosure, controlling the load balancing process and the management process to share a network protocol stack includes: controlling the first container and the second container to use the same network entry.
According to an embodiment of the present disclosure, controlling the first container and the second container to use the same network entry includes: generating a sleep container as the network entry; and designating the network mode of both the first container and the second container as the sleep container, so that the first container and the second container use the sleep container as the same network entry.
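The sleep-container scheme above can be sketched in terms of the Docker CLI: start one container whose process simply sleeps, then point the other containers' network mode at it with `--net container:<name>`. The sketch below only composes the command lines (container names and image names are hypothetical) rather than executing them, so it stays self-contained.

```python
def docker_run_argv(name, image, network=None, command=None):
    """Build (but do not execute) a `docker run` argv list."""
    argv = ["docker", "run", "-d", "--name", name]
    if network is not None:
        argv += ["--net", network]
    argv.append(image)
    if command:
        argv += list(command)
    return argv

# 1. The sleep container: its only job is to hold the shared network stack,
#    so its process simply sleeps.
pause_cmd = docker_run_argv("net-entry", "alpine", command=["sleep", "infinity"])

# 2. Both functional containers are started with their network mode pointed
#    at the sleep container, so they share one protocol stack and one entry.
lb_cmd = docker_run_argv("loadbalance", "lb-image", network="container:net-entry")
agent_cmd = docker_run_argv("agent", "agent-image", network="container:net-entry")

print(" ".join(lb_cmd))
```

Because both functional containers join the sleep container's network namespace, restarting either of them does not tear down the shared namespace; it lives as long as the sleep container does.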
According to an embodiment of the present disclosure, after generating the sleep container as the network entry, the method further includes: generating a network namespace corresponding to the sleep container, wherein the network namespace is used for isolating network-related resources.
According to an embodiment of the present disclosure, the hardware resources at least include one or more of the following resources: CPU, internal memory and IO interface.
According to an embodiment of the present disclosure, the second container includes at least one; the management process at least comprises one or more of the following processes: a driving process, a proxy process and a reporting process; and storing each process in the management processes into one container in the second containers respectively.
Another aspect of the present disclosure provides a system for optimizing load balancing, comprising: a first adding module, configured to add, after a first container is started, identification information of a load balancing process in the first container to a first task file of a control group, so as to associate the load balancing process with the first task file, where the load balancing process is configured to distribute, in a balanced manner, connection requests and/or data requests directed at its host machine to corresponding servers in a distributed architecture; and a second adding module, configured to add, after the second container is started, identification information of the management process in the second container to a second task file of the control group, so as to associate the management process with the second task file, where the control group is configured to isolate hardware resources used by different processes by associating different processes with different task files.
According to an embodiment of the present disclosure, the first adding module is further configured to add, by a Docker engine, the identification information of the load balancing process in the first container to the first task file of the control group after the first container is started, so as to associate the load balancing process with the first task file; and the second adding module is further configured to add, by the Docker engine, the identification information of the management process in the second container to the second task file of the control group after the second container is started, so as to associate the management process with the second task file.
According to an embodiment of the present disclosure, the first adding module includes: a first generating unit, configured to generate, by the Docker engine, a directory named with the identification information of the first container under each resource directory of the control group's related directory after the Docker engine starts the first container; a storage unit, configured to store the load balancing process in the directory named with the identification information of the first container; and a write operation unit, configured to write, by the Docker engine, the identification information corresponding to the load balancing process stored in that directory into the first task file of the control group.
According to an embodiment of the present disclosure, the above system further includes: and the control module is used for controlling the load balancing process and the management process to share the network protocol stack after the second container is started and the identification information of the management process in the second container is added into the second task file of the control group.
According to an embodiment of the present disclosure, the control module is further configured to: control the first container and the second container to use the same network entry.
According to an embodiment of the present disclosure, the control module includes: a second generating unit, configured to generate a sleep container as the network entry; and a defining unit, configured to designate the network mode of both the first container and the second container as the sleep container, so that the first container and the second container use the sleep container as the same network entry.
According to an embodiment of the present disclosure, the above system further includes: a generating module, configured to generate a network namespace corresponding to the sleep container after generating the sleep container as the network entry, where the network namespace is used to isolate network-related resources.
According to an embodiment of the present disclosure, the hardware resources at least include one or more of the following resources: CPU, internal memory and IO interface.
According to an embodiment of the present disclosure, the second container includes at least one; the management process at least comprises one or more of the following processes: a driving process, a proxy process and a reporting process; and storing each process in the management processes into one container in the second containers respectively.
Another aspect of the present disclosure provides a computer system comprising: one or more processors; a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method for optimizing load balancing of any of the above embodiments.
Another aspect of the present disclosure provides a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to implement the method of optimizing load balancing as described in any of the above embodiments.
According to the embodiment of the disclosure, because the technical means of isolating the load balancing process and the management process in different containers is adopted, the technical problem that the load balancing process and the management process which are not isolated in the related technology compete for hardware resources is at least partially solved, and then the performance of the load balancing process can be optimized to achieve the technical effect of avoiding the intermittent performance jitter of the load balancing process.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates a system architecture to which the method and system of optimizing load balancing may be applied, according to an embodiment of the present disclosure;
FIG. 2A schematically illustrates a flow diagram of a method of optimizing load balancing according to an embodiment of the present disclosure;
FIG. 2B schematically shows a schematic diagram of a load balancing method according to the related art;
FIG. 2C schematically illustrates a schematic diagram of a load balancing method according to another related art;
FIG. 3A schematically illustrates a flow diagram of a method of optimizing load balancing according to another embodiment of the present disclosure;
FIG. 3B schematically illustrates a schematic diagram of optimizing load balancing according to an embodiment of the present disclosure;
FIG. 3C schematically illustrates a flow diagram for adding identification information of a load balancing process in a first container to a first task file of a control group by a Docker engine after the first container is started, according to an embodiment of the disclosure;
FIG. 3D schematically illustrates a flow diagram of a method of optimizing load balancing according to another embodiment of the present disclosure;
FIG. 3E schematically illustrates a flow diagram of a method of optimizing load balancing according to another embodiment of the present disclosure;
FIG. 3F schematically illustrates a flow chart for controlling a first container and a second container to use the same network entry, in accordance with an embodiment of the disclosure;
FIG. 3G schematically illustrates a schematic diagram of implementing different containers using the same network entry, in accordance with an embodiment of the present disclosure;
FIG. 3H schematically illustrates a flow chart for controlling a first container and a second container to use the same network entry, according to another embodiment of the present disclosure;
FIG. 4 schematically illustrates a block diagram of a system for optimizing load balancing, in accordance with an embodiment of the present disclosure;
FIG. 5A schematically illustrates a block diagram of a second add module according to an embodiment of the present disclosure;
FIG. 5B schematically illustrates a block diagram of a system that optimizes load balancing according to another embodiment of the present disclosure;
FIG. 5C schematically illustrates a block diagram of a control module according to an embodiment of the disclosure;
FIG. 5D schematically illustrates a block diagram of a system that optimizes load balancing according to another embodiment of the present disclosure; and
FIG. 6 schematically illustrates a block diagram of a computer system suitable for implementing a method of optimizing load balancing according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The singular forms "a", "an" and "the" as used herein are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, the terms "comprises", "comprising", and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase "A or B" should be understood to include the possibility of "A", or "B", or "A and B".
Embodiments of the present disclosure provide a method for optimizing load balancing for preventing performance jitter of a load balancing process by putting the load balancing process and a management process into different containers for isolation, and a system for optimizing load balancing to which the method can be applied. After the first container is started, adding identification information of a load balancing process in the first container to a first task file of a control group so as to associate the load balancing process with the first task file, wherein the load balancing process is used for distributing connection requests and/or data requests aiming at a host machine of the load balancing process to corresponding servers in a distributed architecture in a balanced manner; and after the second container is started, adding identification information of the management process in the second container into a second task file of the control group to associate the management process with the second task file, wherein the control group is used for isolating hardware resources used by different processes by associating the different processes with different task files.
Fig. 1 schematically illustrates a system architecture to which the method and system of optimizing load balancing may be applied, according to an embodiment of the present disclosure.
As shown in fig. 1, the system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104 and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have installed thereon various communication client applications, such as shopping-like applications, web browser applications, search-like applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only).
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (for example only) providing support for websites browsed by users using the terminal devices 101, 102, 103. The background management server may analyze and perform other processing on the received data such as the user request, and feed back a processing result (e.g., a webpage, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that the method for optimizing load balancing provided by the embodiments of the present disclosure may be generally performed by the server 105. Accordingly, the system for optimizing load balancing provided by the embodiments of the present disclosure may be generally disposed in the server 105. The method for optimizing load balancing provided by the embodiments of the present disclosure may also be performed by a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the system for optimizing load balancing provided by the embodiment of the present disclosure may also be disposed in a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2A schematically illustrates a flow chart of a method of optimizing load balancing according to an embodiment of the present disclosure.
As shown in fig. 2A, the method for optimizing load balancing may include operations S201 to S202, where:
in operation S201, after the first container is started, the identification information of the load balancing process in the first container is added to the first task file of the control group to associate the load balancing process with the first task file, where the load balancing process is used to distribute connection requests and/or data requests for its hosts to corresponding servers in the distributed architecture in a balanced manner.
It should be noted that, in general, a container (which may be a virtualized container) can be started once its creation is complete. A Control Group (CGroup) can associate one of its task files (tasks, such as a first task file, tasks1) with a subsystem process (such as the load balancing process) so as to limit the hardware resources that process uses. Therefore, after the first container is started, in order to limit the hardware resources used by the load balancing process, the load balancing process can be found in the first container, its identification information determined, and that identification information automatically written into a task file of the control group, associating the task file with the load balancing process and ultimately limiting the hardware resources the load balancing process uses.
It should be understood that associating the task file with the load balancing process, that is, limiting the hardware resources used by the load balancing process, is how the allocation of hardware resources to the load balancing process is achieved.
It should be noted that the hardware resource at least includes one or more of a CPU, a memory, and an IO interface, which is not limited herein.
For example, suppose the identification information PID1 of the load balancing process is "12345". After the first container is started, the load balancing process is found in the first container, its identification information "12345" is determined, and "12345" is then automatically written into the first task file tasks1 of the control group, so as to associate the load balancing process with tasks1; that is, the control group limits the hardware resources used by the load balancing process whose identification information PID1 is "12345".
In operation S202, after the second container is started, the identification information of the management process in the second container is added to the second task file of the control group to associate the management process with the second task file. The control group is used to isolate the hardware resources used by different processes by associating different processes with different task files.
It should be noted that the management process includes at least one or more of a driver process (Driver Process), an agent process (Agent Process), and a report process (Report Process), without limitation here. In addition, in the disclosed embodiments, there may be one or more second containers. If there is only a single second container, all management processes may be placed in it; in this case no resource limits are imposed between the management processes, so they may still compete with one another for hardware resources. If there are multiple second containers, different management processes may be placed in different second containers; in this case resource limits are imposed between the management processes, so they do not contend with one another for hardware resources.
For example, when the management process includes an agent process whose identification information PID2 is "12346", after the second container is started, the agent process is found in the second container, its identification information "12346" is determined, and "12346" is then automatically written into the second task file tasks2 of the control group, so as to associate the agent process with tasks2; that is, the control group limits the hardware resources used by the agent process whose identification information PID2 is "12346".
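The two examples above can be sketched together as follows. This is a minimal illustration, not the patented implementation: it uses a temporary directory as a stand-in for a controller directory such as `/sys/fs/cgroup/cpu` (so it runs without root), with the PIDs 12345 and 12346 taken from the examples above.

```python
import os
import tempfile

def add_pid_to_tasks(group_dir, pid):
    """Append a PID to the tasks file of a cgroup directory, creating the
    directory first if necessary."""
    os.makedirs(group_dir, exist_ok=True)
    with open(os.path.join(group_dir, "tasks"), "a") as tasks:
        tasks.write(f"{pid}\n")

# Temporary stand-in for a controller directory such as /sys/fs/cgroup/cpu.
cpu_root = tempfile.mkdtemp()

# First container: load balancing process (PID1 = 12345) goes into tasks1.
add_pid_to_tasks(os.path.join(cpu_root, "container1"), 12345)
# Second container: agent process (PID2 = 12346) goes into tasks2.
add_pid_to_tasks(os.path.join(cpu_root, "container2"), 12346)

tasks1 = open(os.path.join(cpu_root, "container1", "tasks")).read().split()
tasks2 = open(os.path.join(cpu_root, "container2", "tasks")).read().split()
print(tasks1, tasks2)  # the two processes sit in different task files
```

Because the two PIDs end up in different task files, the kernel applies each group's resource limits independently, which is exactly the isolation the embodiment relies on.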
Unlike the technical solutions provided in the embodiments of the present disclosure, mainstream software-based load balancing in the related art is currently implemented by reverse proxying, as follows: when a request is sent from outside, the host where the load balancing process is located receives the request and forwards it to a back-end server; the back-end server handles the request and returns the corresponding result, which is finally sent back to the client that issued the request. Meanwhile, a management process running on the host performs data interaction with the load balancing process, forming a complete load balancing system. As shown in fig. 2B, in one related solution, since the load balancing process and the management process running on the host are not resource-limited, the load balancing process may jitter intermittently under high host load due to resource contention. As shown in fig. 2C, in another related solution, although the load balancing process and the management process are associated with different task files, the user must manually write the identification information of the load balancing process into one task file of the control group and the identification information of the management process into another, making the native control group inconvenient and tedious to use.
In the embodiment of the present disclosure, because the load balancing process and the management process are stored in different containers, and the identification information of the load balancing process and the identification information of the management process are automatically written into the task file corresponding to the control group, the isolation of the hardware resources used by the load balancing process and the management process is achieved, the competition of different processes for the same hardware resources is avoided, and the purpose of preventing the performance of the load balancing process from jittering is achieved.
The method shown in fig. 2A is further described with reference to fig. 3A-3H in conjunction with specific embodiments.
Fig. 3A schematically illustrates a flow chart of a method of optimizing load balancing according to another embodiment of the present disclosure.
In this embodiment, the method for optimizing load balancing includes operations S201 to S202 described above with reference to fig. 2A, where operation S201 may be replaced with operation S301, and operation S202 may be replaced with operation S302. For simplicity of description, the description of operations S201 to S202 in fig. 2A is omitted here. As shown in fig. 3A, wherein:
in operation S301, after the first container is started, the identification information of the load balancing process in the first container is added to the first task file of the control group through the Docker engine, so as to associate the load balancing process with the first task file;
in operation S302, after the second container is started, the identification information of the management process in the second container is added to the second task file of the control group through the Docker engine to associate the management process with the second task file.
It should be noted that the container may be started in various ways, which are not limited herein; for example, the container may be started by a Docker engine. Specifically, the Docker engine may utilize Linux kernel features such as namespaces and control groups (Linux being an open-source, POSIX-compliant, UNIX-like operating system supporting multiple users, multitasking, multithreading, and multiple CPUs) to create a lightweight, portable, and resource-isolated virtualized container for the application.
For example, as shown in fig. 3B, after the containers are started by the Docker engine, the load balancing process and the management processes (e.g., an agent process and a reporting process) are placed in the corresponding first container (e.g., a load balancing container (LoadBalance Container)) and second containers (e.g., an agent container (Agent Container) and a reporting container (Report Container)). That is, the load balancing process, the agent process, and the reporting process are respectively stored in the load balancing container, the agent container, and the reporting container, and their identification information is determined in turn as identification information PID1, identification information PID2, and identification information PID3. The Docker engine then automatically writes PID1 into a first task file tasks1 of the control group, and writes PID2 and PID3 into a second task file tasks2 of the control group, thereby isolating the hardware resources used by the different processes.
Through the embodiment of the disclosure, the Docker engine generates the virtualization container, and different processes are respectively put into different containers to run, so that the situation that different processes compete for the same hardware resource is avoided, and further, the performance jitter of the load balancing process can be prevented.
Fig. 3C schematically illustrates a flowchart for adding identification information of a load balancing process in a first container to a first task file of a control group by a Docker engine after the first container is started, according to an embodiment of the disclosure.
In this embodiment, the method of optimizing load balancing may include operations S401 to S403 (i.e., operation S301 may include operations S401 to S403) in addition to operations S201 to S202 and S302 described above with reference to fig. 2A and 3A. For simplicity of description, the description of operations S201 to S202 and S302 in fig. 2A and 3A is omitted here. As shown in fig. 3C, wherein:
in operation S401, after the Docker engine starts the first container, a directory named by the identification information of the first container is generated under each resource directory under the related directory of the control group by the Docker engine;
in operation S402, storing a load balancing process in a directory named by identification information of the first container; and
in operation S403, identification information corresponding to the load balancing process stored in the directory named by the identification information of the first container is written into the first task file of the control group by the Docker engine.
It should be noted that, in the embodiment of the present disclosure, the relevant directory of the control group may be a "/sys/fs/cgroup directory", for example, after the Docker engine starts the first container, the Docker engine generates a directory related to the container under each resource directory under the "/sys/fs/cgroup directory", where the generated directory may be named by identification information of the first container, stores a load balancing process into the container, determines identification information of the load balancing process, and then automatically writes the determined identification information into one task file (e.g., the first task file) of the control group through the Docker engine to associate the task file with the load balancing process, so as to finally achieve the purpose of limiting hardware resources used by the load balancing process.
For example, the identification information ID1 of the first container (which may be referred to as a load balancing container) is "111", after the first container is started by the Docker engine, the identification information "111" of the first container is determined, then the Docker engine generates a new directory named "111" under each resource directory under the/sys/fs/cgroup directory, and at the same time, the Docker engine automatically stores the load balancing process into the directory named "111", further determines the identification information PID1 (e.g., "12345") of the load balancing process, and automatically writes the identification information "12345" into the first task file of the control group, thereby isolating the load balancing process from other processes.
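Assuming a cgroup v1 style layout, the directory-per-container behavior attributed to the Docker engine above can be illustrated with a small Python sketch run against a temporary directory standing in for `/sys/fs/cgroup`; the controller names, container ID "111", and PID "12345" follow the example, while everything else is illustrative:

```python
import os
import tempfile

RESOURCES = ["cpu", "memory", "blkio"]  # illustrative resource controllers

def create_container_cgroups(cgroup_root: str, container_id: str, pid: int) -> None:
    """Mimic what the Docker engine is described as doing: under every
    resource directory, create a directory named after the container's
    identification information and write the contained process's
    identification information (PID) into its tasks file."""
    for res in RESOURCES:
        d = os.path.join(cgroup_root, res, container_id)
        os.makedirs(d, exist_ok=True)
        with open(os.path.join(d, "tasks"), "w") as f:
            f.write(f"{pid}\n")

# Temporary stand-in for /sys/fs/cgroup (the real tree needs root to modify).
root = tempfile.mkdtemp()
for res in RESOURCES:
    os.makedirs(os.path.join(root, res))

create_container_cgroups(root, "111", 12345)

print(sorted(os.listdir(os.path.join(root, "cpu"))))  # ['111']
with open(os.path.join(root, "cpu", "111", "tasks")) as f:
    print(f.read().strip())  # 12345
```

The per-controller directory named "111" plays the role of the generated directory in operation S401, and writing the PID plays the role of operation S403.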
It should be understood that, after the second container is started, and the Docker engine adds the identification information of the management process in the second container to the second task file of the control group, the execution manner is similar to operations S401 to S403, and details are not described here again.
Through the embodiment of the disclosure, different processes are put into different containers to run, so that the purpose that the load balancing process and other management processes cannot compete for the same hardware resource is achieved, and the technical effect of preventing the performance of the load balancing process from jittering is achieved.
Fig. 3D schematically illustrates a flow diagram of a method of optimizing load balancing according to another embodiment of the present disclosure.
In the above embodiment, the load balancing process and the management process are isolated directly by the containers and the task files of the control group. This isolates not only the hardware resources used by the load balancing process from those used by the management process, but also the network protocol stacks they use; as a result, communication that could originally be performed locally through a shared network protocol stack must instead be routed externally, which both wastes resources and degrades the timeliness of communication. In the preferred embodiment, the method of optimizing load balancing may further include operation S501 in addition to operations S201 to S202 described above with reference to fig. 2A. For simplicity of description, the description of operations S201 to S202 in fig. 2A is omitted here. As shown in fig. 3D, wherein:
in operation S501, the control load balancing process and the management process share a network protocol stack.
It should be noted that the manner of controlling the load balancing process and the management process to share the network protocol stack (i.e., the sum of the protocols of the layers in the network) may include various manners, which are not limited herein. For example, it can be realized by means of Linux namespaces.
In the embodiment of the present disclosure, six types of namespaces are available in Linux, where each namespace wraps an abstract set of global system resources. For sharing the network protocol stack, a network namespace (Network Namespace) is used, where the network namespace serves to isolate resources related to the network. Typically, each generated virtualized container has its own independent network namespace.
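On Linux, whether two processes share a network protocol stack can be observed concretely by comparing their `/proc/<pid>/ns/net` links, which identify the network namespace each process lives in. The following sketch (Linux-only, and an illustration rather than part of the patented method) shows that an ordinarily spawned child inherits its parent's network namespace, so the links match:

```python
import os
import subprocess
import sys

# /proc/<pid>/ns/net is a symbolic link whose target names the network
# namespace; two processes share one network protocol stack exactly
# when these link targets are identical.
parent_ns = os.readlink(f"/proc/{os.getpid()}/ns/net")

# A child spawned without requesting a new namespace inherits the
# parent's network namespace.
child_ns = subprocess.check_output(
    [sys.executable, "-c",
     "import os; print(os.readlink('/proc/self/ns/net'))"],
    text=True,
).strip()

print(parent_ns == child_ns)  # True
```

A container given its own network namespace would instead show a different link target, which is why two independently created containers cannot talk over a shared local stack.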
Through the embodiment of the disclosure, the load balancing process and the management process are controlled to share the network protocol stack, so that while the hardware resources they use remain isolated, the two processes can still communicate locally on their host machine, thereby achieving the purposes of fully utilizing internal resources and improving the timeliness of communication.
Fig. 3E schematically illustrates a flow chart of a method of optimizing load balancing according to another embodiment of the present disclosure.
In this embodiment, the method for optimizing load balancing may further include operation S601 in addition to operations S201 to S202 and S501 described above with reference to fig. 2A and 3D, and operation S501 described in fig. 3D. For simplicity of description, the description of operations S201 to S202 and S501 in fig. 2A and 3D is omitted here. As shown in fig. 3E, wherein:
in operation S601, the first container and the second container are controlled to use the same network entry.
It should be noted that the manner of controlling the first container and the second container to use the same network entry may include multiple ways and is not limited herein. Specifically, the network mode of the first container and the second container may be set to the container corresponding to the network entry.
Through the embodiment of the disclosure, the first container and the second container are controlled to use the same network entry, so that the load balancing process and the management process can communicate with each other locally on the host machine through that network entry, further achieving the technical effect of preventing the performance of the load balancing process from jittering.
Fig. 3F schematically illustrates a flow chart for controlling a first container and a second container to use the same network entry, according to an embodiment of the disclosure.
In this embodiment, the method for optimizing load balancing may include operations S701 to S702 in addition to operations S201 to S202 and S601 described above with reference to fig. 2A and 3E (i.e., operation S601 described in fig. 3E may include operations S701 to S702). The description of operations S201 to S202 and S601 is omitted here for the sake of brevity of description. As shown in fig. 3F, wherein:
in operation S701, generating a hibernation container as a network entry;
in operation S702, the network mode of both the first container and the second container is designated as the dormant container, so that the first container and the second container use the dormant container as the same network entry.
It should be noted that, the manner of generating the sleep container as the network entry may include various manners, and is not limited herein. For example, a virtualized container may be generated by the Docker engine as the hibernation container.
In an embodiment of the present disclosure, the hibernation container (do_nothing_container) may be understood as a container that does nothing; after the hibernation container is generated, it is used as the network entry of the first container and the second container. If the dormant container is denoted as "container1", then designating the network mode of the first container and the second container as "container1", as shown in fig. 3G, achieves the purpose of controlling the first container and the second container to use the dormant container as the same network entry.
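The "network mode points at the sleep container" arrangement can be modeled with a toy resolver: containers whose network mode is "container:<other>" end up in the namespace owned by that other container. This is a simplified illustration of the idea behind Docker's container network mode, not Docker's actual implementation; the container names and namespace identifier below are made up:

```python
def resolve_netns(containers: dict, name: str) -> str:
    """Follow 'container:<other>' network modes until reaching a
    container that owns its own network namespace, and return that
    namespace's identifier."""
    mode = containers[name]["net_mode"]
    if mode.startswith("container:"):
        return resolve_netns(containers, mode.split(":", 1)[1])
    return containers[name]["netns"]

# The do-nothing sleep container owns the namespace; the load balancing
# container and the agent container point their network mode at it.
containers = {
    "container1":            {"net_mode": "default", "netns": "netns-A"},
    "loadbalance_container": {"net_mode": "container:container1", "netns": None},
    "agent_container":       {"net_mode": "container:container1", "netns": None},
}

print(resolve_netns(containers, "loadbalance_container"))  # netns-A
print(resolve_netns(containers, "agent_container"))        # netns-A
```

Because both resolve to the sleep container's namespace, the load balancing process and the management process share one network protocol stack while remaining in separate containers for resource accounting.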
By the embodiment of the disclosure, the first container and the second container are controlled to use the same network entry, so that the load balancing process and the management process share a network protocol stack while keeping the hardware resources they each use isolated, thereby achieving the purpose of preventing the performance of the load balancing process from jittering.
Fig. 3H schematically illustrates a flow diagram of a method of optimizing load balancing according to another embodiment of the present disclosure.
In this embodiment, the method of optimizing load balancing may include operation S801 in addition to operations S201 to S202 and S701 to S702 described above with reference to fig. 2A and 3F. For simplicity of description, the description of operations S201 to S202 and S701 to S702 is omitted here. As shown in fig. 3H, wherein:
in operation S801, a network namespace corresponding to a hibernation container is generated. Wherein the network namespace is used to isolate resources related to the network.
It should be noted that the network namespace includes at least one or more of a network device, an IP address, a routing table, and a port number, which is not limited herein.
In the embodiment of the disclosure, after the hibernation container is generated, the Docker engine may automatically generate the network namespace corresponding to the hibernation container, in other words, the first container and the second container may finally use the same network namespace, so as to implement local communication on the host.
Through the embodiment of the disclosure, the first container and the second container are designated to use the same network namespace, so that the load balancing process and the management process share the network protocol stack while the hardware resources they use remain isolated, thereby achieving the technical effect of preventing performance jitter of the load balancing process.
According to an embodiment of the present disclosure, the hardware resources at least include one or more of the following resources: CPU, internal memory and IO interface.
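As a sketch of how such per-container limits on CPU and memory might be expressed, the following writes cgroup v1 style control files (`cpu.shares`, `memory.limit_in_bytes`) into a temporary directory standing in for `/sys/fs/cgroup`; the file names follow the classic v1 interface, and the container ID and limit values are illustrative:

```python
import os
import tempfile

def limit_container(cgroup_root: str, container_id: str,
                    cpu_shares: int, mem_bytes: int) -> None:
    """Restrict CPU weight and memory for one container's control
    group by writing the v1-style control files under the cpu and
    memory resource directories."""
    cpu_dir = os.path.join(cgroup_root, "cpu", container_id)
    mem_dir = os.path.join(cgroup_root, "memory", container_id)
    os.makedirs(cpu_dir, exist_ok=True)
    os.makedirs(mem_dir, exist_ok=True)
    with open(os.path.join(cpu_dir, "cpu.shares"), "w") as f:
        f.write(str(cpu_shares))
    with open(os.path.join(mem_dir, "memory.limit_in_bytes"), "w") as f:
        f.write(str(mem_bytes))

root = tempfile.mkdtemp()  # stand-in for /sys/fs/cgroup (real tree needs root)
limit_container(root, "111", cpu_shares=512, mem_bytes=256 * 1024 * 1024)

with open(os.path.join(root, "cpu", "111", "cpu.shares")) as f:
    print(f.read())  # 512
```

Processes whose PIDs sit in the corresponding tasks files are then held to these limits, which is how the control group keeps the load balancing process and the management processes from contending for the same hardware resources.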
According to an embodiment of the present disclosure, there is at least one second container, and the management process includes at least one or more of the following processes: a driver process, an agent process, and a reporting process, where each process in the management process is stored in one of the second containers.
In the embodiment of the disclosure, each management process is respectively stored in different virtualization containers, so that on the premise of sharing a network protocol stack among different processes, hardware resources among different processes are isolated, intermittent performance jitter of load balancing is further prevented, and the performance of load balancing is optimized.
Fig. 4 schematically illustrates a block diagram of a system for optimizing load balancing according to an embodiment of the present disclosure.
In this embodiment, the system 400 for optimizing load balancing may include a first adding module 410, configured to add identification information of the load balancing process in the first container to the first task file of the control group after the first container is started, so as to associate the load balancing process with the first task file, wherein the load balancing process is used to distribute connection requests and/or data requests for its host to the corresponding servers in the distributed architecture in a balanced manner; and a second adding module 420, configured to add identification information of the management process in the second container to a second task file of the control group after the second container is started, so as to associate the management process with the second task file, wherein the control group is used to isolate hardware resources used by different processes by associating different processes with different task files.
According to the embodiment of the disclosure, because the load balancing process and the management process are stored in different containers, and the identification information of the load balancing process and the identification information of the management process are automatically and respectively written into the task files corresponding to the control groups, the isolation of the hardware resources used by the load balancing process and the management process is realized, the competition of different processes for the same hardware resources is avoided, and the purpose of preventing the performance of the load balancing process from jittering is achieved.
According to the embodiment of the disclosure, the first adding module is further configured to add, by the Docker engine, identification information of the load balancing process in the first container to the first task file of the control group after the first container is started, so as to associate the load balancing process with the first task file, and the second adding module is further configured to add, by the Docker engine, identification information of the management process in the second container to the second task file of the control group after the second container is started, so as to associate the management process with the second task file.
According to the embodiment of the disclosure, the Docker engine generates the virtualization container, and different processes are respectively put into different containers to run, so that different processes are prevented from competing for the same hardware resource, and further, the performance jitter of the load balancing process can be prevented.
FIG. 5A schematically illustrates a block diagram of a second add module according to an embodiment of the present disclosure;
in this embodiment, the system 400 for optimizing load balancing may further include a first generation unit 421, a storage unit 422, and a write operation unit 423 in addition to the respective modules described above with reference to fig. 4. The description of the corresponding blocks in fig. 4 is omitted here for the sake of brevity of description. As shown in fig. 5A, wherein: the second adding module 420 may include a first generating unit 421 configured to generate, by the Docker engine, a directory with the identification information of the first container as a name under each resource directory under the related directory of the control group after the Docker engine starts the first container, a storage unit 422 configured to store a load balancing process in the directory with the identification information of the first container as a name, and a write operation unit 423 configured to write, by the Docker engine, the identification information corresponding to the load balancing process stored in the directory with the identification information of the first container as a name into the first task file of the control group.
Through the embodiment of the disclosure, different processes are put into different containers to run, so that the purpose that the load balancing process and other management processes cannot compete for the same hardware resource is achieved, and the technical effect of preventing the performance of the load balancing process from jittering is achieved.
FIG. 5B schematically illustrates a block diagram of a system that optimizes load balancing according to another embodiment of the present disclosure;
in this embodiment, the system 400 for optimizing load balancing may include a control module 510 in addition to the corresponding modules described above with reference to FIG. 4. The description of the corresponding blocks in fig. 4 is omitted here for the sake of brevity of description. As shown in fig. 5B, wherein: the control module 510 is configured to, after the second container is started and the identification information of the management process in the second container is added to the second task file of the control group, control the load balancing process and the management process to share the network protocol stack.
Through the embodiment of the disclosure, the load balancing process and the management process are controlled to share the network protocol stack, so that the hardware resources used by the load balancing process and the management process are isolated, and the load balancing process and the management process can be ensured to perform local communication on a host machine of the load balancing process and the management process, thereby achieving the purposes of fully utilizing internal resources and improving the timeliness of communication.
According to the embodiment of the disclosure, the control module is further configured to control the first container and the second container to use the same network portal.
Through the embodiment of the disclosure, the first container and the second container are controlled to use the same network entry, so that the load balancing process and the management process can communicate with each other locally on the host machine through that network entry, further achieving the technical effect of preventing the performance of the load balancing process from jittering.
FIG. 5C schematically illustrates a block diagram of a control module according to an embodiment of the disclosure;
in this embodiment, the system 400 for optimizing load balancing may further include a second generating unit 511 and a defining unit 512 in addition to the respective modules described above with reference to fig. 4 and 5B. For the sake of simplicity of description, descriptions of the corresponding blocks in fig. 4 and 5B are omitted here. As shown in fig. 5C, wherein: the control module 510 may include a second generating unit 511 configured to generate a dormant container as a network portal, and a defining unit 512 configured to simultaneously designate a network mode of the first container and the second container as the dormant container, so that the first container and the second container use the dormant container as the same network portal.
By the embodiment of the disclosure, the first container and the second container are controlled to use the same network entry, so that the load balancing process and the management process share a network protocol stack while keeping the hardware resources they each use isolated, thereby achieving the purpose of preventing the performance of the load balancing process from jittering.
FIG. 5D schematically illustrates a block diagram of a system that optimizes load balancing according to another embodiment of the present disclosure;
in this embodiment, the system 400 for optimizing load balancing may include a generation module 610 in addition to the respective modules described above with reference to fig. 4 and 5C. For the sake of simplicity of description, descriptions of the corresponding blocks in fig. 4 and 5C are omitted here. As shown in fig. 5D, wherein: a generating module 610, configured to generate a network namespace corresponding to a hibernation container after the hibernation container serving as a network entry is generated, where the network namespace is used for isolating resources related to a network.
Through the embodiment of the disclosure, the first container and the second container are designated to use the same network namespace, so that the load balancing process and the management process share the network protocol stack while the hardware resources they use remain isolated, thereby achieving the technical effect of preventing performance jitter of the load balancing process.
According to an embodiment of the present disclosure, the hardware resources at least include one or more of the following resources: CPU, internal memory and IO interface.
According to an embodiment of the present disclosure, there is at least one second container, and the management process includes at least one or more of the following processes: a driver process, an agent process, and a reporting process, where each process in the management process is stored in one of the second containers.
In the embodiment of the disclosure, each management process is respectively stored in different virtualization containers, so that on the premise of sharing a network protocol stack among different processes, hardware resources among different processes are isolated, intermittent performance jitter of load balancing is further prevented, and the performance of load balancing is optimized.
FIG. 6 schematically illustrates a block diagram of a computer system suitable for implementing a method of optimizing load balancing according to an embodiment of the present disclosure. The computer system illustrated in FIG. 6 is only one example and should not impose any limitations on the scope of use or functionality of embodiments of the disclosure.
As shown in fig. 6, the computer system 600 according to the embodiment of the present disclosure includes a processor 701, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage section 708 into a random access memory (RAM) 703. The processor 701 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an application specific integrated circuit (ASIC)), among others. The processor 701 may also include on-board memory for caching purposes. The processor 701 may include a single processing unit or multiple processing units for performing different actions of the method flows described with reference to figs. 2A and 3A to 3H in accordance with embodiments of the disclosure.
In the RAM 703, various programs and data necessary for the operation of the computer system 600 are stored. The processor 701, the ROM702, and the RAM 703 are connected to each other by a bus 704. The processor 701 performs various operations described above with reference to fig. 2A, 3A to 3H by executing programs in the ROM702 and/or the RAM 703. It is noted that the programs may also be stored in one or more memories other than the ROM702 and RAM 703. The processor 701 may also perform the various operations described above with reference to fig. 2A, 3A-3H by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the computer system 600 may also include an input/output (I/O) interface 705, the input/output (I/O) interface 705 also being connected to the bus 704. The computer system 600 may also include one or more of the following components connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
According to an embodiment of the present disclosure, the method described above with reference to the flow chart may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program, when executed by the processor 701, performs the above-described functions defined in the system of the embodiment of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
It should be noted that the computer readable media shown in the present disclosure may be computer readable signal media or computer readable storage media or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing. 
According to embodiments of the present disclosure, a computer-readable medium may include the ROM702 and/or the RAM 703 and/or one or more memories other than the ROM702 and the RAM 703 described above.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
As another aspect, the present disclosure also provides a computer-readable medium on which executable instructions are stored; when executed by the processor 701, the instructions cause the processor 701 to implement the method for optimizing load balancing according to any one of the above method embodiments. The computer readable medium may be contained in the apparatus described in the above embodiments, or may exist separately without being incorporated into the apparatus. The computer readable medium carries one or more programs which, when executed by a device, cause the device to perform: after a first container is started, adding identification information of a load balancing process in the first container to a first task file of a control group so as to associate the load balancing process with the first task file, wherein the load balancing process is used for evenly distributing connection requests and/or data requests directed at a host machine of the load balancing process to corresponding servers in a distributed architecture; and after a second container is started, adding identification information of a management process in the second container to a second task file of the control group so as to associate the management process with the second task file, wherein the control group is used for isolating hardware resources used by different processes by associating the different processes with different task files.
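The process-to-task-file association described above can be illustrated with a minimal sketch. This is not the patented implementation: it assumes a cgroup-v1-style layout (a `tasks` file per group) simulated under a temporary directory, and all paths, group names, and PIDs are illustrative.

```python
import os
import tempfile

def add_pid_to_task_file(cgroup_root, controller, group_name, pid):
    """Associate a process with a control group by appending its PID
    to the group's tasks file (cgroup v1 convention)."""
    group_dir = os.path.join(cgroup_root, controller, group_name)
    os.makedirs(group_dir, exist_ok=True)
    tasks_file = os.path.join(group_dir, "tasks")
    with open(tasks_file, "a") as f:
        f.write(f"{pid}\n")
    return tasks_file

# Simulate the two-container scheme: the load balancing process goes
# into the first task file, the management process into the second,
# so the kernel can account and limit their resources separately.
root = tempfile.mkdtemp()
lb_tasks = add_pid_to_task_file(root, "cpu", "first_container", 1234)
mgmt_tasks = add_pid_to_task_file(root, "cpu", "second_container", 5678)

with open(lb_tasks) as f:
    print(f.read().strip())  # 1234
```

On a real host the root would be `/sys/fs/cgroup` and the writes would be performed against the kernel's cgroup filesystem rather than ordinary files.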
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (20)

1. A method of optimizing load balancing, comprising:
after a first container is started, adding identification information of a load balancing process in the first container to a first task file of a control group so as to associate the load balancing process with the first task file, wherein the load balancing process is used for evenly distributing connection requests and/or data requests directed at a host machine of the load balancing process to corresponding servers in a distributed architecture; and
after a second container is started, adding identification information of a management process in the second container to a second task file of the control group so as to associate the management process with the second task file, wherein the control group is used for isolating hardware resources used by different processes by associating different processes with different task files.
2. The method of claim 1, wherein:
after the first container is started, adding, by a Docker engine, the identification information of the load balancing process in the first container to the first task file of the control group, so as to associate the load balancing process with the first task file; and
after the second container is started, adding, by the Docker engine, the identification information of the management process in the second container to the second task file of the control group, so as to associate the management process with the second task file.
3. The method of claim 2, wherein adding, by the Docker engine, the identification information of the load balancing process in the first container to the first task file of the control group after the first container is started comprises:
after the Docker engine starts the first container, generating, by the Docker engine, a directory named with the identification information of the first container under each resource directory of the control group;
storing the load balancing process in the directory named with the identification information of the first container; and
writing, by the Docker engine, the identification information of the load balancing process stored in the directory named with the identification information of the first container into the first task file of the control group.
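The per-controller directory mechanism of claim 3 can be sketched as follows. This is a minimal simulation in a temporary directory; a real Docker engine operates on paths of the form `/sys/fs/cgroup/<controller>/docker/<container-id>/tasks`, and the controller list, container ID, and PID below are illustrative assumptions.

```python
import os
import tempfile

# Resource directories of the control group, cgroup-v1 style.
CONTROLLERS = ["cpu", "memory", "blkio"]

def register_container_process(cgroup_root, container_id, pid):
    """Under each resource controller, create a directory named after the
    container's ID and write the process's PID into its tasks file."""
    task_files = []
    for controller in CONTROLLERS:
        container_dir = os.path.join(cgroup_root, controller, "docker", container_id)
        os.makedirs(container_dir, exist_ok=True)
        task_file = os.path.join(container_dir, "tasks")
        with open(task_file, "a") as f:
            f.write(f"{pid}\n")
        task_files.append(task_file)
    return task_files

root = tempfile.mkdtemp()
files = register_container_process(root, "a1b2c3d4", 4321)
print(len(files))  # 3
```

Because the same PID is registered under every controller, CPU, memory, and block-IO limits can each be applied to the load balancing process independently.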
4. The method of claim 1, wherein after the second container is started and the identification information of the management process in the second container is added to the second task file of the control group, the method further comprises:
controlling the load balancing process and the management process to share a network protocol stack.
5. The method of claim 4, wherein controlling the load balancing process and the management process to share a network protocol stack comprises:
controlling the first container and the second container to use the same network entry.
6. The method of claim 5, wherein controlling the first container and the second container to use the same network entry comprises:
generating a sleep container as the network entry; and
designating the network mode of both the first container and the second container as the sleep container, so that the first container and the second container use the sleep container as the same network entry.
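One way to realize the shared network entry of claims 5 and 6 with a stock Docker engine is the `--network container:<name>` mode, which makes a container join another container's network namespace. The sketch below only assembles the corresponding CLI commands; the container names and images are hypothetical, not taken from the patent.

```python
# A long-sleeping container is created first; it exists only to own the
# network namespace (the network entry). The functional containers then
# join that namespace, so all three share one network protocol stack
# (same IP address, ports, and routing tables).

def sleep_container_cmd(name="net-entry"):
    # The sleep container: runs a no-op command indefinitely.
    return f"docker run -d --name {name} busybox sleep 1000000"

def join_network_cmd(container_name, image, entry="net-entry"):
    # Network mode "container:<entry>" reuses the entry container's stack.
    return (f"docker run -d --name {container_name} "
            f"--network container:{entry} {image}")

commands = [
    sleep_container_cmd(),
    join_network_cmd("first-container", "lb-image"),
    join_network_cmd("second-container", "mgmt-image"),
]
for cmd in commands:
    print(cmd)
```

This mirrors the infra-container pattern used elsewhere (e.g. Kubernetes pause containers): because the sleep container owns the namespace, either functional container can be restarted without tearing down the shared network stack.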
7. The method of claim 6, wherein after generating the sleep container as the network entry, the method further comprises:
generating a network namespace corresponding to the sleep container, wherein the network namespace is used to isolate network-related resources.
8. The method according to any one of claims 1 to 7, wherein the hardware resources comprise at least one of the following: a CPU, a memory, and an IO interface.
9. The method of any one of claims 1 to 7, wherein:
there is at least one second container;
the management process comprises at least one of the following processes: a driver process, a proxy process, and a reporting process; and
each of the management processes is stored in a respective one of the second containers.
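The one-process-per-container placement of claim 9 can be sketched as a simple mapping; the process and container names below are illustrative, not from the disclosure.

```python
# Each management process (driver, proxy, reporting) is placed in its own
# second container, while the load balancing process lives alone in the
# first container, so a fault in any management process cannot disturb
# the others or the load balancer.
management_processes = ["driver", "proxy", "reporting"]

def assign_containers(processes):
    # One second container per management process, named after the process.
    return {proc: f"second-container-{proc}" for proc in processes}

placement = assign_containers(management_processes)
print(placement["proxy"])  # second-container-proxy
```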
10. A system for optimizing load balancing, comprising:
a first adding module, configured to add, after a first container is started, identification information of a load balancing process in the first container to a first task file of a control group, so as to associate the load balancing process with the first task file, wherein the load balancing process is used for evenly distributing connection requests and/or data requests directed at a host machine of the load balancing process to corresponding servers in a distributed architecture; and
a second adding module, configured to add, after a second container is started, identification information of a management process in the second container to a second task file of the control group, so as to associate the management process with the second task file, wherein the control group is configured to isolate hardware resources used by different processes by associating the different processes with different task files.
11. The system of claim 10, wherein:
the first adding module is further configured to add, by a Docker engine, the identification information of the load balancing process in the first container to the first task file of the control group after the first container is started, so as to associate the load balancing process with the first task file; and
the second adding module is further configured to add, by the Docker engine, the identification information of the management process in the second container to the second task file of the control group after the second container is started, so as to associate the management process with the second task file.
12. The system of claim 11, wherein the first adding module comprises:
a first generating unit, configured to generate, by the Docker engine, a directory named with the identification information of the first container under each resource directory of the control group after the Docker engine starts the first container;
a storage unit, configured to store the load balancing process in the directory named with the identification information of the first container; and
a writing unit, configured to write, by the Docker engine, the identification information of the load balancing process stored in the directory named with the identification information of the first container into the first task file of the control group.
13. The system of claim 10, wherein the system further comprises:
a control module, configured to control the load balancing process and the management process to share a network protocol stack after the second container is started and the identification information of the management process in the second container is added to the second task file of the control group.
14. The system of claim 13, wherein the control module is further configured to:
control the first container and the second container to use the same network entry.
15. The system of claim 14, wherein the control module comprises:
a second generating unit, configured to generate a sleep container as the network entry; and
a designating unit, configured to designate the network mode of both the first container and the second container as the sleep container, so that the first container and the second container use the sleep container as the same network entry.
16. The system of claim 15, wherein the system further comprises:
a generating module, configured to generate, after the sleep container serving as the network entry is generated, a network namespace corresponding to the sleep container, wherein the network namespace is used to isolate network-related resources.
17. The system of any one of claims 10 to 16, wherein the hardware resources comprise at least one of the following: a CPU, a memory, and an IO interface.
18. The system of any one of claims 10 to 16, wherein:
there is at least one second container;
the management process comprises at least one of the following processes: a driver process, a proxy process, and a reporting process; and
each of the management processes is stored in a respective one of the second containers.
19. A computing device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method for optimizing load balancing of any one of claims 1 to 9.
20. A computer readable medium having stored thereon executable instructions which, when executed by a processor, cause the processor to carry out the method of optimizing load balancing according to any one of claims 1 to 9.
CN201710927691.3A 2017-09-30 2017-09-30 Method and system for optimizing load balance Active CN107800779B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710927691.3A CN107800779B (en) 2017-09-30 2017-09-30 Method and system for optimizing load balance


Publications (2)

Publication Number Publication Date
CN107800779A CN107800779A (en) 2018-03-13
CN107800779B true CN107800779B (en) 2020-09-29

Family

ID=61534020

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710927691.3A Active CN107800779B (en) 2017-09-30 2017-09-30 Method and system for optimizing load balance

Country Status (1)

Country Link
CN (1) CN107800779B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110087107B (en) * 2019-04-25 2022-01-14 视联动力信息技术股份有限公司 Method for improving system self-adaptive capacity and video networking system
CN111127657B (en) * 2019-11-29 2023-06-20 重庆顺泰铁塔制造有限公司 Virtual manufacturing method and system based on Unreal Engine
CN111399999B (en) * 2020-03-05 2023-06-20 腾讯科技(深圳)有限公司 Computer resource processing method, device, readable storage medium and computer equipment
CN112948127B (en) * 2021-03-30 2023-11-10 北京滴普科技有限公司 Cloud platform container average load monitoring method, terminal equipment and readable storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8819561B2 (en) * 2008-11-12 2014-08-26 Citrix Systems, Inc. Tool for visualizing configuration and status of a network appliance
CN103092675A (en) * 2012-12-24 2013-05-08 北京伸得纬科技有限公司 Virtual environment construction method
CN104268022B (en) * 2014-09-23 2017-06-27 浪潮(北京)电子信息产业有限公司 The resource allocation methods and system of process in a kind of operating system
CN106209741B (en) * 2015-05-06 2020-01-03 阿里巴巴集团控股有限公司 Virtual host, isolation method, resource access request processing method and device
US10110418B2 (en) * 2015-11-03 2018-10-23 Rancher Labs, Inc. Cloud computing service architecture


Similar Documents

Publication Publication Date Title
CN107800779B (en) Method and system for optimizing load balance
CN112104723B (en) Multi-cluster data processing system and method
US11010215B2 (en) Recommending applications based on call requests between applications
US10673835B2 (en) Implementing single sign-on in a transaction processing system
US10284670B1 (en) Network-controlled device management session
US20180343174A1 (en) Rule based page processing and network request processing in browsers
CN110888696A (en) Page display method and system, computer system and computer readable medium
US11392395B2 (en) Generating and presenting contextual user interfaces on devices with foldable displays
US10939480B2 (en) Enabling communications between a controlling device and a network-controlled device via a network-connected device service over a mobile communications network
JP7200078B2 (en) System-on-chip with I/O steering engine
US20140067864A1 (en) File access for applications deployed in a cloud environment
CN114281263B (en) Storage resource processing method, system and equipment of container cluster management system
US11921726B2 (en) Logical partitions via header-based partition filtering
CN113434241A (en) Page skipping method and device
WO2023174013A1 (en) Video memory allocation method and apparatus, and medium and electronic device
JP2018505494A (en) Multi-mode system on chip
CN113900834A (en) Data processing method, device, equipment and storage medium based on Internet of things technology
CN111124299A (en) Data storage management method, device, equipment, system and storage medium
CN113132400B (en) Business processing method, device, computer system and storage medium
CN111800511B (en) Synchronous login state processing method, system, equipment and readable storage medium
US9338229B2 (en) Relocating an application from a device to a server
US8442939B2 (en) File sharing method, computer system, and job scheduler
CN116257320B (en) DPU-based virtualization configuration management method, device, equipment and medium
CN111142972B (en) Method, apparatus, system, and medium for extending functions of application program
JP2017134827A (en) Long polling processing method, system, and recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant