CN107800779A - Method and system for optimizing load balancing - Google Patents

Method and system for optimizing load balancing

Info

Publication number
CN107800779A
CN107800779A (application CN201710927691.3A)
Authority
CN
China
Prior art keywords
container
load balancer
identification information
control group
file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710927691.3A
Other languages
Chinese (zh)
Other versions
CN107800779B (en)
Inventor
李国超
刘海锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd and Beijing Jingdong Shangke Information Technology Co Ltd
Priority to CN201710927691.3A
Publication of CN107800779A
Application granted
Publication of CN107800779B
Legal status: Active


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083 Techniques for rebalancing the load in a distributed system
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0803 Configuration setting
    • H04L41/0823 Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present disclosure provides a method for optimizing load balancing, including: after a first container is started, adding the identification information of a load balancer process in the first container to a first tasks file of a control group, so as to associate the load balancer process with the first tasks file, where the load balancer process is used to evenly distribute connection requests and/or data requests directed at its host to corresponding servers in a distributed architecture; and after a second container is started, adding the identification information of a management process in the second container to a second tasks file of the control group, so as to associate the management process with the second tasks file, where the control group is used to isolate the hardware resources used by different processes by associating the different processes with different tasks files. The present disclosure further provides a system for optimizing load balancing, a computer system, and a computer-readable storage medium.

Description

Method and system for optimizing load balancing
Technical field
The present disclosure relates to the field of computer technology, and in particular to a method and system for optimizing load balancing, a computer system, and a computer-readable storage medium.
Background technology
For a complete load-balancing system (LoadBalance System), the host on which it runs must also run certain management processes, and these management processes exchange data with the load-balancing (LoadBalance) process.
In the course of conceiving the present invention, the inventors found that the related art has at least the following defect: when the host carries a heavy load, the management processes on the host may compete with the LoadBalance process for hardware resources, causing jitter in the performance of the LoadBalance process.
Summary of the invention
In view of this, the present disclosure provides a method for optimizing load balancing, and a system thereof, which can isolate a load balancer process from management processes by placing them into different containers, thereby preventing the load balancer process and the management processes from competing for hardware resources and achieving the technical effect of avoiding jitter in the performance of the load balancer process.
One aspect of the present disclosure provides a method for optimizing load balancing, including: after a first container is started, adding the identification information of a load balancer process in the first container to a first tasks file of a control group, so as to associate the load balancer process with the first tasks file, where the load balancer process is used to evenly distribute connection requests and/or data requests directed at its host to corresponding servers in a distributed architecture; and after a second container is started, adding the identification information of a management process in the second container to a second tasks file of the control group, so as to associate the management process with the second tasks file, where the control group is used to isolate the hardware resources used by different processes by associating the different processes with different tasks files.
According to an embodiment of the present disclosure, after the first container is started, the identification information of the load balancer process in the first container is added, by a Docker engine, to the first tasks file of the control group, so as to associate the load balancer process with the first tasks file; and after the second container is started, the identification information of the management process in the second container is added, by the Docker engine, to the second tasks file of the control group, so as to associate the management process with the second tasks file.
According to an embodiment of the present disclosure, adding, by the Docker engine after the first container is started, the identification information of the load balancer process in the first container to the first tasks file of the control group includes: after the Docker engine starts the first container, generating, by the Docker engine, a directory named after the identification information of the first container under each resource directory of the control group's related directory; storing the load balancer process in the directory named after the identification information of the first container; and writing, by the Docker engine, the identification information corresponding to the load balancer process stored in the directory named after the first container's identification information into the first tasks file of the control group.
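The per-container directory layout described above can be sketched as follows. This is a minimal simulation under stated assumptions — a made-up container ID, only the cpu and memory resource directories, and a throwaway temporary directory standing in for the real cgroup mount (a Docker engine creates comparable directories under paths such as /sys/fs/cgroup/&lt;subsystem&gt;/docker/&lt;container-id&gt;/):

```python
import os
import tempfile

def make_container_cgroup_dirs(cgroup_root, container_id, subsystems=("cpu", "memory")):
    """Simulate the per-container directories a Docker engine creates: one
    directory named after the container ID under each resource directory,
    each holding its own tasks file."""
    tasks_paths = []
    for subsystem in subsystems:
        container_dir = os.path.join(cgroup_root, subsystem, container_id)
        os.makedirs(container_dir, exist_ok=True)
        tasks_file = os.path.join(container_dir, "tasks")
        open(tasks_file, "w").close()  # empty tasks file, populated later
        tasks_paths.append(tasks_file)
    return tasks_paths

# Demo against a throwaway directory instead of the real cgroup mount.
root = tempfile.mkdtemp()
paths = make_container_cgroup_dirs(root, "c1f0e9d8")
print([p.replace(root, "<root>") for p in paths])
# → ['<root>/cpu/c1f0e9d8/tasks', '<root>/memory/c1f0e9d8/tasks']
```

On a real host the engine performs this step itself; the sketch only shows the shape of the resulting hierarchy.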
According to an embodiment of the present disclosure, after the second container is started and the identification information of the management process in the second container has been added to the second tasks file of the control group, the method further includes: controlling the load balancer process and the management process to share a network protocol stack.
According to an embodiment of the present disclosure, controlling the load balancer process and the management process to share a network protocol stack includes: controlling the first container and the second container to use the same network entrance.
According to an embodiment of the present disclosure, controlling the first container and the second container to use the same network entrance includes: generating a dormant container serving as the network entrance; and, for both the first container and the second container, specifying the dormant container as their network mode, so that the first container and the second container use the dormant container as the same network entrance.
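In Docker terms, this dormant-container pattern corresponds to running an idle container and then joining other containers to its network namespace via the `--net=container:<name>` option (Kubernetes applies the same idea with its "pause" container). The sketch below only builds the command lines rather than invoking Docker; the container names and image names are illustrative assumptions:

```python
def dormant_container_commands(entry_name, app_images):
    """Build docker CLI invocations (as argument lists) for the dormant-container
    pattern: one idle entry container owns the network namespace, and every
    application container joins it via --net=container:<entry_name>."""
    cmds = [
        # The dormant container does nothing but keep the network namespace alive.
        ["docker", "run", "-d", "--name", entry_name, "busybox", "sleep", "86400"],
    ]
    for i, image in enumerate(app_images, start=1):
        cmds.append([
            "docker", "run", "-d",
            "--name", f"app{i}",
            f"--net=container:{entry_name}",  # share the entry container's network stack
            image,
        ])
    return cmds

# Hypothetical images for the load balancer container and a management container.
for cmd in dormant_container_commands("pause", ["lb-image", "mgmt-image"]):
    print(" ".join(cmd))
```

Because every joined container sees the same network interfaces and ports, the load balancer process and the management processes can interact over the shared protocol stack while remaining resource-isolated by their separate control groups.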
According to an embodiment of the present disclosure, after generating the dormant container serving as the network entrance, the method further includes: generating a network namespace corresponding to the dormant container, where the network namespace is used to isolate network-related resources.
According to an embodiment of the present disclosure, the hardware resources include at least one or more of the following: CPU, memory, and I/O interfaces.
According to an embodiment of the present disclosure, there is at least one second container; the management process includes at least one or more of the following processes: a driver process, an agent process, and a report process; and each of the management processes is stored in a respective one of the second containers.
Another aspect of the present disclosure provides a system for optimizing load balancing, including: a first adding module configured to, after a first container is started, add the identification information of a load balancer process in the first container to a first tasks file of a control group, so as to associate the load balancer process with the first tasks file, where the load balancer process is used to evenly distribute connection requests and/or data requests directed at its host to corresponding servers in a distributed architecture; and a second adding module configured to, after a second container is started, add the identification information of a management process in the second container to a second tasks file of the control group, so as to associate the management process with the second tasks file, where the control group is used to isolate the hardware resources used by different processes by associating the different processes with different tasks files.
According to an embodiment of the present disclosure, the first adding module is further configured to, after the first container is started, add the identification information of the load balancer process in the first container to the first tasks file of the control group by means of a Docker engine, so as to associate the load balancer process with the first tasks file; and the second adding module is further configured to, after the second container is started, add the identification information of the management process in the second container to the second tasks file of the control group by means of the Docker engine, so as to associate the management process with the second tasks file.
According to an embodiment of the present disclosure, the second adding module includes: a first generation unit configured to, after the Docker engine starts the first container, generate, by means of the Docker engine, a directory named after the identification information of the first container under each resource directory of the control group's related directory; a storage unit configured to store the load balancer process in the directory named after the identification information of the first container; and a write operation unit configured to write, by means of the Docker engine, the identification information corresponding to the load balancer process stored in the directory named after the first container's identification information into the first tasks file of the control group.
According to an embodiment of the present disclosure, the system further includes: a control module configured to, after the second container is started and the identification information of the management process in the second container has been added to the second tasks file of the control group, control the load balancer process and the management process to share a network protocol stack.
According to an embodiment of the present disclosure, the control module is further configured to control the first container and the second container to use the same network entrance.
According to an embodiment of the present disclosure, the control module includes: a second generation unit configured to generate a dormant container serving as the network entrance; and a definition unit configured to, for both the first container and the second container, specify the dormant container as their network mode, so that the first container and the second container use the dormant container as the same network entrance.
According to an embodiment of the present disclosure, the system further includes: a generation module configured to, after the dormant container serving as the network entrance is generated, generate a network namespace corresponding to the dormant container, where the network namespace is used to isolate network-related resources.
According to an embodiment of the present disclosure, the hardware resources include at least one or more of the following: CPU, memory, and I/O interfaces.
According to an embodiment of the present disclosure, there is at least one second container; the management process includes at least one or more of the following processes: a driver process, an agent process, and a report process; and each of the management processes is stored in a respective one of the second containers.
Another aspect of the present disclosure provides a computer system, including: one or more processors; and a memory for storing one or more programs, where, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method for optimizing load balancing according to any one of the above embodiments.
Another aspect of the present disclosure provides a computer-readable storage medium having executable instructions stored thereon, where the instructions, when executed by a processor, cause the processor to implement the method for optimizing load balancing according to any one of the above embodiments.
According to the embodiments of the present disclosure, because the technical means of isolating the load balancer process and the management processes by placing them into different containers is employed, the technical problem in the related art that the non-isolated load balancer process and management processes compete for hardware resources is at least partially overcome, so that the performance of the load balancer process can be optimized and the technical effect of avoiding intermittent performance jitter of the load balancer process is achieved.
Brief description of the drawings
Through the following description of embodiments of the present disclosure with reference to the accompanying drawings, the above and other objects, features, and advantages of the present disclosure will become more apparent. In the drawings:
Fig. 1 schematically illustrates a system architecture to which the method and system for optimizing load balancing according to an embodiment of the present disclosure may be applied;
Fig. 2A schematically illustrates a flowchart of a method for optimizing load balancing according to an embodiment of the present disclosure;
Fig. 2B schematically illustrates a load-balancing method according to the related art;
Fig. 2C schematically illustrates a load-balancing method according to another related art;
Fig. 3A schematically illustrates a flowchart of a method for optimizing load balancing according to another embodiment of the present disclosure;
Fig. 3B schematically illustrates optimized load balancing according to an embodiment of the present disclosure;
Fig. 3C schematically illustrates a flowchart of adding, by a Docker engine after a first container is started, the identification information of a load balancer process in the first container to a first tasks file of a control group, according to an embodiment of the present disclosure;
Fig. 3D schematically illustrates a flowchart of a method for optimizing load balancing according to still another embodiment of the present disclosure;
Fig. 3E schematically illustrates a flowchart of a method for optimizing load balancing according to yet another embodiment of the present disclosure;
Fig. 3F schematically illustrates a flowchart of controlling a first container and a second container to use the same network entrance according to an embodiment of the present disclosure;
Fig. 3G schematically illustrates different containers using the same network entrance according to an embodiment of the present disclosure;
Fig. 3H schematically illustrates a flowchart of controlling a first container and a second container to use the same network entrance according to another embodiment of the present disclosure;
Fig. 4 schematically illustrates a block diagram of a system for optimizing load balancing according to an embodiment of the present disclosure;
Fig. 5A schematically illustrates a block diagram of a second adding module according to an embodiment of the present disclosure;
Fig. 5B schematically illustrates a block diagram of a system for optimizing load balancing according to another embodiment of the present disclosure;
Fig. 5C schematically illustrates a block diagram of a control module according to an embodiment of the present disclosure;
Fig. 5D schematically illustrates a block diagram of a system for optimizing load balancing according to still another embodiment of the present disclosure; and
Fig. 6 schematically illustrates a block diagram of a computer system adapted to implement a method for optimizing load balancing according to an embodiment of the present disclosure.
Detailed description of embodiments
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood, however, that these descriptions are merely exemplary and are not intended to limit the scope of the present disclosure. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terms used herein are for the purpose of describing specific embodiments only and are not intended to limit the present disclosure. The words "a", "an", and "the" as used herein should also cover the meanings of "a plurality of" and "multiple kinds of", unless the context clearly indicates otherwise. Furthermore, the terms "comprising", "including", and the like as used herein indicate the presence of the stated features, steps, operations, and/or components, but do not exclude the presence or addition of one or more other features, steps, operations, or components.
All terms used herein (including technical and scientific terms) have the meanings commonly understood by those skilled in the art, unless otherwise defined. It should be noted that the terms used herein should be interpreted to have meanings consistent with the context of this specification, and should not be interpreted in an idealized or overly rigid manner.
Where an expression similar to "at least one of A, B, and C, etc." is used, it should in general be interpreted according to the meaning of the expression as commonly understood by those skilled in the art (for example, "a system having at least one of A, B, and C" shall include, but not be limited to, systems having A alone, B alone, C alone, A and B, A and C, B and C, and/or A, B, and C, etc.). Where an expression similar to "at least one of A, B, or C, etc." is used, it should in general likewise be interpreted according to the meaning commonly understood by those skilled in the art (for example, "a system having at least one of A, B, or C" shall include, but not be limited to, systems having A alone, B alone, C alone, A and B, A and C, B and C, and/or A, B, and C, etc.). Those skilled in the art should also understand that essentially any disjunctive conjunction and/or phrase presenting two or more alternative items, whether in the description, the claims, or the drawings, shall be construed as contemplating the possibility of including one of these items, either of these items, or both items. For example, the phrase "A or B" should be understood to include the possibility of "A", of "B", or of "A and B".
Embodiments of the present disclosure provide a method for optimizing load balancing that prevents performance jitter of a load balancer process by placing the load balancer process and management processes into different containers for isolation, and a system for optimizing load balancing to which the method can be applied. The method includes: after a first container is started, adding the identification information of a load balancer process in the first container to a first tasks file of a control group, so as to associate the load balancer process with the first tasks file, where the load balancer process is used to evenly distribute connection requests and/or data requests directed at its host to corresponding servers in a distributed architecture; and after a second container is started, adding the identification information of a management process in the second container to a second tasks file of the control group, so as to associate the management process with the second tasks file, where the control group is used to isolate the hardware resources used by different processes by associating the different processes with different tasks files.
Fig. 1 schematically illustrates a system architecture to which the method and system for optimizing load balancing according to an embodiment of the present disclosure may be applied.
As shown in Fig. 1, the system architecture 100 according to this embodiment may include terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 is a medium providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, fiber-optic cables, and so on.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages and the like. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as shopping applications, web browser applications, search applications, instant messaging tools, email clients, and social platform software (by way of example only).
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablet computers, laptop portable computers, desktop computers, and so on.
The server 105 may be a server providing various services, for example a back-end management server (by way of example only) that supports websites browsed by users with the terminal devices 101, 102, 103. The back-end management server may analyze and otherwise process received data such as user requests, and feed processing results (for example, webpages, information, or data obtained or generated according to the user requests) back to the terminal devices.
It should be noted that the method for optimizing load balancing provided by the embodiments of the present disclosure may generally be performed by the server 105. Correspondingly, the system for optimizing load balancing provided by the embodiments of the present disclosure may generally be arranged in the server 105. The method for optimizing load balancing provided by the embodiments of the present disclosure may also be performed by a server or server cluster that is different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Correspondingly, the system for optimizing load balancing provided by the embodiments of the present disclosure may also be arranged in a server or server cluster that is different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely schematic. There may be any number of terminal devices, networks, and servers according to implementation needs.
Fig. 2A schematically illustrates a flowchart of a method for optimizing load balancing according to an embodiment of the present disclosure.
As shown in Fig. 2A, the method for optimizing load balancing may include operations S201 to S202, in which:
In operation S201, after a first container is started, the identification information of a load balancer process in the first container is added to a first tasks file of a control group, so as to associate the load balancer process with the first tasks file, where the load balancer process is used to evenly distribute connection requests and/or data requests directed at its host to corresponding servers in a distributed architecture.
It should be noted that, in general, a container (which may be a virtualization container) can be started after its creation is complete. Since a control group (Control Group, cgroup) can associate its own tasks files (for example, a first tasks file (tasks1)) with subsystem processes (for example, a load balancer process (LoadBalance process)), it can limit the hardware resources used by the subsystem processes. Therefore, after the first container is started, in order to limit the hardware resources used by the load balancer process, the load balancer process can be found in the first container in which it is stored and its identification information determined; the determined identification information is then automatically written into a tasks file of the control group to associate that tasks file with the load balancer process, finally achieving the purpose of limiting the hardware resources used by the load balancer process.
It should be understood that limiting the hardware resources used by the load balancer process means achieving, by associating a tasks file with the load balancer process, the purpose of allocating the hardware resources used by the load balancer process.
It should be noted that the hardware resources include at least one or more of CPU, memory, and I/O interfaces, which is not limited here.
For example, suppose the identification information PID1 of the load balancer process is "12345". After the first container is started, the load balancer process is found in the first container and its identification information "12345" is determined; "12345" is then automatically written into the first tasks file tasks1 of the control group, thereby associating the load balancer process with the first tasks file tasks1; that is, the control group limits the hardware resources used by the load balancer process whose identification information PID1 is "12345".
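The write that performs this association is a one-line append of the PID to the tasks file. A minimal sketch against a temporary directory (on a real host the target would be a tasks file under the cgroup mount, e.g. under /sys/fs/cgroup, and writing it typically requires root privileges):

```python
import os
import tempfile

def attach_pid_to_cgroup(tasks_path, pid):
    """Associate a process with a cgroup by appending its PID to the tasks file,
    which is how cgroup v1 membership is changed."""
    with open(tasks_path, "a") as f:
        f.write(f"{pid}\n")

cgroup_dir = tempfile.mkdtemp()          # stand-in for a real cgroup directory
tasks1 = os.path.join(cgroup_dir, "tasks")
open(tasks1, "w").close()                # empty tasks file, as created by the engine
attach_pid_to_cgroup(tasks1, 12345)      # PID1 of the load balancer process
print(open(tasks1).read())               # prints "12345"
```

Once the PID is listed in tasks1, the limits configured in that cgroup's resource directories (CPU shares, memory caps, and so on) apply to the load balancer process.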
In operation S202, after second container startup, the identification information of the managing process in second container is added to control In second assignment file of group processed, managing process is associated with the second assignment file.Wherein, control group be used for pass through by Different processes is associated from different assignment files to isolate hardware resource used in different processes.
It should be noted that managing process comprises at least driving process (Driver Process), agent process (Agent Process the one or more in process (Report Process)), are reported, are not limited herein.It is in addition, real in the disclosure Apply in example, second container can include one or more.If second container only includes one, can be by all management Process is all placed in this second container, in such cases, due to no carry out resource constraint between each managing process, because Hardware resource can be still competed between this they;If second container include it is multiple, different managing process can be put respectively In different second containers, in such cases, due to also having carried out resource constraint between each managing process, therefore between them Also hardware resource will not be competed.
For example, when the managing process includes an agent process whose identification information PID2 is "12346", the agent process is found in the second container after the second container starts, and its identification information "12346" is determined; "12346" is then automatically written into the second task file tasks2 of the control group. In this way, the agent process is associated with the second task file tasks2, that is, the control group limits the hardware resources used by the agent process whose identification information PID2 is "12346".
Unlike the technical solution provided by the embodiments of the present disclosure, current mainstream software-based load balancing in the related art is implemented by means of a reverse proxy, as follows. When a request is sent from outside, the host where the load balancer process resides receives the request and forwards it to a back-end service server; the back-end service server returns a corresponding result in response to the request, and the result is finally sent back to the client that issued the request. The host thus appears externally as a single server providing the corresponding service, while in fact acting in the role of a proxy server. Meanwhile, the managing processes running on the host exchange data with the load balancer process, together forming a complete load balancing system. As shown in Fig. 2B, in one related-art solution, no resource limits are imposed between the load balancer process and the managing processes running on its host; as a result, under high host load, the load balancer process and the managing processes compete for resources, causing intermittent jitter in the performance of the load balancer process. As shown in Fig. 2C, in another related-art solution, although the load balancer process and the managing processes are associated with different task files, this association requires the user to manually write the identification information of the load balancer process into one task file of the control group and the identification information of the managing processes into another task file of the control group, which is very inconvenient; moreover, operating the native control group directly is overly tedious and cumbersome.
In the embodiments of the present disclosure, by contrast, the load balancer process and the managing processes are placed in different containers, and their identification information is automatically written into the corresponding task files of the control group. This isolates the hardware resources used by the load balancer process from those used by the managing processes, avoids competition between different processes for the same hardware resources, and thereby prevents jitter in the performance of the load balancer process.
The method shown in Fig. 2A is further described below with reference to Fig. 3A to Fig. 3H in conjunction with specific embodiments.
Fig. 3A schematically illustrates a flowchart of a method of optimizing load balancing according to another embodiment of the present disclosure.
In this embodiment, the method of optimizing load balancing includes operations S201 to S202 described above with reference to Fig. 2A, where operation S201 may be replaced with operation S301 and operation S202 may be replaced with operation S302. For brevity of description, the description of operations S201 to S202 in Fig. 2A is omitted here. As shown in Fig. 3A:
In operation S301, after the first container starts, the identification information of the load balancer process in the first container is added, through the Docker engine, to the first task file of the control group, so that the load balancer process is associated with the first task file;
In operation S302, after the second container starts, the identification information of the managing process in the second container is added, through the Docker engine, to the second task file of the control group, so that the managing process is associated with the second task file.
It should be noted that there are various ways to start a container, which are not limited herein; for example, a container may be started through the Docker engine. Specifically, the Docker engine can leverage Linux kernel features such as namespaces (Namespace) and control groups (Linux being an open-source, POSIX- and UNIX-based, multi-user, multi-tasking operating system supporting multi-threading and multiple CPUs) to create lightweight, portable, resource-isolated virtualization containers for applications.
For example, as shown in Fig. 3B, after the containers are started through Docker, the load balancer process and the managing processes (such as the agent process and the report process) are placed into the corresponding first container (e.g., a load balancing container (LoadBalance Container)) and second containers (e.g., an agent container (Agent Container) and a report container (Report Container)). The load balancing container, the agent container, and the report container respectively hold the load balancer process, the agent process, and the report process, whose identification information is denoted in turn as PID1, PID2, and PID3. The Docker engine then automatically writes PID1 into the first task file tasks1 of the control group, and writes PID2 and PID3 into the second task file tasks2 of the control group, thereby isolating the hardware resources used by the different processes.
Through the embodiments of the present disclosure, virtualization containers are generated by the Docker engine, and different processes are placed in different containers to run, which avoids competition between different processes for the same hardware resources and thereby prevents jitter in the performance of the load balancer process.
Fig. 3C schematically illustrates a flowchart of adding, through the Docker engine after the first container starts, the identification information of the load balancer process in the first container to the first task file of the control group, according to an embodiment of the present disclosure.
In this embodiment, the method for the optimization load balancing is except that can include above with reference to Fig. 2A and Fig. 3 A descriptions Operate outside S201~S202 and S302, can also including operation S401~S403, (operation S401 can be included by operating S301 ~S403).For description for purpose of brevity, the description to operating S201~S202 and S302 in Fig. 2A and Fig. 3 A is omitted here. As shown in Figure 3 C, wherein:
In operation S401, after the Docker engine starts the first container, a directory named after the identification information of the first container is generated, through the Docker engine, under each resource directory under the relevant directory of the control group;
In operation S402, the load balancer process is stored in the directory named after the identification information of the first container; and
In operation S403, the identification information corresponding to the load balancer process stored in the directory named after the identification information of the first container is written, through the Docker engine, into the first task file of the control group.
It should be noted that, in the embodiments of the present disclosure, the relevant directory of the control group may be the "/sys/fs/cgroup" directory. As an example, after the Docker engine starts the first container, the Docker engine generates, under each resource directory under the "/sys/fs/cgroup" directory, a directory for that container, which may be named after the identification information of the first container. The load balancer process is stored in the container and the identification information of the load balancer process is determined; the determined identification information is then automatically written by the Docker engine into a task file of the control group (e.g., the first task file), associating the task file with the load balancer process and finally achieving the purpose of limiting the hardware resources used by the load balancer process.
For example, assume the identification information ID1 of the first container (which may be called the load balancing container) is "111". After the Docker engine starts the first container, the identification information "111" of the first container is determined, and the Docker engine generates, under each resource directory under the /sys/fs/cgroup directory, a new directory named "111". Meanwhile, the Docker engine automatically stores the load balancer process in the directory named "111", further determines the identification information PID1 of the load balancer process (e.g., "12345"), and automatically writes the identification information "12345" into the first task file of the control group, thereby isolating the load balancer process from other processes.
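Operations S401 to S403 can be sketched as follows, again against a temporary directory standing in for /sys/fs/cgroup. The controller names (cpu, memory, blkio) are illustrative assumptions; the container identifier "111" and the PID "12345" follow the example above.

```python
import os
import tempfile

# Illustrative resource controllers; a real cgroup v1 mount exposes
# one subdirectory per controller under /sys/fs/cgroup.
CONTROLLERS = ["cpu", "memory", "blkio"]

def create_container_cgroups(cgroup_root: str, container_id: str, pid: int) -> None:
    """Sketch of S401-S403: under each resource directory, generate a
    directory named after the container's identification information
    and record the process PID in that directory's tasks file."""
    for controller in CONTROLLERS:
        container_dir = os.path.join(cgroup_root, controller, container_id)
        os.makedirs(container_dir, exist_ok=True)            # S401
        with open(os.path.join(container_dir, "tasks"), "a") as f:
            f.write(f"{pid}\n")                              # S403

root = tempfile.mkdtemp()
create_container_cgroups(root, "111", 12345)  # ID1="111", PID1="12345"

with open(os.path.join(root, "cpu", "111", "tasks")) as f:
    print(f.read().strip())  # → 12345
```

Against the real cgroup filesystem the directory creation and the tasks-file write are performed by the Docker engine with kernel cooperation; the sketch only mirrors the file-layout convention.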
It should be understood that the manner in which, after the second container starts, the identification information of the managing process in the second container is added through the Docker engine to the second task file of the control group is similar to operations S401 to S403, and is not repeated here.
Through the embodiments of the present disclosure, different processes are placed in different containers to run, so that the load balancer process does not compete with other managing processes for the same hardware resources, achieving the technical effect of preventing jitter in the performance of the load balancer process.
Fig. 3D schematically illustrates a flowchart of a method of optimizing load balancing according to another embodiment of the present disclosure.
In the above embodiments, the load balancer process and the managing processes are isolated directly by means of the containers and the task files of the control group. This not only isolates the hardware resources used by the load balancer process from those used by the managing processes, but also isolates the network protocol stacks they use. As a result, communication that could originally be achieved locally through the network protocol stack must instead be routed through external communication, which not only wastes resources but also degrades communication efficiency. To overcome this technical deficiency, the present disclosure further provides a preferred embodiment. In this preferred embodiment, in addition to operations S201 to S202 described above with reference to Fig. 2A, the method of optimizing load balancing may further include operation S501. For brevity of description, the description of operations S201 to S202 in Fig. 2A is omitted here. As shown in Fig. 3D:
In operation S501, the load balancer process and the managing process are controlled to share a network protocol stack.
It should be noted that there are various ways to control the load balancer process and the managing process to share a network protocol stack (i.e., the totality of the protocols at each layer of the network), which are not limited herein. For example, this may be achieved by introducing Linux namespaces.
In the embodiments of the present disclosure, six kinds of namespaces may be implemented in Linux, each of which encapsulates an abstract set of certain global system resources. For sharing a network protocol stack, the network namespace (Network Namespace) is used, which serves to isolate network-related resources. Normally, each generated virtualization container has its own independent network namespace.
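The namespace notion can be observed directly through the /proc filesystem: processes started without explicit isolation share one network namespace, which is what the shared-stack arrangement relies on. The sketch below assumes a Linux host with /proc mounted and only illustrates the namespace concept, not the patented method.

```python
import os
import subprocess
import sys

# Two ordinary processes started without namespace isolation share the
# same network namespace. On Linux, /proc/<pid>/ns/net is a symlink
# whose target identifies the namespace the process belongs to.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(2)"])
try:
    parent_ns = os.readlink("/proc/self/ns/net")
    child_ns = os.readlink(f"/proc/{child.pid}/ns/net")
    print(parent_ns == child_ns)  # → True: one shared network protocol stack
finally:
    child.terminate()
```

A container runtime that wants isolation creates a fresh network namespace for each container instead, which is exactly what the preferred embodiment avoids for the first and second containers.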
Through the embodiments of the present disclosure, the load balancer process and the managing process are controlled to share a network protocol stack, so that while the hardware resources they use are isolated, the load balancer process and the managing process can still communicate locally on their host, making full use of internal resources and improving communication efficiency.
Fig. 3E schematically illustrates a flowchart of a method of optimizing load balancing according to another embodiment of the present disclosure.
In this embodiment, the method for the optimization load balancing is except that can include above with reference to Fig. 2A and Fig. 3 D descriptions Operate outside S201~S202 and S501, the operation S501 of Fig. 3 D descriptions can also include operation S601.It is succinct for description For the sake of, the description to operating S201~S202 and S501 in Fig. 2A and Fig. 3 D is omitted here.As shown in FIGURE 3 E, wherein:
In operation S601, the first container and the second container are controlled to use the same network entry.
It should be noted that there are various ways to control the first container and the second container to use the same network entry, which are not limited herein. Specifically, the network mode of the first container and the second container may be set to the container corresponding to that network entry.
Through the embodiments of the present disclosure, the first container and the second container are controlled to use the same network entry, so that the load balancer process and the managing process communicate through that network entry on the host, further achieving the technical effect of preventing jitter in the performance of the load balancer process.
Fig. 3F schematically illustrates a flowchart of controlling the first container and the second container to use the same network entry according to an embodiment of the present disclosure.
In this embodiment, the method for the optimization load balancing is except that can include above with reference to Fig. 2A and Fig. 3 E descriptions Operate outside S201~S202 and S601, the operation S601 of Fig. 3 E descriptions can also include operation S701~S702.In order to describe For purpose of brevity, the description to operating S201~S202 and S601 is omitted here.As illustrated in Figure 3 F, wherein:
In operation S701, a dormancy container serving as the network entry is generated;
In operation S702, the network mode of both the first container and the second container is specified as the dormancy container, so that the first container and the second container use the dormancy container as the same network entry.
It should be noted that there are various ways to generate the dormancy container serving as the network entry, which are not limited herein. For example, a virtualization container may be generated through the Docker engine to serve as the dormancy container.
In the embodiments of the present disclosure, the dormancy container (do_no_thing_container) may be regarded as a container that does nothing. After the dormancy container is generated, it serves as the network entry of the above-described first container and second container. If the dormancy container is denoted "container1", the network mode of the first container and the second container is specified as "container1"; as shown in Fig. 3G, this achieves the purpose of controlling the first container and the second container to use the dormancy container as the same network entry.
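Under the assumption that the containers are managed through the Docker CLI, the dormancy-container arrangement of operations S701 to S702 could be expressed with Docker's `container:` network mode (`docker run --network container:<name>`). The image and container names below are hypothetical, and the helper merely constructs the command lines rather than invoking Docker.

```python
from typing import List, Optional

def run_command(name: str, image: str, network: Optional[str] = None) -> List[str]:
    """Build a `docker run` command line. When `network` is given, the
    container joins the named container's network namespace via Docker's
    `--network container:<name>` mode."""
    cmd = ["docker", "run", "-d", "--name", name]
    if network is not None:
        cmd += ["--network", f"container:{network}"]
    return cmd + [image]

# S701: the dormancy container serving as the shared network entry.
pause_cmd = run_command("container1", "do_no_thing_image")
# S702: both containers specify the dormancy container as their network mode.
lb_cmd = run_command("loadbalance_container", "lb_image", network="container1")
agent_cmd = run_command("agent_container", "agent_image", network="container1")

print(lb_cmd)
# → ['docker', 'run', '-d', '--name', 'loadbalance_container',
#    '--network', 'container:container1', 'lb_image']
```

With both `docker run` commands pointed at "container1", the first and second containers end up in a single network namespace, which is the same-network-entry behavior described above.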
Through the embodiments of the present disclosure, the first container and the second container are controlled to use the same network entry, so that the hardware resources used by the load balancer process and the managing process are isolated while they share a network protocol stack, achieving the purpose of preventing jitter in the performance of the load balancer process.
Fig. 3H schematically illustrates a flowchart of a method of optimizing load balancing according to another embodiment of the present disclosure.
In this embodiment, the method for the optimization load balancing is except that can include above with reference to Fig. 2A and Fig. 3 F descriptions Operate outside S201~S202 and S701~S702, operation S801 can also be included.For description for purpose of brevity, omit here Description to operating S201~S202 and S701~S702.As shown in figure 3h, wherein:
In operation S801, a network namespace corresponding to the dormancy container is generated, where the network namespace serves to isolate network-related resources.
It should be noted that the network namespace covers at least one or more of network devices, IP addresses, routing tables, and port numbers, which is not limited herein.
In the embodiments of the present disclosure, after the dormancy container is generated, the Docker engine automatically generates the network namespace corresponding to the dormancy container. In other words, the first container and the second container ultimately use the same network namespace, realizing local communication on the host.
Through the embodiments of the present disclosure, the first container and the second container are specified to use the same network namespace, so that the load balancer process and the managing process ultimately share a network protocol stack while the hardware resources they use remain isolated, achieving the technical effect of preventing jitter in the performance of the load balancer process.
According to an embodiment of the present disclosure, the above hardware resources include at least one or more of the following: a CPU, a memory, and an I/O interface.
According to an embodiment of the present disclosure, there is at least one second container, the managing process includes at least one or more of the following processes: a driver process, an agent process, and a report process, and each process of the managing processes is stored in a respective one of the second containers.
In the embodiments of the present disclosure, each managing process is stored in a different virtualization container, so that the hardware resources of the different processes are isolated while the processes share a network protocol stack, further preventing intermittent performance jitter of load balancing and optimizing the performance of load balancing.
Fig. 4 schematically illustrates a block diagram of a system of optimizing load balancing according to an embodiment of the present disclosure.
In this embodiment, the system 400 of optimizing load balancing may include a first adding module 410 and a second adding module 420. The first adding module 410 is configured to, after the first container starts, add the identification information of the load balancer process in the first container to the first task file of the control group, so that the load balancer process is associated with the first task file, where the load balancer process is configured to evenly distribute connection requests and/or data requests for its host to the corresponding servers in a distributed architecture. The second adding module 420 is configured to, after the second container starts, add the identification information of the managing process in the second container to the second task file of the control group, so that the managing process is associated with the second task file, where the control group isolates the hardware resources used by different processes by associating the different processes with different task files.
In the embodiments of the present disclosure, the load balancer process and the managing processes are placed in different containers, and their identification information is automatically written into the corresponding task files of the control group. This isolates the hardware resources used by the load balancer process from those used by the managing processes, avoids competition between different processes for the same hardware resources, and thereby prevents jitter in the performance of the load balancer process.
According to an embodiment of the present disclosure, the first adding module is further configured to, after the first container starts, add the identification information of the load balancer process in the first container to the first task file of the control group through the Docker engine, so that the load balancer process is associated with the first task file; and the second adding module is further configured to, after the second container starts, add the identification information of the managing process in the second container to the second task file of the control group through the Docker engine, so that the managing process is associated with the second task file.
In the embodiments of the present disclosure, virtualization containers are generated by the Docker engine, and different processes are placed in different containers to run, which avoids competition between different processes for the same hardware resources and thereby prevents jitter in the performance of the load balancer process.
Fig. 5A schematically illustrates a block diagram of the second adding module according to an embodiment of the present disclosure.
In this embodiment, in addition to the corresponding modules described above with reference to Fig. 4, the second adding module 420 of the system 400 of optimizing load balancing may further include a first generation unit 421, a storage unit 422, and a write operation unit 423. For brevity of description, the description of the corresponding modules in Fig. 4 is omitted here. As shown in Fig. 5A, the second adding module 420 may include: the first generation unit 421, configured to, after the Docker engine starts the first container, generate, through the Docker engine, a directory named after the identification information of the first container under each resource directory under the relevant directory of the control group; the storage unit 422, configured to store the load balancer process in the directory named after the identification information of the first container; and the write operation unit 423, configured to write, through the Docker engine, the identification information corresponding to the load balancer process stored in the directory named after the identification information of the first container into the first task file of the control group.
Through the embodiments of the present disclosure, different processes are placed in different containers to run, so that the load balancer process does not compete with other managing processes for the same hardware resources, achieving the technical effect of preventing jitter in the performance of the load balancer process.
Fig. 5B schematically illustrates a block diagram of a system of optimizing load balancing according to another embodiment of the present disclosure.
In this embodiment, in addition to the corresponding modules described above with reference to Fig. 4, the system 400 of optimizing load balancing may further include a control module 510. For brevity of description, the description of the corresponding modules in Fig. 4 is omitted here. As shown in Fig. 5B, the control module 510 is configured to, after the second container starts and after the identification information of the managing process in the second container has been added to the second task file of the control group, control the load balancer process and the managing process to share a network protocol stack.
Through the embodiments of the present disclosure, the load balancer process and the managing process are controlled to share a network protocol stack, so that while the hardware resources they use are isolated, the load balancer process and the managing process can still communicate locally on their host, making full use of internal resources and improving communication efficiency.
According to an embodiment of the present disclosure, the control module is further configured to control the first container and the second container to use the same network entry.
Through the embodiments of the present disclosure, the first container and the second container are controlled to use the same network entry, so that the load balancer process and the managing process communicate through that network entry on the host, further achieving the technical effect of preventing jitter in the performance of the load balancer process.
Fig. 5C schematically illustrates a block diagram of the control module according to an embodiment of the present disclosure.
In this embodiment, in addition to the corresponding modules described above with reference to Fig. 4 and Fig. 5B, the control module 510 of the system 400 of optimizing load balancing may further include a second generation unit 511 and a definition unit 512. For brevity of description, the description of the corresponding modules in Fig. 4 and Fig. 5B is omitted here. As shown in Fig. 5C, the control module 510 may include: the second generation unit 511, configured to generate the dormancy container serving as the network entry; and the definition unit 512, configured to specify the network mode of both the first container and the second container as the dormancy container, so that the first container and the second container use the dormancy container as the same network entry.
Through the embodiments of the present disclosure, the first container and the second container are controlled to use the same network entry, so that the hardware resources used by the load balancer process and the managing process are isolated while they share a network protocol stack, achieving the purpose of preventing jitter in the performance of the load balancer process.
Fig. 5D schematically illustrates a block diagram of a system of optimizing load balancing according to another embodiment of the present disclosure.
In this embodiment, in addition to the corresponding modules described above with reference to Fig. 4 and Fig. 5C, the system 400 of optimizing load balancing may further include a generation module 610. For brevity of description, the description of the corresponding modules in Fig. 4 and Fig. 5C is omitted here. As shown in Fig. 5D, the generation module 610 is configured to, after the dormancy container serving as the network entry is generated, generate a network namespace corresponding to the dormancy container, where the network namespace serves to isolate network-related resources.
Through the embodiments of the present disclosure, the first container and the second container are specified to use the same network namespace, so that the load balancer process and the managing process ultimately share a network protocol stack while the hardware resources they use remain isolated, achieving the technical effect of preventing jitter in the performance of the load balancer process.
According to an embodiment of the present disclosure, the above hardware resources include at least one or more of the following: a CPU, a memory, and an I/O interface.
According to an embodiment of the present disclosure, there is at least one second container, the managing process includes at least one or more of the following processes: a driver process, an agent process, and a report process, and each process of the managing processes is stored in a respective one of the second containers.
In the embodiments of the present disclosure, each managing process is stored in a different virtualization container, so that the hardware resources of the different processes are isolated while the processes share a network protocol stack, further preventing intermittent performance jitter of load balancing and optimizing the performance of load balancing.
Fig. 6 schematically illustrates a block diagram of a computer system adapted to implement the method of optimizing load balancing according to an embodiment of the present disclosure. The computer system shown in Fig. 6 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 6, the computer system 600 according to an embodiment of the present disclosure includes a processor 701, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage portion 708 into a random access memory (RAM) 703. The processor 701 may include, for example, a general-purpose microprocessor (e.g., a CPU), an instruction set processor and/or related chipsets, and/or a special-purpose microprocessor (e.g., an application-specific integrated circuit (ASIC)), and so on. The processor 701 may also include on-board memory for caching purposes. The processor 701 may include a single processing unit or multiple processing units for performing the different actions of the method flows according to the embodiments of the present disclosure described with reference to Fig. 2A and Fig. 3A to Fig. 3H.
The RAM 703 stores various programs and data required for the operation of the computer system 600. The processor 701, the ROM 702, and the RAM 703 are connected to one another via a bus 704. The processor 701 performs the various operations described above with reference to Fig. 2A and Fig. 3A to Fig. 3H by executing the programs in the ROM 702 and/or the RAM 703. It is noted that the programs may also be stored in one or more memories other than the ROM 702 and the RAM 703; the processor 701 may likewise perform the various operations described above with reference to Fig. 2A and Fig. 3A to Fig. 3H by executing the programs stored in the one or more memories.
According to an embodiment of the present disclosure, the computer system 600 may further include an input/output (I/O) interface 705, which is also connected to the bus 704. The computer system 600 may further include one or more of the following components connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output portion 707 including a cathode-ray tube (CRT), a liquid crystal display (LCD), and the like, as well as a speaker and the like; a storage portion 708 including a hard disk and the like; and a communication portion 709 including a network interface card such as a LAN card, a modem, and the like. The communication portion 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is mounted on the drive 710 as needed, so that a computer program read therefrom is installed into the storage portion 708 as needed.
According to an embodiment of the present disclosure, the methods described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for performing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 709, and/or installed from the removable medium 711. When the computer program is executed by the processor 701, the above-described functions defined in the system of the embodiments of the present disclosure are performed. According to the embodiments of the present disclosure, the systems, devices, apparatuses, modules, units, and the like described above may be implemented by computer program modules.
It should be noted that the computer-readable medium shown in the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to: wireless, electric wire, optical cable, RF, or any suitable combination of the above. According to an embodiment of the present disclosure, the computer-readable medium may include the ROM 702 and/or the RAM 703 described above and/or one or more memories other than the ROM 702 and the RAM 703.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, and the module, program segment, or portion of code contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams or flowcharts, and combinations of blocks in the block diagrams or flowcharts, can be implemented by a special-purpose hardware-based system that performs the specified functions or operations, or by a combination of special-purpose hardware and computer instructions.
As another aspect, the present disclosure further provides a computer-readable medium having executable instructions stored thereon, which, when executed by the processor 701, cause the processor 701 to implement the method for optimizing load balancing according to any one of the method embodiments above. The computer-readable medium may be included in the device described in the above embodiments, or it may exist separately without being assembled into the device. The above computer-readable medium carries one or more programs which, when executed by the device, cause the device to perform: after a first container starts, adding identification information of a load balancer process in the first container to a first task file of a control group (cgroup), thereby associating the load balancer process with the first task file, wherein the load balancer process is configured to evenly distribute connection requests and/or data requests directed to its host machine to corresponding servers in a distributed architecture; and after a second container starts, adding identification information of a management process in the second container to a second task file of the control group, thereby associating the management process with the second task file, wherein the control group is used to isolate the hardware resources used by different processes by associating the different processes with different task files.
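The association step described above can be sketched as a short shell script. This is an illustrative sketch only, not code from the patent: the directory layout mirrors the cgroup-v1 hierarchy a Docker engine maintains (one directory per container, named after the container's identifier, under each resource controller), while CGROUP_DEMO_ROOT, the controller name, the container identifiers, and the PIDs are all assumed values for demonstration. On a real host the root would be /sys/fs/cgroup, and writing the tasks file requires root privileges.

```shell
# Hedged sketch: associate a process with a container's control group by
# appending its PID to that group's tasks file. All names below are
# illustrative assumptions, not values taken from the patent.
CGROUP_DEMO_ROOT="${CGROUP_DEMO_ROOT:-$(mktemp -d)}"  # real host: /sys/fs/cgroup

add_to_cgroup() {
    controller="$1"    # resource controller, e.g. cpu, memory, blkio
    container_id="$2"  # directory named after the container's identification information
    pid="$3"           # identification information (PID) of the process
    dir="$CGROUP_DEMO_ROOT/$controller/docker/$container_id"
    mkdir -p "$dir"
    echo "$pid" >> "$dir/tasks"   # associates the process with the task file
}

# First container: the load balancer process goes into the first task file.
add_to_cgroup cpu container-1 1234
# Second container: the management process goes into the second task file.
add_to_cgroup cpu container-2 5678
```

Once a PID is listed in a group's tasks file, the kernel accounts that process's CPU, memory, and I/O usage against that group's limits, which is how the scheme isolates the load balancer process from the management process.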
The embodiments of the present disclosure have been described above. These embodiments are, however, for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments have been described separately above, this does not mean that the measures in the respective embodiments cannot be advantageously used in combination. The scope of the present disclosure is defined by the appended claims and their equivalents. Without departing from the scope of the present disclosure, those skilled in the art may make various substitutions and modifications, all of which shall fall within the scope of the present disclosure.

Claims (20)

1. A method for optimizing load balancing, comprising:
after a first container starts, adding identification information of a load balancer process in the first container to a first task file of a control group, thereby associating the load balancer process with the first task file, wherein the load balancer process is configured to evenly distribute connection requests and/or data requests directed to its host machine to corresponding servers in a distributed architecture; and
after a second container starts, adding identification information of a management process in the second container to a second task file of the control group, thereby associating the management process with the second task file, wherein the control group is used to isolate the hardware resources used by different processes by associating the different processes with different task files.
2. The method according to claim 1, wherein:
after the first container starts, the identification information of the load balancer process in the first container is added, by a Docker engine, to the first task file of the control group, thereby associating the load balancer process with the first task file; and
after the second container starts, the identification information of the management process in the second container is added, by the Docker engine, to the second task file of the control group, thereby associating the management process with the second task file.
3. The method according to claim 2, wherein adding, by the Docker engine after the first container starts, the identification information of the load balancer process in the first container to the first task file of the control group comprises:
after the Docker engine starts the first container, generating, by the Docker engine, under each resource directory under the associated directory of the control group, a directory named after the identification information of the first container;
storing the load balancer process in the directory named after the identification information of the first container; and
writing, by the Docker engine, the identification information corresponding to the load balancer process stored in the directory named after the identification information of the first container into the first task file of the control group.
4. The method according to claim 1, wherein after the second container starts and the identification information of the management process in the second container is added to the second task file of the control group, the method further comprises:
controlling the load balancer process and the management process to share a network protocol stack.
5. The method according to claim 4, wherein controlling the load balancer process and the management process to share a network protocol stack comprises:
controlling the first container and the second container to use a same network entry.
6. The method according to claim 5, wherein controlling the first container and the second container to use a same network entry comprises:
generating a dormant container serving as the network entry; and
for both the first container and the second container, specifying the dormant container as their network mode, so that the first container and the second container use the dormant container as the same network entry.
7. The method according to claim 6, wherein after generating the dormant container serving as the network entry, the method further comprises:
generating a network namespace corresponding to the dormant container, wherein the network namespace is used to isolate network-related resources.
8. The method according to any one of claims 1 to 7, wherein the hardware resources comprise at least one or more of the following resources: CPU, memory, and I/O interfaces.
9. The method according to any one of claims 1 to 7, wherein:
there is at least one second container;
the management process comprises at least one or more of the following processes: a driver process, an agent process, and a reporting process; and
each process in the management process is stored in a respective one of the second containers.
10. A system for optimizing load balancing, comprising:
a first adding module, configured to, after a first container starts, add identification information of a load balancer process in the first container to a first task file of a control group, thereby associating the load balancer process with the first task file, wherein the load balancer process is configured to evenly distribute connection requests and/or data requests directed to its host machine to corresponding servers in a distributed architecture; and
a second adding module, configured to, after a second container starts, add identification information of a management process in the second container to a second task file of the control group, thereby associating the management process with the second task file, wherein the control group is used to isolate the hardware resources used by different processes by associating the different processes with different task files.
11. The system according to claim 10, wherein:
the first adding module is further configured to, after the first container starts, add, by a Docker engine, the identification information of the load balancer process in the first container to the first task file of the control group, thereby associating the load balancer process with the first task file; and
the second adding module is further configured to, after the second container starts, add, by the Docker engine, the identification information of the management process in the second container to the second task file of the control group, thereby associating the management process with the second task file.
12. The system according to claim 11, wherein the first adding module comprises:
a first generation unit, configured to, after the Docker engine starts the first container, generate, by the Docker engine, under each resource directory under the associated directory of the control group, a directory named after the identification information of the first container;
a storage unit, configured to store the load balancer process in the directory named after the identification information of the first container; and
a write operation unit, configured to write, by the Docker engine, the identification information corresponding to the load balancer process stored in the directory named after the identification information of the first container into the first task file of the control group.
13. The system according to claim 10, further comprising:
a control module, configured to, after the second container starts and the identification information of the management process in the second container is added to the second task file of the control group, control the load balancer process and the management process to share a network protocol stack.
14. The system according to claim 13, wherein the control module is further configured to:
control the first container and the second container to use a same network entry.
15. The system according to claim 14, wherein the control module comprises:
a second generation unit, configured to generate a dormant container serving as the network entry; and
a definition unit, configured to, for both the first container and the second container, specify the dormant container as their network mode, so that the first container and the second container use the dormant container as the same network entry.
16. The system according to claim 15, further comprising:
a generation module, configured to, after the dormant container serving as the network entry is generated, generate a network namespace corresponding to the dormant container, wherein the network namespace is used to isolate network-related resources.
17. The system according to any one of claims 10 to 16, wherein the hardware resources comprise at least one or more of the following resources: CPU, memory, and I/O interfaces.
18. The system according to any one of claims 10 to 16, wherein:
there is at least one second container;
the management process comprises at least one or more of the following processes: a driver process, an agent process, and a reporting process; and
each process in the management process is stored in a respective one of the second containers.
19. A computing device, comprising:
one or more processors; and
a storage apparatus for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method for optimizing load balancing according to any one of claims 1 to 9.
20. A computer-readable medium having executable instructions stored thereon, wherein the instructions, when executed by a processor, cause the processor to implement the method for optimizing load balancing according to any one of claims 1 to 9.
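Claims 4 to 7 make the two containers share one network protocol stack by giving both of them a dormant container as a common network entry. The following sketch is illustrative only and is not code from the patent: it shows the observable property on Linux, namely that processes sharing a network namespace resolve to the same /proc/<pid>/ns/net link. In a Docker deployment the sharing would be requested with a command along the lines of `docker run --network container:<dormant-container>` for each of the two containers (an assumed command line, since the patent does not give one); here two child processes of one shell stand in for the two containers' processes, since they inherit their parent's network namespace.

```shell
# Hedged sketch: two processes in the same network namespace report the
# same /proc/<pid>/ns/net link target, i.e. one shared protocol stack.
ns_of() { readlink "/proc/$1/ns/net"; }

sleep 2 & lb_pid=$!      # stand-in for the load balancer process
sleep 2 & mgmt_pid=$!    # stand-in for the management process

lb_ns=$(ns_of "$lb_pid")
mgmt_ns=$(ns_of "$mgmt_pid")
echo "load balancer ns: $lb_ns"
echo "management ns:    $mgmt_ns"

kill "$lb_pid" "$mgmt_pid" 2>/dev/null
```

If each container instead ran in its own namespace (the Docker default), the two link targets would differ; specifying the dormant container as the network mode collapses them into one namespace, so the processes share IP addresses, ports, and the full protocol stack.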
CN201710927691.3A 2017-09-30 2017-09-30 Method and system for optimizing load balance Active CN107800779B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710927691.3A CN107800779B (en) 2017-09-30 2017-09-30 Method and system for optimizing load balance

Publications (2)

Publication Number Publication Date
CN107800779A (en) 2018-03-13
CN107800779B CN107800779B (en) 2020-09-29

Family

ID=61534020

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710927691.3A Active CN107800779B (en) 2017-09-30 2017-09-30 Method and system for optimizing load balance

Country Status (1)

Country Link
CN (1) CN107800779B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100122175A1 (en) * 2008-11-12 2010-05-13 Sanjay Gupta Tool for visualizing configuration and status of a network appliance
CN103092675A (en) * 2012-12-24 2013-05-08 北京伸得纬科技有限公司 Virtual environment construction method
CN104268022A (en) * 2014-09-23 2015-01-07 浪潮(北京)电子信息产业有限公司 Process resource distribution method and system for operation system
CN106209741A (en) * 2015-05-06 2016-12-07 阿里巴巴集团控股有限公司 A kind of fictitious host computer and partition method, resource access request processing method and processing device
US20170126469A1 (en) * 2015-11-03 2017-05-04 Rancher Labs, Inc. Cloud Computing Service Architecture

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110087107A (en) * 2019-04-25 2019-08-02 视联动力信息技术股份有限公司 A kind of method and view networked system of raising system self-adaption ability
CN111127657A (en) * 2019-11-29 2020-05-08 重庆顺泰铁塔制造有限公司 Virtual manufacturing method and system based on non-regional Engine
CN111399999A (en) * 2020-03-05 2020-07-10 腾讯科技(深圳)有限公司 Computer resource processing method and device, readable storage medium and computer equipment
CN112948127A (en) * 2021-03-30 2021-06-11 北京滴普科技有限公司 Cloud platform container average load monitoring method, terminal device and readable storage medium
CN112948127B (en) * 2021-03-30 2023-11-10 北京滴普科技有限公司 Cloud platform container average load monitoring method, terminal equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant