CN112104753B - Service request processing system and method and computing device - Google Patents


Info

Publication number
CN112104753B
CN112104753B (application CN202011292015.1A)
Authority
CN
China
Prior art keywords
service
client
customer service
server
customer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011292015.1A
Other languages
Chinese (zh)
Other versions
CN112104753A (en)
Inventor
赵爽
田鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Uniontech Software Technology Co Ltd
Original Assignee
Uniontech Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Uniontech Software Technology Co Ltd
Priority to CN202110105699.8A (patent CN112866395B)
Priority to CN202011292015.1A (patent CN112104753B)
Publication of CN112104753A
Application granted
Publication of CN112104753B


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L 67/63: Routing a service request depending on the request content or context
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004: Server selection for load balancing
    • H04L 67/1008: Server selection for load balancing based on parameters of servers, e.g. available memory or workload

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention discloses a service request processing system. The back-end server is adapted to respond to a login action of a customer service client by starting a background service for that customer service client, updating a configuration file according to the background service, and synchronizing the configuration file to the network server. The background service comprises the IP address of the customer service client and the port and weight allocated to the customer service client, and a configuration item of the configuration file is the association relation between a background service and its IP address, port and weight. The network server is adapted to, when receiving a service request from a customer client, allocate a target IP address and a target port to the request according to the configuration file and send the request to the target background service in the back-end server corresponding to that target port and target IP address. The back-end server is further adapted to send the service request, via the target background service, to the customer service client associated with that background service. The invention also discloses a corresponding service request processing method and computing device.

Description

Service request processing system and method and computing device
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a service request processing system, a service request processing method, a computing device, and a storage medium.
Background
The customer service system plays a very important role in the production and management activities of an enterprise and is a link between users and the enterprise. For example, it plays an important role in product after-sales service, technical support, consultation, and complaint handling. To improve the quality of these services, it is very important to distribute service requests sent by customers to customer service agents effectively. Currently, most implementations distribute customer service to service requests based on load balancing policies. Load balancing distributes the service requests of customer clients across multiple customer service clients as evenly as possible, reducing the situation where one customer service client handles too many requests, and therefore handles them slowly, while other customer service clients sit mostly idle.
At present, load balancing is mainly realized as follows: a developer designs a load balancing policy in advance; when a customer client sends a service request, a customer service client is allocated to the request according to this pre-designed policy; the request is forwarded to that customer service client, which processes it, produces a processing result, and returns the result to the customer client. However, because the developer must design the load balancing policy in advance, the developer's workload is large and the development speed is affected.
Therefore, a new service request processing scheme is needed that reduces the workload of developers in reasonably distributing customer service to service requests, so as to improve development speed.
Disclosure of Invention
To this end, the present invention provides a service request processing system, method and computing device in an attempt to solve or at least alleviate the problems presented above.
According to an aspect of the present invention, there is provided a service request processing system including a network server and a back-end server coupled to each other, the network server being connected to one or more customer clients and the back-end server being connected to one or more customer service clients, wherein:
the back-end server is adapted to respond to a login action of a customer service client by starting a background service for the customer service client, updating a configuration file according to the background service, and synchronizing the configuration file to the network server, wherein the background service comprises the IP address of the customer service client and the port and weight allocated to the customer service client, and a configuration item of the configuration file is the association relation between the background service and the IP address, port and weight;
the network server is adapted to, when receiving a service request from a customer client, allocate a target IP address and a target port to the service request according to the configuration file, and send the service request to the target background service in the back-end server corresponding to the target port and target IP address;
the back-end server is further adapted to send the service request, via the target background service, to the customer service client associated with that background service.
Optionally, the back-end server synchronizes the memory address of the configuration file to the network server through a loading instruction, so that the network server loads the configuration file according to the memory address.
Optionally, the back-end server is further adapted to:
in response to the exit action of the customer service client, destroying the background service of the customer service client, and deleting the destroyed background service, and the IP address, port and weight associated with the destroyed background service from the configuration file.
Optionally, the back-end server is further adapted to:
receiving a processing result obtained by the customer service client processing the service request by using the target background service, and sending the processing result to the network server;
the network server is further adapted to: and sending the processing result to the client side.
Optionally, the back-end server is further adapted to:
monitoring the service quality of the customer service client in real time in the process of processing the service request by the customer service client;
recording the service request processing condition of the customer service client;
and recording the amount of service requests the customer service client has cumulatively processed on the current day.
Optionally, the back-end server assigns the weight based on a first preset rule, where the first preset rule includes one or more of the following: determining the weight of a customer service client according to the experience level of the logged-in customer service client; determining the weight according to the service quality of the logged-in customer service client; determining the weight according to the amount of service requests the logged-in customer service client has cumulatively processed on the current day; and distributing the weight equally among the customer service clients according to the number of logged-in customer service clients. The experience level of a customer service client is determined according to its service quality and its service request processing record.
Optionally, the backend server allocates the port based on a second preset rule, where the second preset rule includes:
when the IP addresses of the customer service clients are the same, distributing different ports for the customer service clients; and when the IP addresses of the customer service clients are different, allocating the same or different ports to the customer service clients.
Optionally, when receiving a service request of the client, the network server allocates a target IP address and a target port to the service request according to the configuration file and based on a preset load balancing policy.
Optionally, the network server is an Nginx network server.
According to another aspect of the present invention, there is provided a service request processing method adapted to operate in the service request processing system described above, the method including:
the method comprises the steps that a back-end server responds to a login action of a customer service client and starts a background service for the customer service client, wherein the background service comprises an IP address of the customer service client, a port and a weight which are distributed for the customer service client;
the back-end server updates a configuration file according to the background service and synchronizes the configuration file to a network server, wherein a configuration item of the configuration file is an association relation between the background service and the IP address, the port and the weight;
when the network server receives a service request of a client, a target IP address and a target port are allocated to the service request according to the configuration file, and the service request is sent to a target background service corresponding to the target port and the target IP address in the back-end server;
and the back-end server sends the service request, via the target background service, to the customer service client associated with that background service.
Optionally, the step of synchronizing the configuration file to the network server by the backend server includes:
and the back-end server synchronizes the memory address of the configuration file to the network server through a loading instruction, so that the network server loads the configuration file according to the memory address.
Optionally, the method further comprises the steps of:
and the back-end server responds to the exit action of the customer service client, destroys the background service of the customer service client, and deletes the destroyed background service and the IP address, the port and the weight which are associated with the destroyed background service from the configuration file.
Optionally, the method further comprises the steps of:
the back-end server receives a processing result obtained by the customer service client by processing the service request by using the target background service, and sends the processing result to the network server;
and the network server sends the processing result to the client side.
Optionally, the backend server allocates the port based on a preset rule, where the preset rule includes: when the IP addresses of the customer service clients are the same, distributing different ports for the customer service clients; and when the IP addresses of the customer service clients are different, allocating the same or different ports to the customer service clients.
Optionally, the step of allocating, by the network server, a target IP address and a target port for the service request according to the configuration file includes:
and the network server allocates a target IP address and a target port for the service request according to the configuration file and based on a preset load balancing strategy.
Optionally, the network server is an Nginx network server.
According to another aspect of the invention, there is provided a computing device comprising: at least one processor; and a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor, the program instructions comprising instructions for performing the method as described above.
According to yet another aspect of the present invention, there is provided a readable storage medium storing program instructions which, when read and executed by a computing device, cause the computing device to perform the method as described above.
The invention provides a service request processing scheme in which a network server with a load balancing policy is placed between the customer clients and the customer service clients to realize load balancing. Specifically, in response to a customer service login action, a background service is dynamically started for that customer service agent; each background service comprises the IP address of the customer service client and the port and weight allocated to it. A configuration file is generated from the background services, and based on this configuration file and the network server's load balancing policy, a target IP address and target port are allocated to each service request. The request is sent to the corresponding target background service in the back-end server, which forwards it to its associated customer service client; the customer service agent processes the request through the target background service, obtains a processing result, and returns it to the customer. The scheme is convenient to implement and deploy: the load balancing function of the network server is reused to realize customer-client to customer-service-client load balancing, without developers having to design the load balancing logic in advance.
Furthermore, the back-end server responds to the dynamic login and logout operations of customer service agents by creating or destroying background services in real time, so that system memory resources can be released promptly, the running efficiency of the system is improved, and customer service agents can process service requests more efficiently.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which are indicative of various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to be within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout this disclosure, like reference numerals generally refer to like parts or elements.
FIG. 1 shows a schematic diagram of a service request processing system 100 according to one embodiment of the invention;
FIG. 2 illustrates a block diagram of a computing device 200, according to one embodiment of the invention;
fig. 3 shows a flow diagram of a service request processing method 300 according to one embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The customer service system plays a very important role in the production and management activities of an enterprise and is a link between users and the enterprise. For example, it plays an important role in product after-sales service, technical support, consultation, and complaint handling. To improve the quality of these services, it is very important to distribute service requests sent by customers to customer service agents reasonably. Currently, most implementations distribute customer service to service requests based on load balancing policies.
Fig. 1 shows a block diagram of a service request processing system 100 in common use at present. As shown in fig. 1, the service request processing system 100 includes: a plurality of customer clients 110, a web server 120, a back-end server 130, and a plurality of customer service clients 140. The customer clients 110 are coupled to the web server 120, the web server 120 is coupled to the back-end server 130, and the back-end server 130 is coupled to the customer service clients 140.
A customer client 110, i.e. a terminal device used by a customer, may specifically be a personal computer such as a desktop or notebook computer, or a mobile phone, tablet computer, multimedia device, smart wearable device, and the like, but is not limited thereto. There may be one or more customer clients 110. The user interaction interface of the customer client 110 includes a data sending interface and a data obtaining interface: by triggering the data sending interface, a user can send a service request to the web server 120, and through the data obtaining interface the user receives the processing result for that request from the web server 120.
The web server 120 receives the service request from the customer client 110, forwards it to the back-end server 130, receives the processing result that the back-end server 130 returns for the request, and forwards the result to the customer client 110. The present invention does not limit the specific deployment and configuration of the web server 120. The back-end server 130 receives the service request sent by the web server 120 and dispatches it to a customer service client 140 according to the request and the distribution of the customer service clients 140. When a customer service client 140 processes the request and obtains a processing result, the back-end server 130 receives that result from the customer service client 140. The present invention does not limit the specific deployment and configuration of the back-end server 130.
Similarly, there may be one or more customer service clients 140; the terminal device used by a customer service agent may likewise be a personal computer such as a desktop or notebook computer, or a mobile phone, tablet computer, multimedia device, smart wearable device, and the like, but is not limited thereto. The user interaction interface of the customer service client 140 includes a data sending interface and a data obtaining interface: by triggering the data obtaining interface, the customer service client 140 fetches pending service requests from the back-end server 130, and by triggering the data sending interface it returns processing results to the back-end server 130.
The process of implementing customer-client to customer-service-client load balancing with the service request processing system 100 of fig. 1 is as follows: the web server 120 obtains the service request sent by the customer client 110, allocates a customer service client 140 to it according to load balancing logic designed in advance by the developer and the current request load of the customer service clients 140, and forwards the request to the allocated customer service client 140 through the back-end server 130, where the customer service agent resolves the customer's problem. For example, the pre-designed load balancing policy may be: if the number of customers currently consulting exceeds the number of customer service agents online, the web server 120 places customer clients 110 in a queue and transfers them to customer service clients 140 in queue order; if there are idle customer service clients 140, one is assigned to the customer client 110 according to a distribution algorithm. However, the load balancing policy in this method must be designed in advance by a developer; that is, the entire process of balancing load from customer clients 110 to customer service clients 140 must be developed entirely by the developer, so the developer's workload is large and the development speed is affected.
To solve the above problems, the present invention also uses the service request processing system of fig. 1 to implement customer-client to customer-service-client load balancing, except that the network server 120 is required to be a network server with built-in load balancing logic, such as an Nginx server. The invention is not limited to a specific type of network server: any network server with a load balancing function is within its protection scope. It should be noted that the Nginx server may be deployed independently or may reside on the back-end server; the invention is not limited in this respect. Since the network server 120 already provides the load balancing policy, developers do not need to develop that part themselves, which improves development efficiency.
In the present invention, to apply the load balancing logic of the network server 120 to the customer-client/customer-service-client scenario, the back-end server 130 dynamically starts a background service for a customer service client 140 in real time based on that client's login activity. Each customer service client corresponds to one background service, and each background service comprises the IP address of the customer service client and the port and weight allocated to it; that is, one customer service client is represented by a group consisting of a port, an IP address and a weight. The load balancing policy of the network server then allocates each group of port and IP address reasonably to realize load balancing of service requests, thereby applying the network server 120 with its load balancing policy to the customer-client/customer-service-client scenario.
The service request processing of the present invention starts with the back-end server 130 responding to the login action of a customer service client 140: after a successful login, it starts a background service for that customer service client. The customer service agent logs in from a terminal device with an account and password registered at the back-end server. It should be noted that starting a background service for the customer service client 140 is equivalent to creating a background process for it, or allocating an idle background process to it, and each background service corresponds to a port, an IP address and a weight.
Here, the IP address is the unique address of the terminal device the customer service agent used when logging in. The ports, i.e. the port numbers, may be the same or different, and whether they may coincide depends directly on the IP addresses of the customer service clients 140. When multiple customer service clients 140 share the same IP address, they logged in from the same terminal device; since port numbers on one device cannot be repeated, their port numbers must differ. If the IP addresses differ, the customer service clients logged in from different terminal devices; port numbers on different devices can be repeated, so their port numbers may then be the same.
For example, when two customer service clients 140 numbered 1 and 2 log in through the same terminal device, their IP addresses are the same; the port number of the background service for client 1 may then be the dynamic port 1090, while the port number for client 2 must be some other port, for example 1098. When the two clients log in through different terminal devices, their IP addresses differ; the port number for client 1 may be 1090, and the port number for client 2 may also be 1090 or any other port number.
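The port rule above can be sketched in code. The allocator below is a hypothetical illustration (the patent does not specify an algorithm, and the class and method names are assumptions): ports must be unique per IP address, but the same port may be reused across different IP addresses.

```python
from collections import defaultdict

class PortAllocator:
    """Illustrative sketch: assign ports unique per IP, reusable across IPs."""

    def __init__(self, start=1090, end=65535):
        self.start, self.end = start, end
        self.used = defaultdict(set)  # ip -> ports already assigned on that ip

    def allocate(self, ip):
        # Scan for the lowest port not yet used on this particular IP;
        # a port taken on one IP does not block the same port on another IP.
        for port in range(self.start, self.end + 1):
            if port not in self.used[ip]:
                self.used[ip].add(port)
                return port
        raise RuntimeError(f"no free port left on {ip}")

alloc = PortAllocator()
p1 = alloc.allocate("10.0.0.5")  # first agent on this machine -> 1090
p2 = alloc.allocate("10.0.0.5")  # same machine, must differ -> 1091
p3 = alloc.allocate("10.0.0.6")  # different machine, 1090 may repeat
```

A usage note: on one device the operating system enforces this uniqueness anyway; the sketch only mirrors the rule as the patent states it.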
According to an embodiment of the present invention, the weight of a customer service client 140 is calculated and assigned by a background calculation module of the back-end server 130 based on a preset rule, which includes one or more of the following: determining the weight according to the experience level of the logged-in customer service client, where the experience level is determined from the client's service quality and its service request processing record; determining the weight according to the service quality of the logged-in customer service client; determining the weight according to the amount of service requests the logged-in customer service client has cumulatively processed on the current day; and distributing the weight equally among the logged-in customer service clients according to their number. Of course, the invention is not limited to a specific weight calculation method; all methods of calculating the weight of a customer service client are within its protection scope.
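Two of the preset rules above can be illustrated with minimal sketches. The patent gives no concrete formula, so the scoring scale, function names and constants here are assumptions, not the claimed method.

```python
def weight_from_quality(quality_score, max_weight=10):
    """Hypothetical rule: higher service quality (0.0-1.0) -> higher weight.
    A floor of 1 keeps every logged-in agent reachable."""
    return max(1, round(quality_score * max_weight))

def weight_equal_split(num_logged_in, total_weight=100):
    """Hypothetical rule: distribute a fixed weight budget equally
    among all currently logged-in customer service clients."""
    return total_weight // num_logged_in if num_logged_in else 0

w_good = weight_from_quality(0.93)  # a high-quality agent
w_new = weight_from_quality(0.0)    # floor keeps the agent in rotation
w_each = weight_equal_split(4)      # four agents online
```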
After determining the IP address of each customer service client 140 and allocating a port and weight to it, the background control module of the back-end server 130 updates the configuration file based on the background services. A configuration item of the configuration file is the association relation between a background service and its IP address, port and weight, and the configuration file consists of one or more such items. The configuration file is in the specific format the web server 120 requires to implement load balancing. Since such configuration files are commonly used by network servers implementing load balancing policies, their content and requirements are well-known technology and are not described in detail here, though they remain within the scope of the present invention.
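For an Nginx network server, the configuration items described above would typically take the shape of an `upstream` block, with one `server ip:port weight=w;` line per background service. The patent does not show the file, so the upstream name and entries below are illustrative; the renderer just makes the association relation concrete.

```python
def render_upstream(name, services):
    """Render a list of {ip, port, weight} items as an Nginx upstream block.
    The block name and field names are illustrative assumptions."""
    lines = [f"upstream {name} {{"]
    for svc in services:
        # one configuration item: background service -> (ip, port, weight)
        lines.append(f"    server {svc['ip']}:{svc['port']} weight={svc['weight']};")
    lines.append("}")
    return "\n".join(lines)

conf = render_upstream("customer_service", [
    {"ip": "10.0.0.5", "port": 1090, "weight": 3},
    {"ip": "10.0.0.6", "port": 1090, "weight": 1},
])
```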
As described above, after the customer service client 140 successfully logs in, a background service is created for it or an idle background service is allocated to it. After the customer service client 140 logs out of its account, however, the memory resources occupied by that background service should be released promptly to improve the operating efficiency of the system. The back-end server 130 therefore destroys the background service corresponding to the logged-out customer service client in response to the logout action, so that the memory resources occupied by the background service are released in time, the resource utilization and operating efficiency of the system are improved, and the remaining customer service clients can process service requests from customer clients more efficiently.
Subsequently, the IP address, port, and weight associated with the destroyed background service are deleted from the configuration file, which amounts to updating the configuration file again. After updating the configuration file, the backend server 130 synchronizes it to the web server 120. Specifically, the backend server 130 sends a load instruction to the web server 120 through the background control module; for example, the load instruction may be an nginx -s reload instruction. The load instruction includes a command for controlling the web server to load the configuration file, together with the storage address of the updated configuration file on the back-end server. The web server 120 reads the updated configuration file from that address on the back-end server 130 and loads it, so that the updated configuration file takes effect in the web server 120.
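A minimal sketch of how the background control module might package such a load instruction; the function name, dictionary layout, and configuration path are hypothetical:

```python
def build_load_instruction(config_path):
    """Hypothetical load instruction sent from the back-end server to
    the web server: a reload command for the web server plus the
    address of the updated configuration file on the back-end server.
    """
    return {
        # Command controlling the web server to (re)load its configuration.
        "command": "nginx -s reload",
        # Where the web server can read the updated configuration file.
        "config_path": config_path,
    }
```

In practice the reload step would be preceded by a syntax check of the new configuration, so that a malformed file never takes effect on the web server.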
After the configuration file required by the load balancing policy of the web server 120 has been generated and has taken effect in the web server 120, the web server 120 can reasonably allocate ports and IP addresses for service requests from the customer client 110. Specifically, when the web server 120 receives a service request from the customer client 110, it assigns a target port and a target IP address to the service request according to the configuration file and the load balancing policy. The load balancing policy itself is a well-known technology and is not described here, but all policies for achieving load balancing are within the scope of the present invention.
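One well-known policy the web server could apply is weighted round-robin; the sketch below implements the smooth weighted round-robin variant popularized by nginx over (IP, port, weight) entries of the kind recorded in the configuration file. The concrete entries are assumptions for illustration:

```python
class SmoothWeightedRoundRobin:
    """Pick a target (IP, port) for each incoming service request so
    that, over time, targets are chosen in proportion to their weights,
    without sending long bursts to the same target."""

    def __init__(self, entries):
        # entries: list of (ip, port, weight) from the configuration file.
        self.entries = [
            {"ip": ip, "port": port, "weight": w, "current": 0}
            for ip, port, w in entries
        ]

    def pick(self):
        total = sum(e["weight"] for e in self.entries)
        # Each target accumulates its weight, the largest accumulator
        # wins and is penalized by the total, keeping the rotation smooth.
        for e in self.entries:
            e["current"] += e["weight"]
        best = max(self.entries, key=lambda e: e["current"])
        best["current"] -= total
        return best["ip"], best["port"]
```

Over any window of `total` consecutive picks, each target is selected exactly `weight` times, which is the proportional behavior the configuration file's weights are meant to produce.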
Next, the web server 120 sends the service request to the target background service in the backend server 130 corresponding to the target port and the target IP address. The target background service then sends the service request to the customer service client 140 associated with it. The customer service client 140 processes the service request using the target background service to obtain a processing result and sends the processing result to the backend server 130 through the background service; the backend server 130 sends the processing result to the web server 120, and the web server sends it on to the customer client 110.
The service request may include: a consultation-type service request, such as a request to consult on product details or product usage that requires online resolution by customer service; or an online-operation-type service request, such as a request for customer service to provide operational help or to perform an operation online. Of course, the present invention is not limited to a specific type of service request, and all service requests that can be distributed according to the load balancing principle are within the scope of the present invention.
According to an embodiment of the present invention, in order to obtain the historical data required for calculating the weight of a successfully logged-in customer service client 140, the backend server 130 monitors the service quality of the customer service client 140 in real time while the customer service client 140 processes service requests using the background service, and stores the service quality in the storage module of the backend server 130. The back-end server 130 also records how the customer service client 140 handles each service request, and stores this in the storage module. The backend server 130 further records the cumulative number of service requests processed by the customer service client 140 on the current day, and stores this number in the storage module. The back-end server 130 can then calculate the weight of each online customer service client 140 according to the recorded service quality, request-handling history, and number of requests processed on the current day, and assign the calculated weight to the background service corresponding to that customer service client.
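A minimal sketch of the storage-module bookkeeping described here; the record fields and method names are assumptions for illustration:

```python
from collections import defaultdict

class AgentMetricsStore:
    """Hypothetical storage module: per customer service client, keep
    the monitored service quality, the request-handling records, and
    the cumulative request count for the current day."""

    def __init__(self):
        self.quality = {}                     # client id -> latest quality score
        self.handled = defaultdict(list)      # client id -> handling records
        self.count_today = defaultdict(int)   # client id -> requests today

    def record(self, client_id, quality, outcome):
        # Called each time a service request finishes processing.
        self.quality[client_id] = quality
        self.handled[client_id].append(outcome)
        self.count_today[client_id] += 1
```

A daily reset of `count_today` (e.g. by a scheduled job) would keep the "same day" counter accurate; that detail is omitted here.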
The storage device may be a database, which may be a relational database such as MySQL or Access, or a non-relational (NoSQL) database. The storage device may be a local database residing in the computing device, or it may be deployed across multiple geographic locations as a distributed database such as HBase. In short, the storage device is used to store the service quality of each customer service client 140, its handling of service requests, and the number of service requests it has processed cumulatively on the current day.
The method is convenient to implement and deploy: it applies a web server that already has a load balancing policy to load balancing between customer clients and customer service clients; that is, the web server's own load balancing policy is used to balance that load, without developers having to design a load balancing policy in advance. It should be noted that the service request processing system 100 shown in fig. 1 is only an example; in a specific implementation, different numbers of customer clients 110, web servers 120, backend servers 130, and customer service clients 140 may be deployed, and the number and deployment area of these devices are not limited by the present invention.
The web server 120 and the backend server 130 in the service request processing system 100 may be implemented as one computing device 200, but the invention is not limited to the specific device type of the server, for example, the web server 120 and the backend server 130 may be implemented as a computing device such as a desktop computer, a notebook computer, a processor chip, a mobile phone, a tablet computer, and the like, but are not limited thereto.
Fig. 2 shows a block diagram of a computing device 200 according to an embodiment of the present invention, and it should be noted that the computing device 200 shown in fig. 2 is only an example, and in practice, a computing device for implementing the service request processing method of the present invention may be any type of device, and the hardware configuration thereof may be the same as the computing device 200 shown in fig. 2 or different from the computing device 200 shown in fig. 2. In practice, the computing device for implementing the load balancing method of the present invention may add or delete hardware components of the computing device 200 shown in fig. 2, and the present invention does not limit the specific hardware configuration of the computing device.
As shown in FIG. 2, in a basic configuration 202, a computing device 200 typically includes a system memory 206 and one or more processors 204. A memory bus 208 may be used for communication between the processor 204 and the system memory 206.
Depending on the desired configuration, the processor 204 may be any type of processor, including but not limited to: a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. The processor 204 may include one or more levels of cache, such as a level one cache 210 and a level two cache 212, a processor core 214, and registers 216. An example processor core 214 may include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP core), or any combination thereof. An example memory controller 218 may be used with the processor 204, or in some implementations the memory controller 218 may be an internal part of the processor 204.
Depending on the desired configuration, system memory 206 may be any type of memory, including but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. The physical memory in the computing device is usually volatile memory (RAM), and data on disk must be loaded into physical memory before it can be read by the processor 204. System memory 206 may include an operating system 220, one or more applications 222, and program data 224. In some implementations, the application 222 can be arranged to execute instructions on the operating system with the program data 224 by the one or more processors 204. Operating system 220 may be, for example, Linux or Windows, and includes program instructions for handling basic system services and for performing hardware-dependent tasks. The application 222 includes program instructions for implementing various user-desired functions; the application 222 may be, for example, but not limited to, a browser, an instant messenger, or a software development tool (e.g., an integrated development environment (IDE), a compiler, etc.). When the application 222 is installed into the computing device 200, a driver module may be added to the operating system 220.
When computing device 200 is started, processor 204 reads program instructions for operating system 220 from system memory 206 and executes them. Applications 222 run on top of operating system 220, utilizing the interface provided by operating system 220 and the underlying hardware to implement various user-desired functions. When a user launches an application 222, the application 222 is loaded into the system memory 206, and the processor 204 reads and executes the program instructions of the application 222 from the system memory 206.
Computing device 200 also includes storage device 232, storage device 232 including removable storage 236 and non-removable storage 238, each of removable storage 236 and non-removable storage 238 being connected to storage interface bus 234.
Computing device 200 may also include an interface bus 240 that facilitates communication from various interface devices (e.g., output devices 242, peripheral interfaces 244, and communication devices 246) to the basic configuration 202 via the bus/interface controller 230. The example output device 242 includes a graphics processing unit 248 and an audio processing unit 250. They may be configured to facilitate communication with various external devices, such as a display or speakers, via one or more a/V ports 252. Example peripheral interfaces 244 can include a serial interface controller 254 and a parallel interface controller 256, which can be configured to facilitate communications with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device) or other peripherals (e.g., printer, scanner, etc.) via one or more I/O ports 258. An example communication device 246 may include a network controller 260, which may be arranged to facilitate communications with one or more other computing devices 262 over a network communication link via one or more communication ports 264.
A network communication link may be one example of a communication medium. Communication media may typically be embodied by computer readable instructions, data structures, or program modules in a modulated data signal, such as a carrier wave or other transport mechanism, and may include any information delivery media. A "modulated data signal" is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired network or dedicated wired connection, and various wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR), or other wireless media. The term computer readable media as used herein may include both storage media and communication media.
In a computing device 200 according to the invention, the application 222 includes instructions for performing the method 300 of the invention, which may instruct the processor 204 to perform the method 300 of the invention.
Fig. 3 shows a flow diagram of a service request processing method 300 according to an embodiment of the invention, which is suitable for execution in the service request processing system shown in fig. 1. The service request processing method 300 of one embodiment of the present invention begins at step S310. In step S310, the back-end server starts a background service for a customer service client in response to the login action of the customer service client. The customer service client logs in from a terminal device using an account and password registered with the back-end server.
It should be noted that starting the background service for the customer service client 140 is equivalent to creating a background process for the customer service client 140, or allocating a free background process for the customer service client 140. Each background service comprises an IP address of the customer service client, and a port and a weight which are distributed to the customer service client.
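The create-or-allocate step can be sketched as follows; the pool structure and field names are hypothetical:

```python
class BackgroundServicePool:
    """Start a background service for a logged-in customer service
    client: reuse an idle background service if one exists, otherwise
    create a new one. Each service record carries the client's IP
    address and its assigned port and weight."""

    def __init__(self):
        self.idle = []    # previously created, currently unused services
        self.active = {}  # client id -> service record

    def start_for(self, client_id, ip, port, weight):
        # Allocate an idle service if available, else create a new one.
        service = self.idle.pop() if self.idle else {}
        service.update({"ip": ip, "port": port, "weight": weight})
        self.active[client_id] = service
        return service

    def destroy(self, client_id):
        # On logout, release the service so its resources can be reused.
        self.idle.append(self.active.pop(client_id))
```

Returning destroyed services to an idle list models the resource reuse the specification describes; an implementation could equally terminate the process outright to release its memory immediately.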
Here, the IP address is the unique address of the terminal device used when the customer service client logs in with its account and password. The ports, i.e., the port numbers assigned to the background services, may be the same or different, and whether they can coincide depends directly on the IP addresses of the customer service clients 140. When multiple customer service clients 140 share the same IP address, they logged in from the same terminal device; since port numbers on the same terminal device cannot be repeated, those customer service clients must be assigned different port numbers. If the IP addresses of multiple customer service clients differ, they logged in from different terminal devices; since port numbers on different terminal devices can be repeated, those customer service clients may be assigned the same port number.
For example, when two customer service clients 140 numbered 1 and 2 log in with their account passwords through the same terminal device, the IP addresses of client 1 and client 2 are the same; the port number of the background service corresponding to client 1 may then be the dynamic port 1090, while the port number of the background service corresponding to client 2 must be some other port, for example 1098. When the two customer service clients 140 log in through different terminal devices, the IP addresses of client 1 and client 2 differ; the port number of the background service corresponding to client 1 may be 1090, and the port number of the background service corresponding to client 2 may also be 1090, or any other port number.
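The allocation rule in this example can be sketched as follows; the starting port and the registry layout are illustrative assumptions:

```python
class PortAllocator:
    """Assign a port to each background service such that port numbers
    never repeat within one IP address (one terminal device), while the
    same port number may recur across different IP addresses."""

    def __init__(self, base_port=1090):
        self.base_port = base_port
        self.used = {}  # ip -> set of ports already assigned on that host

    def allocate(self, ip):
        taken = self.used.setdefault(ip, set())
        port = self.base_port
        while port in taken:  # skip ports already used on this host
            port += 1
        taken.add(port)
        return port
```

Two clients on the same host thus receive distinct ports, while clients on different hosts may both receive the base port, exactly as in the numbered example above.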
According to an embodiment of the present invention, the weight of the customer service client 140 is calculated and assigned by the background calculation module of the backend server 130 based on a preset rule, and the preset rule includes one or more of the following: determining the weight of the customer service client according to the experience level of the logged-in customer service client, wherein the experience level is determined according to the service quality of the customer service client and its handling of past service requests; determining the weight according to the service quality of the logged-in customer service client; determining the weight according to the number of service requests the logged-in customer service client has accumulated on the current day; and distributing the weight equally among the customer service clients according to the number of logged-in customer service clients. Of course, the present invention is not limited to a specific method of calculating the weight of the customer service client, and all such calculation methods are within the protection scope of the present invention.
Then, in step S320, the backend server updates the configuration file according to the background service and synchronizes the configuration file to the web server. Each configuration item of the configuration file records the association between a background service and its IP address, port, and weight, and the configuration file is composed of one or more such configuration items.
The configuration file is a specially formatted file required by the web server 120 to implement load balancing. Since such a configuration file is commonly used by the web server 120 to implement the load balancing function, its content, format requirements, and the like are well-known technologies and are not described in detail here, but they are within the scope of the present invention.
As described above, after the customer service client 140 successfully logs in, a background service is created for it or an idle background service is allocated to it. After the customer service client 140 logs out of its account, however, the memory resources occupied by that background service should be released promptly to improve the operating efficiency of the system. The back-end server 130 therefore destroys the background service corresponding to the logged-out customer service client in response to the logout action, so that the memory resources occupied by the background service are released in time, the resource utilization and operating efficiency of the system are improved, and the remaining customer service clients can process service requests from customer clients more efficiently.
Subsequently, the IP address, port, and weight associated with the destroyed background service are deleted from the configuration file, which amounts to updating the configuration file again. After updating the configuration file, the backend server 130 synchronizes it to the web server 120. Specifically, the backend server 130 sends a load instruction to the web server 120 through the background control module; for example, the load instruction may be an nginx -s reload instruction. The load instruction includes a command for controlling the web server to load the configuration file, together with the storage address of the updated configuration file on the back-end server. The web server 120 reads the updated configuration file from that address on the back-end server 130 and loads it, so that the updated configuration file takes effect in the web server 120.
After the configuration file required by the load balancing policy of the web server 120 has been generated and has taken effect in the web server 120, the web server 120 can reasonably allocate ports and IP addresses for service requests from the customer client 110. Therefore, in step S330, when the web server receives a service request from the customer client, it allocates a target IP address and a target port to the service request according to the configuration file, and sends the service request to the target background service in the backend server corresponding to the target port and the target IP address. Specifically, when the web server 120 receives a service request from the customer client 110, it allocates a target port and a target IP address to the service request according to the configuration file and the load balancing policy, and sends the service request to the corresponding target background service in the backend server. The load balancing policy itself is a well-known technology and is not described here, but all policies for achieving load balancing are within the scope of the present invention.
Finally, in step S340, the backend server sends, via the target background service, the service request to the customer service client associated with that service. The customer service client 140 processes the service request using the target background service to obtain a processing result and sends the processing result to the backend server 130 through the background service; the backend server 130 sends the processing result to the web server 120, and the web server sends it on to the customer client 110.
The service request may include: a consultation-type service request, such as a request to consult on product details or product usage that requires online resolution by customer service; or an online-operation-type service request, such as a request for customer service to provide operational help or to perform an operation online. Of course, the present invention is not limited to a specific type of service request, and all service requests that can be distributed according to the load balancing principle are within the scope of the present invention.
According to an embodiment of the present invention, in order to obtain the historical data required for calculating the weight of a successfully logged-in customer service client 140, the backend server 130 monitors the service quality of the customer service client 140 in real time while the customer service client 140 processes service requests using the background service, and stores the service quality in the storage module of the backend server 130. The back-end server 130 also records how the customer service client 140 handles each service request, and stores this in the storage module. The backend server 130 further records the cumulative number of service requests processed by the customer service client 140 on the current day, and stores this number in the storage module. The back-end server 130 can then calculate the weight of each online customer service client 140 according to the recorded service quality, request-handling history, and number of requests processed on the current day, and assign the calculated weight to the background service corresponding to that customer service client.
The method provided by the invention realizes load balancing between customer clients and customer service clients by means of a web server configured with a load balancing policy. Specifically, in response to a customer service login action, a background service is dynamically started for the customer service client, each background service comprising the IP address of the customer service client and the port and weight allocated to it; the configuration file is updated according to the background services; a target IP address and a target port are allocated to each service request based on the configuration file and the load balancing policy of the web server; the service request is sent to the target background service in the back-end server corresponding to the target port and target IP address; the target background service sends the service request to the customer service client associated with it; and the customer service client processes the service request through the target background service to obtain a processing result, which is returned to the customer client. The method is convenient to implement and deploy, and uses the load balancing policy of a web server that already has a load balancing function to balance load between customer clients and customer service clients, without developers having to design the load balancing logic in advance.
The various techniques described herein may be implemented in connection with hardware or software, or with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as removable hard drives, USB flash drives, floppy disks, CD-ROMs, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Wherein the memory is configured to store program code; the processor is configured to perform the service request processing method of the present invention according to instructions in the program code stored in the memory.
By way of example, and not limitation, readable media may comprise readable storage media and communication media. Readable storage media store information such as computer readable instructions, data structures, program modules or other data. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. Combinations of any of the above are also included within the scope of readable media.
In the description provided herein, algorithms and displays are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with examples of this invention. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the various inventive aspects. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the described embodiments are described herein as a method or combination of method elements that can be performed by a processor of a computer system or by other means of performing the described functions. A processor having the necessary instructions for carrying out the method or method elements thus forms a means for carrying out the method or method elements. Further, the elements of the apparatus embodiments described herein are examples of the following apparatus: the apparatus is used to implement the functions performed by the elements for the purpose of carrying out the invention.
As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The present invention has been disclosed in an illustrative rather than a restrictive sense with respect to the scope of the invention, as defined in the appended claims.

Claims (10)

1. A service request processing system, comprising a web server and a back-end server coupled to each other, the web server being connected to one or more clients and the back-end server being connected to one or more customer service clients, wherein:
the back-end server is adapted to: in response to a login action of a customer service client, start a background service for the customer service client, update a configuration file according to the background service, and synchronize the configuration file to the web server, wherein the background service comprises an IP address of the customer service client and a port and a weight allocated to the customer service client, and a configuration item of the configuration file is an association relationship between the background service and the IP address, the port, and the weight;
the web server is adapted to: upon receiving a service request from a client, allocate a target IP address and a target port to the service request according to the configuration file, and send the service request to a target background service in the back-end server corresponding to the target IP address and the target port; and
the back-end server is further adapted to send, through the target background service, the service request to the customer service client associated with the target background service.
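The weighted dispatch of claim 1 can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the claims do not specify a configuration schema, a selection algorithm, or any of these names; a weighted random choice merely stands in for "allocate a target IP address and a target port according to the configuration file".

```python
import random

# Hypothetical configuration file contents: each configuration item
# associates a background service with the customer service client's
# IP address and the allocated port and weight.
CONFIG = {
    "svc-1": {"ip": "10.0.0.11", "port": 7001, "weight": 3},
    "svc-2": {"ip": "10.0.0.12", "port": 7002, "weight": 1},
}

def pick_target(config):
    """Select a target background service for an incoming service request,
    favouring entries with higher weights (weighted random choice)."""
    names = list(config)
    weights = [config[n]["weight"] for n in names]
    name = random.choices(names, weights=weights, k=1)[0]
    entry = config[name]
    return name, entry["ip"], entry["port"]
```

Here a service with weight 3 receives, on average, three times as many requests as a service with weight 1.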
2. The system of claim 1, wherein the back-end server synchronizes a memory address of the configuration file to the web server through a load instruction, so that the web server loads the configuration file according to the memory address.
3. The system of claim 1, wherein the back-end server is further adapted to:
in response to a logout action of the customer service client, destroy the background service of the customer service client, and delete the destroyed background service, together with the IP address, port, and weight associated with it, from the configuration file.
4. The system of any one of claims 1 to 3, wherein the back-end server is further adapted to:
receive, through the target background service, a processing result obtained by the customer service client processing the service request, and send the processing result to the web server;
the web server is further adapted to send the processing result to the client.
5. The system of claim 4, wherein the back-end server is further adapted to:
monitor the service quality of the customer service client in real time while the customer service client processes the service request;
record the service request processing condition of the customer service client; and
record the amount of service requests cumulatively processed by the customer service client on the current day.
6. The system of claim 5, wherein the back-end server allocates the weight based on a first preset rule, wherein the first preset rule comprises one or more of the following rules: determining the weight of a logged-in customer service client according to its experience level; determining the weight of a logged-in customer service client according to its service quality; determining the weight of a logged-in customer service client according to the amount of service requests it has cumulatively processed on the current day; and distributing the weight equally among the customer service clients according to the number of logged-in customer service clients; wherein the experience level of a customer service client is determined according to its service quality and its service request processing condition.
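Two of the weight rules in claim 6 might be sketched as follows. The scoring formula, the parameter names, and the 100-request daily cap are assumptions for illustration; the claim only names the inputs (experience level, service quality, requests processed on the current day, number of logged-in clients), not how they are combined.

```python
def assign_weight(experience_level, quality_score, handled_today, max_daily=100):
    """Hypothetical combination of the per-client rules in claim 6:
    experience level and service quality raise a customer service client's
    weight, while the amount of requests already handled on the current
    day lowers it, so busy agents receive fewer new requests."""
    load_factor = 1.0 - min(handled_today / max_daily, 1.0)
    return round((experience_level + quality_score) * load_factor, 2)

def equal_weights(n_clients, total=100.0):
    """The equal-distribution rule: split a weight budget evenly over
    the number of logged-in customer service clients."""
    return [total / n_clients] * n_clients
```

An agent who has already reached the daily cap ends up with weight 0 and stops receiving requests until the counter resets.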
7. The system of claim 6, wherein the back-end server allocates the port based on a second preset rule, wherein the second preset rule comprises:
when the IP addresses of the customer service clients are the same, allocating different ports to the customer service clients; and when the IP addresses of the customer service clients are different, allocating the same or different ports to the customer service clients.
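The second preset rule of claim 7 can be illustrated with a small sketch. The port range and the `(ip, port)` registry are assumptions, not part of the claim:

```python
def allocate_port(in_use, ip, ports=range(7000, 7100)):
    """Allocate a port under the rule of claim 7: customer service clients
    sharing an IP address must receive distinct ports, while clients on
    different IP addresses may reuse the same port.

    `in_use` is a hypothetical set of (ip, port) pairs already allocated."""
    taken = {p for (i, p) in in_use if i == ip}  # ports busy on this IP only
    for port in ports:
        if port not in taken:
            return port
    raise RuntimeError("no free port left for IP " + ip)
```

Because `taken` is filtered by IP address, a port held by a client on a different IP remains available, which matches the second half of the rule.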
8. A service request processing method, adapted to be executed in the service request processing system of any one of claims 1-7, the method comprising:
the back-end server, in response to a login action of a customer service client, starting a background service for the customer service client, wherein the background service comprises an IP address of the customer service client and a port and a weight allocated to the customer service client;
the back-end server updating a configuration file according to the background service and synchronizing the configuration file to the web server, wherein a configuration item of the configuration file is an association relationship between the background service and the IP address, the port, and the weight;
the web server, upon receiving a service request from a client, allocating a target IP address and a target port to the service request according to the configuration file, and sending the service request to a target background service in the back-end server corresponding to the target IP address and the target port; and
the back-end server sending, through the target background service, the service request to the customer service client associated with the target background service.
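The login/logout bookkeeping in the method above (and the deletion step of claim 3) might be modelled as below. The `Backend` class, its attribute names, and the dict-copy stand-in for the synchronization step are all assumptions; the patent does not prescribe any particular data structure.

```python
class Backend:
    """Toy model of the back-end server: a login starts a background
    service, the configuration file is updated, and the result is
    synchronized to the web server (modelled here as a plain dict copy)."""

    def __init__(self):
        self.config = {}           # service id -> {"ip", "port", "weight"}
        self.web_server_copy = {}  # the web server's view of the config

    def on_login(self, service_id, ip, port, weight):
        # Start a background service and record its association.
        self.config[service_id] = {"ip": ip, "port": port, "weight": weight}
        self._sync()

    def on_logout(self, service_id):
        # Destroy the background service and delete its configuration item.
        self.config.pop(service_id, None)
        self._sync()

    def _sync(self):
        # Stand-in for synchronizing the configuration file to the web server.
        self.web_server_copy = dict(self.config)
```

After a logout, the web server's copy no longer contains the departed agent, so no new requests can be routed to it.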
9. A computing device, comprising:
at least one processor; and
a memory storing program instructions configured for execution by the at least one processor, the program instructions comprising instructions for performing the method of claim 8.
10. A readable storage medium storing program instructions that, when read and executed by a computing device, cause the computing device to perform the method of claim 8.
CN202011292015.1A 2020-11-18 2020-11-18 Service request processing system and method and computing device Active CN112104753B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110105699.8A CN112866395B (en) 2020-11-18 2020-11-18 Service request processing system and method and computing device
CN202011292015.1A CN112104753B (en) 2020-11-18 2020-11-18 Service request processing system and method and computing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011292015.1A CN112104753B (en) 2020-11-18 2020-11-18 Service request processing system and method and computing device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202110105699.8A Division CN112866395B (en) 2020-11-18 2020-11-18 Service request processing system and method and computing device

Publications (2)

Publication Number Publication Date
CN112104753A (en) 2020-12-18
CN112104753B (en) 2021-03-19

Family

ID=73785925

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110105699.8A Active CN112866395B (en) 2020-11-18 2020-11-18 Service request processing system and method and computing device
CN202011292015.1A Active CN112104753B (en) 2020-11-18 2020-11-18 Service request processing system and method and computing device


Country Status (1)

Country Link
CN (2) CN112866395B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114866421B (en) * 2022-05-13 2024-05-14 西安广和通无线通信有限公司 Port management method, device, equipment and computer readable storage medium
CN115134227A (en) * 2022-06-17 2022-09-30 京东科技信息技术有限公司 Method and apparatus for maintaining server

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101227632A (en) * 2008-01-25 2008-07-23 深圳市科陆电子科技股份有限公司 Distributed call centre system and traffic distributed transferring method
CN103118142A (en) * 2013-03-14 2013-05-22 曙光信息产业(北京)有限公司 Load balancing method and system
WO2016133965A1 (en) * 2015-02-18 2016-08-25 KEMP Technologies Inc. Methods for intelligent data traffic steering
CN107465616A (en) * 2016-06-03 2017-12-12 中国移动通信集团四川有限公司 Client-based service routing method and device
CN109688280A (en) * 2018-08-21 2019-04-26 平安科技(深圳)有限公司 Request processing method, request processing equipment, browser and storage medium
CN109729228A (en) * 2018-12-28 2019-05-07 上海云信留客信息科技有限公司 Artificial intelligence calling system
CN109743392A (en) * 2019-01-07 2019-05-10 北京字节跳动网络技术有限公司 A kind of load-balancing method, device, electronic equipment and storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007074797A1 (en) * 2005-12-28 2007-07-05 International Business Machines Corporation Load distribution in client server system
US9866487B2 (en) * 2014-06-05 2018-01-09 KEMP Technologies Inc. Adaptive load balancer and methods for intelligent data traffic steering
CN105141693A (en) * 2015-09-10 2015-12-09 上海斐讯数据通信技术有限公司 Distributed server framework and operation method thereof
CN105391737B (en) * 2015-12-14 2016-11-23 福建六壬网安股份有限公司 A kind of load balancing main frame group's file synchronization processing method
CN107995013B (en) * 2016-10-26 2020-08-18 腾讯科技(深圳)有限公司 Customer service distribution method and device
CN109818997A (en) * 2017-11-21 2019-05-28 中兴通讯股份有限公司 A kind of load-balancing method, system and storage medium
CN110225131A (en) * 2019-06-19 2019-09-10 广州小鹏汽车科技有限公司 A kind of service calling method and device
CN110198359A (en) * 2019-07-08 2019-09-03 紫光云技术有限公司 A kind of load-balancing method and device
CN111857974A (en) * 2020-07-30 2020-10-30 江苏方天电力技术有限公司 Service access method and device based on load balancer


Also Published As

Publication number Publication date
CN112866395A (en) 2021-05-28
CN112866395B (en) 2023-04-07
CN112104753A (en) 2020-12-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant