CN113746933A - Method and device for displaying information - Google Patents

Method and device for displaying information

Info

Publication number
CN113746933A
CN113746933A (application number CN202111071707.8A)
Authority
CN
China
Prior art keywords
load condition information, backend server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111071707.8A
Other languages
Chinese (zh)
Inventor
冷曼曼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Technology Information Technology Co Ltd
Original Assignee
Jingdong Technology Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Technology Information Technology Co Ltd
Priority to CN202111071707.8A
Publication of CN113746933A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/06 Management of faults, events, alarms or notifications
    • H04L41/0631 Management of faults, events, alarms or notifications using root cause analysis; using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/10 Active monitoring, e.g. heartbeat, ping or trace-route

Abstract

The application discloses a method and a device for displaying information, and relates to the technical field of service scheduling. One embodiment of the method comprises: in response to obtaining an access request, acquiring load condition information of each mounted back-end server, wherein the back-end servers are used for processing the access request; and displaying the load condition information so that a user can determine a scheduling algorithm. This embodiment effectively prevents an overloaded back-end server from affecting production services, and helps the user adjust the scheduling algorithm or optimize service deployment in time.

Description

Method and device for displaying information
Technical Field
The present application relates to the field of computer technologies, in particular to the field of service scheduling technologies, and more particularly to a method and an apparatus for displaying information.
Background
At present, load balancing products from various vendors provide a variety of scheduling algorithms, such as weighted round-robin and weighted source IP, but they do not show how service traffic is distributed across the back-end servers after scheduling. Which scheduling algorithm is best suited to a given service scenario therefore depends entirely on how deeply the customer's technicians understand both the service conditions and the load balancer's scheduling mechanism.
For example, when a client accesses a WeChat mini program, the traffic first passes through 3 nginx WEB servers, the nginx proxy replaces the source IP, the service traffic is then forwarded to the load balancing product, and the load balancer distributes it to 2 back-end servers. If the customer's technicians select the "weighted source IP" scheduling algorithm without understanding the load balancer's scheduling mechanism, they may discover only after a month, when the service system develops a problem, that all service accesses have been scheduled to the same back-end server, so that the overloaded back-end server affects production services.
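The failure mode in this example can be illustrated with a short sketch. The code below is illustrative only and is not part of the application: the server names, weights, and IP addresses are hypothetical, and MD5 hashing merely stands in for whatever hash a real load balancer uses. It shows that once an upstream proxy rewrites every source IP to one of a few proxy addresses, a weighted source-IP scheduler sees only those few hash keys, so traffic can concentrate on a single back-end server.

```python
import hashlib

# Hypothetical back-end servers with equal weights.
backends = {"backend-A": 1, "backend-B": 1}

def weighted_source_ip_pick(source_ip: str) -> str:
    """Pick a backend by hashing the source IP into a weight-expanded server list."""
    slots = [name for name, weight in backends.items() for _ in range(weight)]
    digest = int(hashlib.md5(source_ip.encode()).hexdigest(), 16)
    return slots[digest % len(slots)]

# Direct clients: 100 distinct source IPs spread roughly evenly over both backends.
clients = [f"10.0.0.{i}" for i in range(1, 101)]
print({b: sum(weighted_source_ip_pick(ip) == b for ip in clients) for b in backends})

# Behind 3 nginx proxies every request carries one of only 3 proxy addresses,
# so at most 3 distinct hash values decide the scheduling and traffic can
# collapse onto a single back-end server.
proxies = ["192.168.1.1", "192.168.1.2", "192.168.1.3"]
proxied = [proxies[i % 3] for i in range(100)]
print({b: sum(weighted_source_ip_pick(ip) == b for ip in proxied) for b in backends})
```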
Disclosure of Invention
The embodiment of the application provides a method, a device, equipment and a storage medium for displaying information.
According to a first aspect, an embodiment of the present application provides a method for displaying information, the method including: in response to obtaining an access request, acquiring load condition information of each mounted back-end server, wherein the back-end servers are used for processing the access request; and displaying the load condition information for the user to determine a scheduling algorithm.
In some embodiments, the method further comprises: outputting alarm information in response to determining that the load condition information satisfies a preset alarm condition.
In some embodiments, the load condition information comprises at least one of: processing traffic size, queries per second (QPS), and the number of concurrent connections.
In some embodiments, obtaining the load condition information of each mounted backend server includes: counting the distribution condition of the historical access requests in each back-end server to obtain a statistical result; and determining the load condition information of each back-end server based on the statistical result.
In some embodiments, in response to detecting a newly mounted backend server, the queries per second (QPS) of the newly mounted backend server is set to zero.
In some embodiments, in response to detecting a newly mounted backend server, the processing traffic size of the newly mounted backend server is set to zero bytes.
According to a second aspect, an embodiment of the present application provides an apparatus for displaying information, the apparatus including: an obtaining module configured to obtain, in response to obtaining an access request, load condition information of each mounted backend server, where the backend servers are configured to process the access request; and a display module configured to display the load condition information for a user to determine a scheduling algorithm.
In some embodiments, the apparatus further comprises: an alarm module configured to output alarm information in response to determining that the load condition information satisfies a preset alarm condition.
In some embodiments, the load condition information comprises at least one of: processing traffic size, queries per second (QPS), and the number of concurrent connections.
In some embodiments, the acquisition module is further configured to: counting the distribution condition of the historical access requests in each back-end server to obtain a statistical result; and determining the load condition information of each back-end server based on the statistical result.
In some embodiments, the apparatus further comprises: a setting module configured to set the queries per second (QPS) of a newly mounted backend server to zero in response to detecting the newly mounted backend server.
In some embodiments, the apparatus further comprises: a detection module configured to set the processing traffic size of a newly mounted backend server to zero bytes in response to detecting the newly mounted backend server.
According to a third aspect, embodiments of the present application provide an electronic device, which includes one or more processors; a storage device having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to implement a method of displaying information as in any embodiment of the first aspect.
According to a fourth aspect, embodiments of the present application provide a computer-readable medium, on which a computer program is stored, which when executed by a processor, implements a method of displaying information as in any of the embodiments of the first aspect.
By acquiring, in response to obtaining an access request, the load condition information of each mounted back-end server, the back-end servers being used to process the access request, and displaying the load condition information for the user to determine a scheduling algorithm, the method effectively prevents an overloaded back-end server from affecting production services and helps the user adjust the scheduling algorithm or optimize service deployment in time.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method of displaying information according to the present application;
FIG. 3 is a schematic diagram of an application scenario of a method of displaying information according to the present application;
FIG. 4 is a flow diagram of another embodiment of a method of displaying information according to the present application;
FIG. 5 is a schematic diagram of one embodiment of an apparatus to display information, according to the present application;
FIG. 6 is a schematic block diagram of a computer system suitable for use in implementing a server according to embodiments of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding; these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted from the following description for clarity and conciseness.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
FIG. 1 illustrates an exemplary system architecture 100 to which embodiments of the method of displaying information of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The terminal devices 101, 102, 103 interact with a server 105 via a network 104 to receive or send messages or the like. Various communication client applications, such as a display-type application, a communication-type application, and the like, may be installed on the terminal devices 101, 102, and 103.
The terminal devices 101, 102, 103 may be hardware or software. When they are hardware, they may be various electronic devices having a display screen, including but not limited to mobile phones and notebook computers. When they are software, they may be installed in the electronic devices listed above, and may be implemented as a plurality of software programs or software modules (for example, to provide a service for displaying information) or as a single software program or software module, which is not specifically limited here.
The server 105 may be a server that provides various services, for example a server that, in response to obtaining an access request, obtains the load condition information of each mounted backend server, the backend servers being used to process the access request, and displays the load condition information for the user to determine a scheduling algorithm.
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers or as a single server. When the server 105 is software, it may be implemented as a plurality of software programs or software modules (for example, to provide a service for displaying information) or as a single software program or software module, which is not specifically limited here.
It should be noted that the method for displaying information provided by the embodiments of the present disclosure may be executed by the server 105, by the terminal devices 101, 102, and 103, or by the server 105 and the terminal devices 101, 102, and 103 in cooperation with each other. Accordingly, the parts (for example, units, sub-units, modules, and sub-modules) included in the apparatus for displaying information may all be provided in the server 105, may all be provided in the terminal devices 101, 102, and 103, or may be distributed between the server 105 and the terminal devices 101, 102, and 103.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 shows a flow diagram 200 of a method of displaying information that may be applied to the present application. In this embodiment, the method of displaying information includes the steps of:
step 201, in response to obtaining the access request, obtaining load condition information of each mounted backend server.
In this embodiment, the execution body (for example, the server 105 or the terminal devices 101, 102, 103 shown in fig. 1) may monitor access requests from clients in real time and, in response to detecting an access request from a client, obtain the load condition information of each of the at least one currently or historically mounted back-end server.
The load condition information indicates the traffic data processed by the back-end server, such as the traffic volume and the number of requests.
Here, the load condition information may be the load condition information at the current time, or may be the load condition information in the historical statistical period, which is not limited in the present application.
It should be noted that, here, the execution body may obtain the load condition information of each backend server from a local monitoring device.
The monitoring device is used to record the load condition information of the mounted back-end servers.
In some optional ways, the load condition information includes at least one of: processing traffic size, queries per second (QPS), and the number of concurrent connections.
In this implementation, the execution body obtains the load condition information of each mounted backend server, and the load condition information may include at least one of the items described above.
The processing traffic size may indicate the total traffic the back-end server has processed within a historical preset statistical period counted back from the current time, or the total traffic processed in the historical preset statistical period closest to the current time, in bytes.
The queries per second (QPS) may indicate the QPS of the back-end server within a historical preset statistical period counted back from the current time, or the number of requests per second in the historical preset statistical period closest to the current time, where QPS is the total number of requests in the statistical period divided by the length of the period.
The number of concurrent connections may indicate the number of concurrent connections of the back-end server at the current time, or the number of concurrent connections at the end of the historical preset statistical period closest to the current time, that is, the number of all TCP connections in the ESTABLISHED state.
The historical preset statistical period may be set according to experience and actual requirements, for example, 30s, 40s, and the like, which is not limited in the present application.
It should be noted that the load condition information includes the QPS only when the access request is an HTTP/HTTPS request.
Specifically, in response to obtaining an access request, for example an HTTP access request, the execution body may obtain from the local monitoring device the load condition information of the 2 mounted backend servers, for example backend server A and backend server B, where the load condition information includes the processing traffic size, the QPS, and the number of concurrent connections. Here, the processing traffic size indicates the total traffic processed by the back-end server in the statistical period closest to the current time, the QPS indicates the number of requests per second in that period, and the number of concurrent connections indicates the number of concurrent connections at the end of that period; the statistical period may be 30 s. The load condition information of backend server A can be represented as [backend server A, processing traffic size], [backend server A, QPS], [backend server A, number of concurrent connections]; the load condition information of backend server B can be represented as [backend server B, processing traffic size], [backend server B, QPS], [backend server B, number of concurrent connections].
In this implementation, the load condition information of each back-end server includes at least one of processing traffic size, QPS, and number of concurrent connections, and displaying this information lets the user choose a scheduling algorithm, so the user can optimize deployment based on multiple indicators, which effectively improves the effectiveness of deployment optimization.
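To make these indicators concrete, the following sketch, which is not taken from the application, shows one plausible per-backend load record and how the QPS is derived from the request count over a 30 s statistical period. All class, field, and function names are hypothetical.

```python
from dataclasses import dataclass

STAT_PERIOD_S = 30  # hypothetical statistical period, e.g. 30 s

@dataclass
class BackendLoad:
    """Load condition record for one mounted back-end server."""
    name: str
    processed_bytes: int = 0         # processing traffic size within the period
    request_count: int = 0           # HTTP/HTTPS requests within the period
    concurrent_connections: int = 0  # ESTABLISHED TCP connections at period end

    @property
    def qps(self) -> float:
        # QPS = total requests in the statistical period / length of the period
        return self.request_count / STAT_PERIOD_S

def load_condition_info(backends: list) -> list:
    """Flatten the records into [server, metric, value] tuples for display."""
    rows = []
    for b in backends:
        rows.append((b.name, "processing traffic size", b.processed_bytes))
        rows.append((b.name, "QPS", b.qps))
        rows.append((b.name, "concurrent connections", b.concurrent_connections))
    return rows

# Example with two hypothetical back-end servers A and B.
servers = [
    BackendLoad("backend server A", processed_bytes=12_582_912, request_count=900,
                concurrent_connections=35),
    BackendLoad("backend server B", processed_bytes=4_194_304, request_count=300,
                concurrent_connections=12),
]
for row in load_condition_info(servers):
    print(row)
```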
In some optional manners, obtaining load condition information of each mounted backend server includes: counting the distribution condition of the historical access requests in each back-end server to obtain a statistical result; and determining the load condition information of each back-end server based on the statistical result.
In this implementation, after obtaining an access request sent by a client within the historical preset statistical period, the execution body may allocate the access request to one of the at least one currently mounted back-end server according to a preset scheduling algorithm and record the allocation result.
The execution body then counts, for each back-end server, the allocation results of the access requests within the preset statistical period to obtain a statistical result, and determines the load condition information of each back-end server from that result.
Specifically, within the historical preset statistical period closest to the current time, for example 30 s, after obtaining an access request sent by the client (corresponding to X bytes of service traffic), the execution body may allocate the access request to one of the currently mounted back-end servers, for example backend server A out of backend server A and backend server B, according to a preset scheduling algorithm. It then determines whether the access request is an HTTP/HTTPS access; if so, it increases the recorded processing traffic of backend server A by X bytes and increases the recorded number of processed requests of backend server A by 1. If the access request is not an HTTP/HTTPS access, but for example a TCP (Transmission Control Protocol) access, it only increases the recorded processing traffic of backend server A by X bytes. The number of processed requests and the processing traffic size of backend server B remain unchanged. Finally, the execution body may compute the load condition information of backend server A and backend server B within the 30 s period, for example the processing traffic size and the QPS.
In this implementation, the distribution of historical access requests across the back-end servers is counted to obtain a statistical result, the load condition information of each back-end server is determined from that result, and the information is then displayed. Because the load condition information is determined from locally recorded statistics, the limitations of collecting load information from each back-end server are avoided, and the flexibility of obtaining load condition information is improved.
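A minimal, self-contained sketch of this local counting step follows; the helper name, the counters, and the 30 s statistical period are assumptions, and a random choice stands in for the preset scheduling algorithm rather than reproducing it.

```python
import random
from collections import defaultdict

STAT_PERIOD_S = 30  # hypothetical 30 s statistical period

# Per-backend counters kept locally by the execution body.
stats = {name: defaultdict(int) for name in ("backend server A", "backend server B")}

def record_access(traffic_bytes: int, is_http: bool) -> str:
    """Allocate one access request to a back-end server and update the local statistics."""
    target = random.choice(list(stats))  # stand-in for the preset scheduling algorithm
    stats[target]["processed_bytes"] += traffic_bytes  # traffic counted for every access
    if is_http:
        stats[target]["request_count"] += 1            # requests counted only for HTTP/HTTPS
    return target

# Replay hypothetical access requests from the most recent statistical period.
for _ in range(1000):
    record_access(traffic_bytes=2048, is_http=True)
record_access(traffic_bytes=65536, is_http=False)  # a TCP access adds traffic only

for name, counters in stats.items():
    qps = counters["request_count"] / STAT_PERIOD_S
    print(name, counters["processed_bytes"], "bytes,", round(qps, 1), "QPS")
```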
In some optional ways, the method further comprises: in response to detecting the newly mounted backend server, setting a processing traffic size of the newly mounted backend server to zero bytes.
In this implementation, the load condition information includes the processing traffic size. The execution body may detect in real time whether there is a newly mounted backend server and, if a newly mounted backend server is detected, set the processing traffic size corresponding to it to zero bytes.
In addition, if the execution body detects that a backend server has been unmounted, it may delete the processing traffic size and the related records of the unmounted backend server.
Further, before deleting the processing traffic size and the related records of the unmounted backend server, the execution body may store the processing traffic size accumulated up to the current time, so that these records can still be presented to the user when an access request is subsequently obtained.
By setting the processing traffic of a newly mounted backend server to zero bytes in response to detecting it, this implementation helps to improve the accuracy and reliability of the processing traffic statistics.
In some optional ways, the method further comprises: in response to detecting a newly mounted backend server, setting the queries per second (QPS) of the newly mounted backend server to zero.
In this implementation, the load condition information includes the QPS. The execution body may detect in real time whether there is a newly mounted backend server and, if a newly mounted backend server is detected, set the QPS corresponding to it to zero.
Furthermore, if the execution body detects that a backend server has been unmounted, it may delete the QPS and the related records of the unmounted backend server.
Further, before deleting the QPS and the related records of the unmounted backend server, the execution body may store the QPS accumulated up to the current time, so that these records can still be presented to the user when an access request is subsequently obtained.
By setting the QPS of a newly mounted backend server to zero in response to detecting it, this implementation helps to improve the accuracy and reliability of the QPS statistics.
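The two optional implementations above can be read as one reconciliation step over the set of mounted back-end servers. The sketch below is an assumed illustration rather than the application's implementation: a newly mounted server starts with zero processing traffic and zero counted requests (hence zero QPS), and an unmounted server's counters are archived before its live record is removed so that they can still be displayed later.

```python
from typing import Dict

live_stats: Dict[str, Dict[str, int]] = {}      # counters for currently mounted backends
archived_stats: Dict[str, Dict[str, int]] = {}  # last known counters of unmounted backends

def sync_mounted_backends(mounted: set) -> None:
    """Reconcile the local counters with the set of currently mounted back-end servers."""
    # Newly mounted servers start from zero traffic and zero requests (hence zero QPS).
    for name in mounted - live_stats.keys():
        live_stats[name] = {"processed_bytes": 0, "request_count": 0}
    # Unmounted servers: archive the counters accumulated so far, then drop the live record,
    # so the stored values can still be shown to the user on a later access request.
    for name in live_stats.keys() - mounted:
        archived_stats[name] = live_stats.pop(name)

# Hypothetical sequence of mount/unmount events.
sync_mounted_backends({"backend server A", "backend server B"})
live_stats["backend server A"]["processed_bytes"] += 4096
sync_mounted_backends({"backend server B", "backend server C"})  # A unmounted, C newly mounted
print(live_stats)       # backend server C starts at zero
print(archived_stats)   # backend server A's counters kept for later display
```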
Step 202, displaying the load condition information for the user to determine the scheduling algorithm.
In this embodiment, after obtaining the load condition information of each back-end server, the execution body may present it using any existing or future information presentation technique, for example an API (Application Programming Interface) call or a visual display on a console.
The scheduling algorithm is the forwarding rule that the load balancing device follows when forwarding received service traffic to the back-end server cluster.
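For concreteness, the sketch below shows one common forwarding rule of this kind, a weighted round-robin scheduler; it is a generic illustration of what a scheduling algorithm does here, not an algorithm disclosed in the application, and the weights are hypothetical.

```python
import itertools

def weighted_round_robin(weights: dict):
    """Yield back-end server names in proportion to their weights, cycling forever."""
    expanded = [name for name, w in weights.items() for _ in range(w)]
    return itertools.cycle(expanded)

scheduler = weighted_round_robin({"backend server A": 2, "backend server B": 1})
for _ in range(6):
    print(next(scheduler))  # A, A, B, A, A, B
```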
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method of displaying information according to the present embodiment.
In the application scenario of fig. 3, an execution body 301 is mounted with a back-end server cluster that includes a backend server E302, a backend server F303, and a backend server G304. The back-end servers are used to process access requests. In response to obtaining an access request sent by a client, the execution body locally obtains the load condition information of each mounted back-end server. Here, the load condition information may be the load condition information of the historical preset statistical period closest to the current time for each back-end server, for example [backend server E, load condition information 1]305, [backend server F, load condition information 2]306, and [backend server G, load condition information 3]307. After obtaining the load condition information of each back-end server, the execution body may display 308 it, so that the user can determine a scheduling algorithm in combination with their own service scenario.
According to the method for displaying information provided by this embodiment, the load condition information of each mounted back-end server is obtained in response to obtaining an access request, the back-end servers being used to process the access request, and the load condition information is displayed for the user to determine a scheduling algorithm. This effectively prevents an overloaded back-end server from affecting production services and helps the user adjust the scheduling algorithm or optimize service deployment in time.
With further reference to fig. 4, a flow 400 of yet another embodiment of a method of displaying information is shown. In this embodiment, the process 400 of the method for displaying information of this embodiment may include the following steps:
step 401, in response to obtaining the access request, obtaining load condition information of each currently mounted backend server.
In this embodiment, details of implementation and technical effects of step 401 may refer to the description of step 201, and are not described herein again.
Step 402, displaying the load condition information for the user to determine the scheduling algorithm.
In this embodiment, reference may be made to the description of step 202 for details of implementation and technical effects of step 402, which are not described herein again.
Step 403, in response to determining that the load condition information satisfies a preset alarm condition, outputting alarm information.
In this embodiment, after obtaining the load condition information, the execution body may determine whether the load condition information matches a preset alarm condition and, if the load condition information satisfies the preset alarm condition, output alarm information to prompt the user to adjust the scheduling algorithm in time and optimize deployment.
The alarm condition may be that one or more items of the load condition information exceed a preset load threshold range, or that one or more items of the load condition information are greater than or equal to a preset load threshold, which is not limited in this application.
The preset load threshold range and the preset load threshold may be determined according to experience and specific application scenarios, which are not limited in the present application.
Specifically, the load threshold may be set by the user according to a specific service scenario.
Here, the alarm information may be output using any existing or future information presentation technique, for example an API (Application Programming Interface) call or a visual display on a console, which is not limited in the present application.
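A minimal sketch of such an alarm check follows, assuming per-indicator thresholds configured by the user for their service scenario; the threshold values and field names are hypothetical, and the print call is a placeholder for an API call or a console display.

```python
# Hypothetical per-indicator thresholds set by the user.
ALARM_THRESHOLDS = {
    "processed_bytes": 100 * 1024 * 1024,  # processing traffic size, in bytes
    "qps": 500,
    "concurrent_connections": 1000,
}

def check_alarms(server: str, load_info: dict) -> list:
    """Return an alarm message for every indicator that meets the preset alarm condition."""
    alarms = []
    for metric, threshold in ALARM_THRESHOLDS.items():
        value = load_info.get(metric)
        if value is not None and value >= threshold:
            alarms.append(f"ALARM: {server} {metric}={value} >= {threshold}")
    return alarms

load_info = {"processed_bytes": 150 * 1024 * 1024, "qps": 120, "concurrent_connections": 1400}
for message in check_alarms("backend server A", load_info):
    print(message)  # placeholder for outputting the alarm information to the user
```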
Compared with the embodiment corresponding to fig. 2, the flow 400 of the method for displaying information in this embodiment additionally outputs alarm information in response to determining that the load condition information is greater than or equal to the corresponding load threshold, which helps the user learn in time that a back-end server is overloaded, so that service deployment can be optimized promptly and optimization efficiency is improved.
With further reference to fig. 5, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of an apparatus for displaying information, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 5, the apparatus 500 for displaying information of the present embodiment includes: an acquisition module 501 and a presentation module 502.
The obtaining module 501 may be configured to obtain load condition information of each mounted backend server in response to obtaining the access request.
The presentation module 502 may be configured to present the load condition information for the user to determine the scheduling algorithm.
In some optional manners of this embodiment, the apparatus further includes: an alarm module configured to output alarm information in response to determining that the load condition information satisfies a preset alarm condition.
In some optional aspects of this embodiment, the obtaining module is further configured to: counting the distribution condition of the historical access requests in each back-end server to obtain a statistical result; and determining the load condition information of each back-end server based on the statistical result.
In some optional aspects of this embodiment, the load condition information includes at least one of: processing traffic size, queries per second (QPS), and the number of concurrent connections.
In some optional manners of this embodiment, the apparatus further includes: a setting module configured to set the queries per second (QPS) of a newly mounted backend server to zero in response to detecting the newly mounted backend server.
In some optional manners of this embodiment, the apparatus further includes: a detection module configured to set a processing traffic size of a newly mounted backend server to zero bytes in response to detecting the newly mounted backend server.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
As shown in fig. 6, there is a block diagram of an electronic device 600 for the method of displaying information according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant as examples only, and are not meant to limit implementations of the present application described and/or claimed herein.
As shown in fig. 6, the electronic device includes: one or more processors 601, memory 602, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 6, one processor 601 is taken as an example.
The memory 602 is a non-transitory computer readable storage medium as provided herein. Wherein the memory stores instructions executable by at least one processor to cause the at least one processor to perform the method of displaying information provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the method of displaying information provided herein.
The memory 602, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the method for displaying information in the embodiments of the present application (for example, the obtaining module 501 and the displaying module 502 shown in fig. 5). The processor 601 executes various functional applications of the server and displays information by running non-transitory software programs, instructions and modules stored in the memory 602, that is, implements the method of displaying information in the above-described method embodiments.
The memory 602 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by use of the electronic device displaying information, and the like. Further, the memory 602 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 602 optionally includes memory located remotely from the processor 601, which may be connected via a network to an electronic device that displays information. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the method of displaying information may further include: an input device 603 and an output device 604. The processor 601, the memory 602, the input device 603 and the output device 604 may be connected by a bus or other means, and fig. 6 illustrates the connection by a bus as an example.
The input device 603 may receive input numeric or character information, such as an input device like a touch screen, keypad, mouse, track pad, touch pad, pointer, one or more mouse buttons, track ball, joystick, etc. The output devices 604 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solution of the embodiments of the present application, the impact of an overloaded back-end server on production services is effectively avoided, and the user can adjust the scheduling algorithm or optimize service deployment in time.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present invention is not limited thereto as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (14)

1. A method of displaying information, the method comprising:
in response to obtaining an access request, acquiring load condition information of each mounted back-end server, wherein the back-end servers are used for processing the access request;
and displaying the load condition information for a user to determine a scheduling algorithm.
2. The method of claim 1, further comprising:
outputting alarm information in response to determining that the load condition information satisfies a preset alarm condition.
3. The method according to claim 1, wherein the obtaining of the load condition information of the mounted backend servers includes:
counting the distribution condition of the historical access requests in each back-end server to obtain a statistical result;
and determining the load condition information of each back-end server based on the statistical result.
4. The method according to any one of claims 1-3, wherein the load condition information comprises at least one of: processing traffic size, queries per second (QPS), and the number of concurrent connections.
5. The method of claim 4, the load condition information comprising a query rate per second (QPS), and the method further comprising:
in response to detecting a newly mounted backend server, setting a query per second rate QPS of the newly mounted backend server to zero.
6. The method of claim 4, the load condition information comprising processing traffic size, and the method further comprising:
in response to detecting a newly mounted backend server, setting a processing traffic size of the newly mounted backend server to zero bytes.
7. An apparatus for displaying information, the apparatus comprising:
an obtaining module configured to obtain load condition information of each mounted backend server in response to obtaining an access request, the backend server being used for the processing of the access request;
a presentation module configured to present the load condition information for a user to determine a scheduling algorithm.
8. The apparatus of claim 7, further comprising:
an alarm module configured to output alarm information in response to determining that the load condition information satisfies a preset alarm condition.
9. The apparatus of claim 7, wherein the acquisition module is further configured to:
counting the distribution condition of the historical access requests in each back-end server to obtain a statistical result;
and determining the load condition information of each back-end server based on the statistical result.
10. The apparatus of any one of claims 7-9, wherein the load condition information comprises at least one of: processing traffic size, queries per second (QPS), and the number of concurrent connections.
11. The apparatus of claim 10, the load condition information comprising a query rate per second (QPS), and the apparatus further comprising:
a setting module configured to set a query per second rate QPS of a newly mounted backend server to zero in response to detecting the newly mounted backend server.
12. The apparatus of claim 10, the load condition information comprising a processing traffic size, and the apparatus further comprising:
a detection module configured to set a processing traffic size of a newly mounted backend server to zero bytes in response to detecting the newly mounted backend server.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-6.
CN202111071707.8A 2021-09-14 2021-09-14 Method and device for displaying information Pending CN113746933A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111071707.8A CN113746933A (en) 2021-09-14 2021-09-14 Method and device for displaying information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111071707.8A CN113746933A (en) 2021-09-14 2021-09-14 Method and device for displaying information

Publications (1)

Publication Number Publication Date
CN113746933A true CN113746933A (en) 2021-12-03

Family

ID=78738448

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111071707.8A Pending CN113746933A (en) 2021-09-14 2021-09-14 Method and device for displaying information

Country Status (1)

Country Link
CN (1) CN113746933A (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107086966A (en) * 2016-02-16 2017-08-22 阿里巴巴集团控股有限公司 A kind of load balancing of network, control and network interaction method and device
WO2017140216A1 (en) * 2016-02-16 2017-08-24 阿里巴巴集团控股有限公司 Method and device for network load balancing, control, and network interaction
US20180159920A1 (en) * 2016-12-07 2018-06-07 Alibaba Group Holding Limited Server load balancing method, apparatus, and server device
CN107124472A (en) * 2017-06-26 2017-09-01 杭州迪普科技股份有限公司 Load-balancing method and device, computer-readable recording medium
CN110233860A (en) * 2018-03-05 2019-09-13 杭州萤石软件有限公司 A kind of load-balancing method, device and system
CN109951566A (en) * 2019-04-02 2019-06-28 深圳市中博科创信息技术有限公司 A kind of Nginx load-balancing method, device, equipment and readable storage medium storing program for executing
CN110650209A (en) * 2019-10-09 2020-01-03 北京百度网讯科技有限公司 Method and device for realizing load balance
CN110677496A (en) * 2019-10-18 2020-01-10 北京天融信网络安全技术有限公司 Middleware service scheduling method and device and readable storage medium
WO2021083284A1 (en) * 2019-10-31 2021-05-06 贵州白山云科技股份有限公司 Load balancing method and apparatus, medium and device
WO2021083281A1 (en) * 2019-10-31 2021-05-06 贵州白山云科技股份有限公司 Load balancing method and apparatus, and medium and device
CN111770176A (en) * 2020-06-29 2020-10-13 北京百度网讯科技有限公司 Traffic scheduling method and device
CN112003945A (en) * 2020-08-26 2020-11-27 杭州迪普科技股份有限公司 Service request response method and device
CN112532743A (en) * 2020-12-18 2021-03-19 上海安畅网络科技股份有限公司 Intelligent load balancing method and device and storage medium
CN112738220A (en) * 2020-12-28 2021-04-30 杭州迪普科技股份有限公司 Management method, load balancing method and load balancing device of server cluster
CN112929408A (en) * 2021-01-19 2021-06-08 郑州阿帕斯数云信息科技有限公司 Dynamic load balancing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination