CN113163002B - Server switching method and device and storage medium - Google Patents
- Publication number
- CN113163002B (granted publication of application CN202110381436.XA / CN202110381436A)
- Authority
- CN
- China
- Prior art keywords
- proxy server
- client
- predicted
- state information
- network state
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
- H04L67/101—Server selection for load balancing based on network conditions
- H04L67/14—Session management
- H04L67/148—Migration or transfer of sessions
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
Abstract
The application provides a server switching method, a server switching device, and a storage medium, relates to the field of computer technologies, in particular to cloud technology, and is used for ensuring that a target application runs smoothly. While the target application is running, the client obtains the current first network state information of the first proxy server; the client determines network reference information and obtains a fluctuation value between the first network state information and the network reference information; if the fluctuation value exceeds a preset range, the client selects a second proxy server from the candidate proxy servers based on the current second network state information of each candidate proxy server; and the client switches the data transmission link corresponding to the target application from the first proxy server to the second proxy server. The client thus measures network state information during operation, and after determining from that information that the first proxy server needs to be replaced, reselects and accesses a second proxy server so that the target application keeps running smoothly.
Description
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a server switching method, device, and storage medium.
Background
While various applications run, their clients must acquire the required data from the corresponding application servers. To safeguard the application experience, the network state is measured before the data is acquired.
At present, network state measurement is mainly performed before an application formally runs; but network state information fluctuates, so measurements taken beforehand are no longer a reliable reference once the application is running. Moreover, current methods mostly measure point to point, that is, they consider only the overall network state between the client and the application server and cannot accurately measure the network state between the individual nodes of the data transmission link.
In summary, the network state information available during the running of the application is inaccurate; it is merely recorded as a statistic, and no node in the data transmission link is adjusted in response, so smooth use of the application cannot be guaranteed.
Disclosure of Invention
The application provides a server switching method, a server switching device and a storage medium, which are used for guaranteeing the smoothness of an application program in the using process.
In a first aspect, an embodiment of the present application provides a server switching method, where the method includes:
in the running process of the target application, the client acquires the current first network state information of a first proxy server, and the first proxy server provides relay service between the client and an application server corresponding to the target application;
the client determines the network reference information and obtains a fluctuation value between the first network state information and the network reference information;
if the fluctuation value exceeds the preset range, the client selects a second proxy server from each candidate proxy server based on the current second network state information of each candidate proxy server;
the client switches a data transmission link corresponding to the target application from the first proxy server to the second proxy server, and the data transmission link is a link established between the client and the application server for the target application.
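For illustration only, the decision logic of the four steps above can be sketched as follows. This is a minimal sketch, not part of the claimed embodiments: the helper names, the scalar state values, and the "higher score is better" selection rule are assumptions made for clarity.

```python
# Minimal sketch of the switching decision described above. The helper
# names, scalar state values, and selection rule are illustrative
# assumptions, not taken from the patent.

def should_switch(first_state: float, reference: float, allowed_range: float) -> bool:
    """True when the fluctuation between the measured first network state
    and the network reference information exceeds the preset range."""
    fluctuation = abs(first_state - reference)
    return fluctuation > allowed_range

def pick_second_proxy(candidate_states: dict[str, float]) -> str:
    """Select the candidate proxy server with the best current second
    network state score."""
    return max(candidate_states, key=candidate_states.get)
```

For example, with a reference of 10.0 and a preset range of 1.5, a measured state of 12.0 triggers a switch, after which the best-scoring candidate is chosen.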
In a second aspect, the present application provides a server switching apparatus, comprising:
the first obtaining unit is used for obtaining the current first network state information of a first proxy server in the running process of the target application, and the first proxy server provides relay service between the client and an application server corresponding to the target application;
the second obtaining unit is used for determining the network reference information and obtaining a fluctuation value between the first network state information and the network reference information;
the selecting unit is used for selecting a second proxy server from the candidate proxy servers respectively based on the current second network state information of the candidate proxy servers if the fluctuation value exceeds the preset range;
and the switching unit is used for switching the data transmission link corresponding to the target application from the first proxy server to the second proxy server, and the data transmission link is a link established between the client and the application server aiming at the target application.
In a possible implementation manner, before the first obtaining unit obtains the current first network state information of the first proxy server, the first obtaining unit is further configured to:
respectively pulling file data from each proxy server, and determining the predicted bandwidth quality corresponding to each proxy server based on the file data amount respectively pulled from each proxy server within a set time; and
respectively determining the predicted transmission delay corresponding to each proxy server based on the time stamps carried in the data packets sent by each proxy server, where the time stamps are used to characterize the data transmission time between adjacent nodes connected in the data transmission link;
and screening out the first proxy server from each proxy server based on the predicted bandwidth quality and the predicted transmission delay corresponding to each proxy server.
In a possible implementation manner, the first obtaining unit is specifically configured to, when screening out the first proxy server from each proxy server based on the predicted bandwidth quality and the predicted transmission delay respectively corresponding to each proxy server:
weighting the predicted bandwidth quality and the predicted transmission delay corresponding to each proxy server, and respectively determining the predicted network state information corresponding to each proxy server;
and screening out the proxy servers with the predicted network state information meeting the preset conditions as first proxy servers based on the predicted network state information.
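The weighting described above can be sketched as follows. The weights 0.6/0.4 and the sign convention (bandwidth quality counts positively, transmission delay negatively) are assumptions for illustration; the text only states that the two quantities are weighted.

```python
# Illustrative weighted screening of the first proxy server. The weights
# and the linear scoring form are assumptions, not the patent's formula.

def predicted_state_score(bandwidth_quality: float, transmission_delay: float,
                          w_bw: float = 0.6, w_delay: float = 0.4) -> float:
    """Weighted combination: higher bandwidth quality is better, higher
    delay is worse, so the delay term enters negatively."""
    return w_bw * bandwidth_quality - w_delay * transmission_delay

def screen_first_proxy(proxies: dict[str, tuple[float, float]]) -> str:
    """proxies maps each server name to (predicted bandwidth quality,
    predicted transmission delay); the highest-scoring server is chosen."""
    return max(proxies, key=lambda name: predicted_state_score(*proxies[name]))
```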
In one possible implementation, the predicted bandwidth quality includes a predicted bandwidth variance, and the predicted transmission delay includes one or a combination of a predicted average transmission delay and a predicted transmission delay variance.
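As a plain illustration of the variance-based quantities just named, the mean and population variance of a set of delay samples can be computed as follows (a sketch; the sample source and units are assumptions):

```python
# Population variance and mean over measured delay samples.

def variance(samples: list[float]) -> float:
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)

def delay_metrics(delays: list[float]) -> tuple[float, float]:
    """Return (predicted average transmission delay, predicted transmission
    delay variance) from a list of per-packet delay samples."""
    return sum(delays) / len(delays), variance(delays)
```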
In a possible implementation manner, if the network reference information includes reference bandwidth quality, the determining, by the second obtaining unit, the network reference information specifically includes:
and determining the reference bandwidth quality in the network reference information based on the obtained predicted bandwidth quality, the current actual frame rate and the expected frame rate.
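One plausible reading of this step, offered purely as an assumption since the text leaves the exact formula open, is to scale the predicted bandwidth quality by the ratio of the current actual frame rate to the expected frame rate:

```python
# Hypothetical combination of the three quantities named above: a client
# achieving only half the expected frame rate halves its bandwidth
# reference. This formula is an assumption, not stated in the patent.

def reference_bandwidth(predicted_bw: float, actual_fps: float, expected_fps: float) -> float:
    return predicted_bw * (actual_fps / expected_fps)
```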
In a possible implementation manner, the first obtaining unit is specifically configured to:
in a set time period, obtaining current first network state information based on application data sent by a first proxy server, wherein the first network state information comprises one or a combination of actual bandwidth quality and actual transmission delay;
the actual bandwidth quality is determined by counting according to the data quantity in the application data within a set time period;
the actual transmission delay is determined based on the time stamp in the application data.
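These two measurements reduce to simple arithmetic, sketched below (timestamp units and parameter names are illustrative assumptions):

```python
# Bandwidth as bytes counted over the set time period; delay as receive
# time minus the timestamp carried in the application data.

def actual_bandwidth(bytes_received: int, window_seconds: float) -> float:
    return bytes_received / window_seconds

def actual_delay(sent_timestamp: float, received_timestamp: float) -> float:
    return received_timestamp - sent_timestamp
```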
In a possible implementation manner, after the second obtaining unit determines the network reference information and obtains the fluctuation value between the first network state information and the network reference information, the second obtaining unit is further configured to:
and if the fluctuation value is within the preset range, adjusting target parameter information for processing the application data, wherein the target parameter information comprises one or a combination of a cache queue and a decoding parameter.
In a third aspect, an embodiment of the present application provides a server switching device, including: a memory and a processor, wherein the memory is configured to store computer instructions; and the processor is used for executing the computer instructions to realize the server switching method provided by the embodiment of the application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where computer instructions are stored, and when the computer instructions are executed by a processor, the server switching method provided in the embodiment of the present application is implemented.
The beneficial effects of this application are as follows:
in the embodiment of the application, the client obtains the first network state information of the currently accessed first proxy server while the target application is running, the first proxy server providing a relay service between the client and the application server corresponding to the target application; in other words, during operation the network state is measured in real time for the currently accessed first proxy server. The client determines network reference information and obtains a fluctuation value between the first network state information and the network reference information. If the fluctuation value exceeds the preset range, the client selects a second proxy server from the candidate proxy servers based on their current second network state information, and switches the data transmission link corresponding to the target application from the first proxy server to the second proxy server, the data transmission link being the link established between the client and the application server for the target application. By dynamically measuring the first network state information during operation, determining from the fluctuation value that the current network has degraded, selecting a second proxy server from the candidates, and switching from the first proxy server to the second proxy server, the target application is kept running smoothly.
Additional features and advantages of the present application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the present application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic diagram of measuring network state information before a target application runs;
FIG. 2 is a schematic diagram of loading data after measuring network status information;
FIG. 3 is a diagram of an application scenario;
fig. 4 is a flowchart of a server switching method provided in an embodiment of the present application;
fig. 5 is a schematic diagram of measuring predicted bandwidth quality according to an embodiment of the present application;
fig. 6 is a schematic diagram for determining a predicted bandwidth quality according to an embodiment of the present application;
fig. 7 is a schematic diagram of measuring a predicted transmission delay according to an embodiment of the present application;
fig. 8 is a schematic diagram of determining a predicted transmission delay according to an embodiment of the present application;
fig. 9 is a schematic diagram illustrating that a client obtains data from an application server during a target application running process according to an embodiment of the present application;
FIG. 10 is a diagram of a display interface of a target application when a fluctuation value exceeds a predetermined range according to an embodiment of the present disclosure;
FIG. 11 is a diagram of a display interface of a target application after switching to a second proxy server according to an embodiment of the present application;
fig. 12 is a flowchart of an overall method for server switching according to an embodiment of the present disclosure;
fig. 13 is a structural diagram of a server switching device according to an embodiment of the present application;
fig. 14 is a schematic diagram of a server switching system according to an embodiment of the present application;
fig. 15 is a block diagram of a computing device according to an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solution and advantages of the present application more clearly and clearly understood, the technical solution in the embodiments of the present application will be described below in detail and completely with reference to the accompanying drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and in the claims of the present application and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be implemented in sequences other than those illustrated or described herein.
Some terms in the embodiments of the present application are explained below to facilitate understanding by those skilled in the art.
1. Cloud technology refers to a hosting technology that unifies a series of resources such as hardware, software, and network within a wide area network or a local area network to realize computation, storage, processing, and sharing of data.
Cloud technology is the general name for the network, information, integration, platform-management, and application technologies applied in the cloud-computing business model. It can form a resource pool that is used on demand, flexibly and conveniently, with cloud computing as an important support. The background services of technical network systems, such as video websites, picture websites, and other web portals, require large amounts of computing and storage resources. As the internet industry develops, each article may carry its own identification mark that must be transmitted to a background system for logical processing; data at different levels are processed separately, and all kinds of industry data need strong system-background support, which can only be provided through cloud computing.
2. A proxy server is a relay station for network information. Normally, when a client links directly to an application server to obtain network information, it sends a request signal and the other party transmits the information back. A proxy server is a server interposed between the client and the application server: with a proxy server, the client no longer fetches data directly from the application server but sends its request to the proxy server, which retrieves the data the client requires and transmits it to the corresponding client. Moreover, most proxy servers have a buffering function: like a large cache, they continuously store newly acquired data packets in local memory. If the data a client requests already exists in local memory and is up to date, the proxy server does not fetch it again from the application server but transmits the in-memory copy directly to the corresponding client, which markedly improves the speed and efficiency of data acquisition.
The proxy server can not only realize the functions of improving the data acquisition speed and efficiency, but also realize the functions of network security filtering, flow control (reducing the Internet use cost), user management and the like.
3. The average bitrate of digital music or video can simply be taken as the file size divided by the playing time.
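A worked example of this definition (the numbers are illustrative): a 240-second clip stored in a file of 38,400,000 bits has an average bitrate of 160 kbit/s.

```python
# Average bitrate as defined above: file size divided by playing time.

def average_bitrate(file_size_bits: int, playing_time_seconds: float) -> float:
    return file_size_bits / playing_time_seconds
```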
4. The frame rate is the frequency (rate) at which bitmap images, called frames, appear successively on the display. When the target application is a game, a video application, or the like, the frames on the display are continuously updated while the target application runs, so the frame rate can be determined from the number of frames displayed within a certain time. The frame rate is also called the frame frequency and is expressed in hertz (Hz).
The following briefly introduces the design concept of the embodiments of the present application.
In the method and the device, in the operation process of the client, the current first network state information is measured aiming at the first proxy server currently connected with the client, and whether the first proxy server connected in the data transmission link is switched or not is determined based on the first network state information so as to ensure the smoothness of the operation of the target application.
In the related art, the network state for the first proxy server to which a client is currently connected is generally measured before the client runs, and that pre-run measurement is then reused as the network state during operation. The measurement is made point to point, covering only the overall network state between the application server and the client.
That is, clicking a client icon on a terminal device triggers the start instruction of the target application corresponding to that client. After receiving the start instruction of the target application, the client measures the network state of each accessible server; see fig. 1, which provides an exemplary schematic diagram of measuring network state information. Fig. 1 takes a game as the example target application and measures network state information to safeguard the game experience.
After the network state information is measured, an accessible server is selected based on the measured network state information, and data is loaded from the accessed server. Referring to fig. 2, fig. 2 exemplarily provides a schematic diagram of loading data after network status information measurement. The bandwidth quality and the transmission delay when loading data are shown in fig. 2, and the bandwidth quality and the transmission delay at this time use the network state information measured in the measurement process of fig. 1 as reference values. And in the related art, in the running process of the target application, the network state information measured in the measurement process of fig. 1 is also used as a reference value.
Therefore, in the related art, the measured network state information has the following problems in the running process of the target application:
1. the timeliness problem: because network state information fluctuates, measurements taken before the application runs are no longer a reliable reference once it is running;
2. the accuracy problem: the point-to-point method measures only the network state between the client and the application server, yet nodes such as the home gateway and the proxy server are also connected in the data transmission link between them, and the network state between these nodes influences the state of the whole link; a point-to-point method therefore cannot accurately measure the network state between the nodes of the data transmission link.
In addition to the above problems, in the related art, the measured network status information only stays at the statistical level, so that the node in the data transmission link cannot be adjusted when the measured network status information does not meet the condition.
In summary, in the related art, the network state information measured before the target application runs is inaccurate and has no reference value during operation, so the network state cannot be measured accurately while the client runs and the proxy server cannot be switched according to it, which ultimately leaves the target application running poorly.
Based on the above problems, embodiments of the present application provide a server switching method, apparatus, and storage medium; in the embodiment of the application, in the running process of the target application, a client measures the current first network state information of a connected first proxy server; obtaining a fluctuation value between the first network state information and the determined network reference information; when the fluctuation value is determined to exceed the preset range, the client selects a second proxy server from each candidate proxy server based on the current second network state information of each candidate proxy server; the client switches a data transmission link corresponding to the target application from the first proxy server to the second proxy server, and the target data transmission link is a link established between the client and the application server for the target application.
In the embodiment of the application, in order to ensure that the client side can quickly acquire application data from the proxy server and improve game experience, the client side should acquire the predicted network state information corresponding to each accessible proxy server, and screen and access the first proxy server from each accessible proxy server based on the predicted network state information.
In one possible implementation, the client obtains the predicted network state information of each accessible proxy server and determines the first proxy server by:
the client side respectively pulls the file data from each proxy server, and determines the predicted bandwidth quality corresponding to each proxy server based on the file data amount respectively pulled from each proxy server within the set time; and
the client determines the predicted transmission delay corresponding to each proxy server based on the time stamps carried in the data packets sent by each proxy server, where the time stamps are used to characterize the data transmission time between adjacent nodes connected in the data transmission link;
and the client screens out the first proxy server from each proxy server based on the predicted bandwidth quality and the predicted transmission delay corresponding to each proxy server.
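The two pre-run measurements above can be sketched as follows. Integer-millisecond timestamps and the parameter names are illustrative assumptions.

```python
# Predicted bandwidth from the amount of file data pulled within the set
# time, and per-hop transit times from the timestamps recorded at the
# adjacent nodes of the data transmission link.

def predicted_bandwidth(pulled_bytes: int, set_time_seconds: float) -> float:
    return pulled_bytes / set_time_seconds

def per_hop_delays(node_timestamps_ms: list[int]) -> list[int]:
    """Successive differences between the timestamps stamped at each
    adjacent node give the transit time of every hop."""
    return [later - earlier
            for earlier, later in zip(node_timestamps_ms, node_timestamps_ms[1:])]
```

Per-hop differencing is what distinguishes this scheme from the point-to-point measurement criticized in the background section: each hop of the link is observable separately.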
In the embodiment of the application, the client first obtains, for each accessible proxy server, its corresponding predicted network state information and screens out the first proxy server on that basis. While the target application corresponding to the client runs, the client continuously acquires the required data from the first proxy server and measures its network state information in real time to keep the target application running smoothly. When the client detects that the fluctuation value between the first proxy server's network state information and the network reference information falls outside the preset range, it determines that the first proxy server's network state has degraded; to preserve smooth operation without affecting the application experience, it reselects and accesses a second proxy server based on the second network state information of the candidate proxy servers. The fluency of the target application is thus guaranteed, and the user's use of the client is not affected.
After introducing the design concept of the embodiment of the present application, some simple descriptions are provided below for application scenarios to which the technical solution of the embodiment of the present application can be applied, and it should be noted that the application scenarios described below are only used for describing the embodiment of the present application and are not limited. In a specific implementation process, the technical scheme provided by the embodiment of the application can be flexibly applied according to actual needs.
Referring to fig. 3, fig. 3 exemplarily provides an application scenario diagram of the embodiment of the present application, where the application scenario includes a terminal device 30 (such as may include, but is not limited to, 30-1 or 30-2 illustrated in the figure), a proxy server 31, and an application server 32;
among them, various clients are installed and operated in the terminal device 30. The terminal device 30 may be a personal computer, a mobile phone, a tablet computer, a notebook, a vehicle-mounted terminal, or other computer device;
the proxy server 31 and the application server 32 may be independent physical servers, may also be server clusters or distributed systems formed by a plurality of physical servers, and may also be cloud servers that provide basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, Network services, cloud communications, middleware services, domain name services, security services, Content Delivery Networks (CDNs), and big data and artificial intelligence platforms.
In one possible embodiment, the terminal device 30 and the proxy server 31 may communicate with each other through a communication network, which is a wired or wireless network. The terminal device 30 and the proxy server 31 may be connected directly or indirectly by wired or wireless communication. For example, the terminal device 30 may be connected indirectly to the proxy server 31 through the wireless access point 33, or directly through the internet, which is not limited herein. Similarly, the proxy server 31 and the application server 32 may communicate through a communication network, which is likewise a wired or wireless network and is not described further here.
In a possible application scenario, the embodiment of the present application may be applied to Cloud gaming, which may also be called gaming on demand, an online gaming technology based on cloud computing. Cloud game technology enables light-end devices (thin clients) with relatively limited graphics processing and data computing capabilities to run high-quality games. In a cloud game scene, the game runs not on the player's game terminal but on a cloud server; the cloud server renders the game scene into video and audio streams, which are transmitted to the player's game terminal over the network. The player's game terminal does not need strong graphics and data processing capabilities; it only needs basic streaming media playback capability and the ability to acquire player input instructions and send them to the cloud server.
In a possible application scenario, the proxy servers 31 may be deployed in each area to reduce communication delay, or different proxy servers 31 may serve the areas corresponding to different terminal devices 30 to balance the load. For example, the terminal device 30-1 is located at site a and is communicatively connected to one proxy server 31, while the terminal device 30-2 is located at site b and is communicatively connected to another proxy server 31. The plurality of proxy servers 31 may also share data through a blockchain, with the proxy servers 31 constituting a data sharing system; in this case, the terminal device 30-2 located at site b can acquire data from the proxy server 31 serving site a. Therefore, in the embodiment of the present application, before the target application runs, the client measures the network state information of each proxy server 31 in the data sharing system implemented by the blockchain. Similarly, the plurality of application servers 32 may implement data sharing through the blockchain and form a data sharing system, which is not repeated here.
Based on the above application scenarios, the server switching method provided by the exemplary embodiment of the present application is described below with reference to the above application scenarios and according to the accompanying drawings, it should be noted that the above application scenarios are only illustrated for the convenience of understanding the spirit and principle of the present application, and the embodiments of the present application are not limited in this respect.
Referring to fig. 4, fig. 4 exemplarily provides a flowchart of a server switching method in an embodiment of the present application, including the following steps:
step S400, in the running process of the target application, the client obtains the current first network state information of the first proxy server, and the first proxy server provides the relay service between the client and the application server corresponding to the target application.
In the embodiment of the application, before the running process of the target application, the client needs to select and access a first proxy server from a plurality of accessible proxy servers. That is, before the client obtains the current first network state information of the accessed first proxy server, the client needs to select and access the first proxy server from a plurality of accessible proxy servers.
In one possible implementation, the client screens out the first proxy server from the plurality of accessible proxy servers based on predicted network state information of each accessible proxy server, wherein the predicted network state information includes one or a combination of predicted bandwidth quality and predicted transmission delay.
When the predicted network state information includes only the predicted bandwidth quality, the predicted bandwidth quality is used as the predicted network state information.
In this case, the client pulls the file data from each proxy server, and determines the predicted bandwidth quality corresponding to each proxy server based on the amount of the file data pulled from each proxy server within a set time.
Referring to fig. 5, fig. 5 is a schematic diagram illustrating an example of measuring the predicted bandwidth quality in an embodiment of the present application. As shown in fig. 5, the client starts N threads to pull file data from each accessible proxy server, where N = 2 × the number of CPU cores. That is, the client determines a bandwidth value from the amount of file data pulled from the proxy server and the pull time, and measures the predicted bandwidth quality of each accessible proxy server based on the determined bandwidth values.
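The multi-threaded pull measurement above can be sketched roughly as follows. This is a simplification, not the patent's implementation: `pull_chunk` is a hypothetical callable standing in for one pull of file data from a proxy server, and the aggregation strategy is an assumption.

```python
import os
import time
from concurrent.futures import ThreadPoolExecutor

def measure_bandwidth(pull_chunk, duration_s=1.0, n_threads=None):
    """Pull file data concurrently for `duration_s` seconds and return
    the aggregate bandwidth in bytes per second."""
    if n_threads is None:
        # N = 2 x the number of CPU cores, as in the measurement scheme above
        n_threads = 2 * (os.cpu_count() or 1)
    totals = [0] * n_threads

    def worker(i):
        deadline = time.monotonic() + duration_s
        while time.monotonic() < deadline:
            totals[i] += len(pull_chunk())  # bytes received by this pull

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=n_threads) as ex:
        list(ex.map(worker, range(n_threads)))  # wait for all workers
    elapsed = time.monotonic() - start
    return sum(totals) / elapsed
```

In practice `pull_chunk` would issue an HTTP range request (or similar) against the proxy server under test, and one bandwidth value would be recorded per measurement run.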
Since bandwidth quality fluctuates, to ensure the accuracy of the measured predicted bandwidth quality, the bandwidth value is sampled multiple times for each accessible proxy server, and the variance of the bandwidth values is used to determine the predicted bandwidth quality of each accessible proxy server.
Referring to fig. 6, fig. 6 exemplarily provides a schematic diagram for determining the predicted bandwidth quality in an embodiment of the present application, where an average code rate is used to characterize the bandwidth value corresponding to each period. As can be seen from fig. 6, for each accessible proxy server, file data is pulled from that proxy server within a set time, and at least one bandwidth value is determined based on the data amount of the file data and the set time; that is, in each period, a bandwidth value is determined from the amount of file data pulled and the period time. Then, a predicted bandwidth variance δA, which characterizes the predicted bandwidth quality, is determined from the measured bandwidth values.
Therefore, when the predicted bandwidth quality in the embodiment of the present application includes only the predicted bandwidth variance δA, the predicted bandwidth variance δA is used as the predicted network state information.
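The periodic sampling and the variance δA described above can be illustrated with a minimal sketch using Python's statistics module; the function names are illustrative, not from the patent.

```python
from statistics import pvariance

def period_bandwidths(pulled_bytes_per_period, period_s):
    """One bandwidth value per measurement period: data amount / period time."""
    return [b / period_s for b in pulled_bytes_per_period]

def predicted_bandwidth_variance(bandwidths):
    """delta_A: population variance of the periodic bandwidth values;
    a smaller variance indicates a steadier link."""
    return pvariance(bandwidths)
```

A perfectly steady link yields δA = 0, so ranking servers by δA prefers the most stable bandwidth rather than the highest instantaneous rate.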
Secondly, when the predicted network state information includes only the predicted transmission delay, the predicted transmission delay is used as the predicted network state information.
At this time, the client determines the predicted transmission delay corresponding to each proxy server based on the timestamps carried in the data packets sent by each proxy server, where the timestamps are used to characterize the data transmission time between adjacent nodes connected in the data transmission link.
Referring to fig. 7, fig. 7 exemplarily provides a schematic diagram of measuring a predicted transmission delay in an embodiment of the present application. As can be seen from fig. 7, the process of acquiring data from the application server by the client is that the application server sends a data packet to the corresponding proxy server, the proxy server sends the received data packet to the home gateway, and the home gateway sends the received data packet to the client; on the contrary, the process of sending data to the application server by the client is that the client sends a data packet to the home gateway, the home gateway sends the data packet to the proxy server, and the proxy server sends the data packet to the application server.
At this time, the data packet carries a timestamp, and the timestamp is used for recording data transmission time between each adjacent node connected in the data transmission link. For example, recording the time required by the client to transmit the data packet to the home gateway and the time required by the home gateway to transmit the data packet to the proxy server; based on the timestamp carried in the data packet, the time required for the client to transmit the data packet to the proxy server can be determined. Therefore, the client can actively measure the transmission time required by each data packet transmitted from the client to each proxy server.
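The per-hop timestamp accounting above can be illustrated minimally; this helper is a hypothetical illustration, as the patent does not prescribe an implementation.

```python
def client_to_proxy_delay(hop_delays):
    """The timestamps record the transmission time between each pair of
    adjacent nodes (e.g. client -> home gateway, home gateway -> proxy);
    the client-to-proxy delay is the sum of the per-hop times."""
    return sum(hop_delays)
```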
Referring to fig. 8, fig. 8 exemplarily provides a schematic diagram for determining the predicted transmission delay in an embodiment of the present application. In fig. 8, the timestamp of the data packet at the sending end is t, the timestamp of the data packet at the receiving end is T, and the single transmission delay is T − t. When the sending end is the client, the receiving end is the proxy server; when the sending end is the proxy server, the receiving end is the client. To ensure the accuracy of the measured predicted transmission delay, the transmission delay is measured multiple times, and the predicted average transmission delay T̄ and the predicted transmission delay variance δT are determined.
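The T − t measurement repeated over many packets can be sketched as follows (a minimal illustration; synchronized clocks at both ends are assumed, which the patent does not discuss):

```python
from statistics import mean, pvariance

def delay_stats(send_ts, recv_ts):
    """Single delay per packet is T - t (receive timestamp minus send
    timestamp); over many packets this yields the predicted average
    transmission delay and the predicted transmission delay variance."""
    delays = [T - t for t, T in zip(send_ts, recv_ts)]
    return mean(delays), pvariance(delays)
```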
Thus, the predicted transmission delay in the embodiments of the present application includes one or a combination of the predicted average transmission delay T̄ and the predicted transmission delay variance δT.
In one possible implementation, when the predicted transmission delay includes only the predicted average transmission delay T̄, the predicted average transmission delay T̄ is used as the predicted network state information;
when the predicted transmission delay includes only the predicted transmission delay variance δT, the predicted transmission delay variance δT is used as the predicted network state information;
when the predicted transmission delay includes both the predicted average transmission delay T̄ and the predicted transmission delay variance δT, the predicted average transmission delay T̄ and the predicted transmission delay variance δT are weighted, and the weighted result is used as the predicted network state information.
Thirdly, when the predicted network state information includes both the predicted bandwidth quality and the predicted transmission delay, the predicted network state information is a weighted value of the predicted bandwidth quality and the predicted transmission delay.
Specifically, when the predicted bandwidth quality includes the predicted bandwidth variance and the predicted transmission delay includes the predicted average transmission delay and the predicted transmission delay variance, the predicted network state information is determined by the following formula:
ω = μ·δA + η·T̄ + κ·δT
where ω is the predicted network state information, μ is the weight factor of the predicted bandwidth variance, δA is the predicted bandwidth variance, η is the weight factor of the predicted average transmission delay, T̄ is the predicted average transmission delay, κ is the weight factor of the predicted transmission delay variance, and δT is the predicted transmission delay variance.
Based on the formula, the client can obtain the predicted network state information corresponding to each accessible proxy server.
After the client acquires the predicted network state information corresponding to each accessible proxy server, the optimal proxy server, namely the proxy server with the minimum value of the predicted network state information ω, is screened out as the first proxy server based on the predicted network state information corresponding to each accessible proxy server, and the selected first proxy server is accessed.
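The weighted score ω and the minimum-ω screening can be sketched together; the default weights below are placeholders, since the patent leaves μ, η, and κ unspecified.

```python
def predicted_omega(delta_a, t_bar, delta_t, mu=1.0, eta=1.0, kappa=1.0):
    """omega = mu*delta_A + eta*T_bar + kappa*delta_T
    (weight values are assumptions for illustration)."""
    return mu * delta_a + eta * t_bar + kappa * delta_t

def pick_first_proxy(stats):
    """stats: {server: (delta_A, T_bar, delta_T)}; the server with the
    minimum predicted network state omega is screened out as the first
    proxy server."""
    return min(stats, key=lambda s: predicted_omega(*stats[s]))
```

For example, a server with a slightly higher average delay but a much steadier bandwidth can still win, depending on how the weights are chosen.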
It should be noted that, in order to ensure the smoothness of the target application, when the first proxy server to be accessed is selected for the first time, various network state information should be considered comprehensively, that is, the predicted bandwidth quality and the predicted transmission delay are considered.
In a possible implementation manner, the client may further determine at least one candidate proxy server according to each obtained predicted network state information, for example, select a proxy server whose predicted network state information ω is smaller than a preset value as the candidate proxy server.
In the embodiment of the application, after the client selects and accesses the first proxy server, the client performs data transmission with the application server corresponding to the target application through the first proxy server in the running process of the target application. Referring to fig. 9, fig. 9 exemplarily provides a schematic diagram that a client acquires data from an application server during a target application running process in an embodiment of the present application.
As can be seen from fig. 9, the client sends the uplink request to the application server through the first proxy server, the application server sends the downlink data to the client through the first proxy server based on the uplink request, and after receiving the downlink data, the client displays data corresponding to the target application currently to the user on the display interface of the terminal device based on the received downlink data, where the downlink data is application data returned by the application server for the target application.
Therefore, the network status information of the first proxy server seriously affects the speed and efficiency of data transmission. In order to ensure the smoothness of the target application in the running process, the client needs to measure the network state information of the first proxy server in real time or periodically.
In one possible implementation manner, the client obtains current first network state information of the first proxy server based on the received application data, wherein the first network state information includes one or a combination of actual bandwidth quality and actual transmission delay.
When the first network state information includes only the actual bandwidth quality, the client uses the actual bandwidth quality as the first network state information, and determines the actual bandwidth quality of the first proxy server from the received application data.
Specifically, the data amount of the application data received by the target client within the set time period may be accumulated.
Secondly, when the first network state information includes only the actual transmission delay, the client uses the actual transmission delay as the first network state information, and determines the time required for data transmission between nodes according to the timestamps carried in the received application data, so as to determine the transmission delay of the application data transmitted between the client and the first proxy server.
Thirdly, when the first network state information includes both the actual bandwidth quality and the actual transmission delay, the client weights the actual bandwidth quality and the actual transmission delay, and uses the weighted result as the first network state information.
It should be noted that, during the operation process, the main parameter affecting data transmission is bandwidth quality, so when determining the network state information of the currently accessed first proxy server during the operation process of the target application, in order to reduce the determination time, it is preferable to measure only the current actual bandwidth quality of the first proxy server.
In step S401, the client determines the network reference information and obtains a fluctuation value between the first network state information and the network reference information.
Wherein the fluctuation value is a difference between the network reference information and the first network state information.
In the embodiment of the application, to determine whether the first network state information of the currently accessed first proxy server has an influence on the fluency of the target application, it is required to compare the acquired first network state information with the network reference information, determine a fluctuation value between the first network state information and the network reference information, and determine whether the proxy server needs to be switched based on the fluctuation value.
Therefore, after obtaining the first network state information, the client further needs to determine the network reference information, where the network reference information includes one or a combination of the reference bandwidth quality and the reference transmission delay.
When the network reference information only comprises reference bandwidth quality, the reference bandwidth quality is used as the network reference information, a fluctuation value between the reference bandwidth quality and actual bandwidth quality is determined, and whether a proxy server needs to be switched or not is determined according to the fluctuation value;
the client determines the reference bandwidth quality in the network reference information by the following method:
the method I comprises the steps of taking the average value of the predicted bandwidth quality corresponding to each proxy server as the reference bandwidth quality in the network reference information;
since the predicted bandwidth quality only includes the predicted bandwidth variance δAThus the reference bandwidth quality isI.e. the network reference information isAt this time, the process of the present invention,where a' is the first network state information.
In the second method, the reference bandwidth quality in the network reference information is determined based on the predicted bandwidth quality corresponding to each proxy server, the current actual frame rate, and the expected frame rate;
Specifically, the reference bandwidth quality in the network reference information is determined by a formula combining the average value δ̄A of the predicted bandwidth quality corresponding to each proxy server, the current actual frame rate fps, and the expected frame rate fps0. At this time, the fluctuation value is the difference between this reference bandwidth quality and A', where A' is the first network state information.
When the network reference information only comprises reference transmission delay, the reference transmission delay is used as the network reference information, a fluctuation value between the reference transmission delay and the actual transmission delay is determined, and whether the proxy server needs to be switched or not is determined according to the fluctuation value;
and the client takes the average value of the predicted transmission delay corresponding to each proxy server as the reference transmission delay in the network reference information.
Since the predicted transmission delay includes the predicted average transmission delay T̄ and the predicted transmission delay variance δT, the reference transmission delay may be the average of the predicted average transmission delays, the average of the predicted transmission delay variances, or a weighted combination of the two averages.
In this case, the reference transmission delay is related to the information contained in the predicted transmission delay. For example, if the predicted transmission delay includes only the predicted transmission delay variance δT, the reference transmission delay is the average δ̄T of the predicted transmission delay variances, i.e., the network reference information is δ̄T. At this time, the fluctuation value is δ̄T − A', where A' is the first network state information.
Thirdly, when the network reference information comprises reference bandwidth quality and reference transmission delay, weighting the reference bandwidth quality and the reference transmission delay, taking a weighting processing result as the network reference information, determining a fluctuation value between the network reference information and the first network state information, and determining whether the proxy server needs to be switched according to the fluctuation value;
at this time, the first network state information is a weighting result after weighting processing of actual bandwidth quality and actual transmission delay; and the weight of the actual bandwidth quality when the first network state information is determined is consistent with the weight of the reference bandwidth quality when the network reference information is determined, and the weight of the actual transmission delay when the first network state information is determined is consistent with the weight of the reference transmission delay when the network reference information is determined.
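The fluctuation test of steps S401 and S402 reduces to a simple comparison; this sketch uses the absolute difference, which is one reasonable reading of "exceeds the preset range":

```python
def needs_switch(reference, current, limit):
    """The fluctuation value is the difference between the network reference
    information and the current first network state information; exceeding
    the preset range triggers proxy-server switching."""
    return abs(reference - current) > limit
```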
Step S402, if the fluctuation value exceeds the preset range, the client selects a second proxy server from each candidate proxy server based on the current second network state information of each candidate proxy server.
In the embodiment of the application, the acquired fluctuation value is compared with a preset range, and whether the fluctuation value exceeds the preset range is determined.
When the fluctuation value exceeds the preset range, the current network state is poor, which affects the smoothness of the target application and the user experience. At this time, corresponding prompt information is displayed in the display interface of the target application to remind the user that the current network state is poor. Referring to fig. 10, fig. 10 exemplarily provides a display interface diagram of the target application when the fluctuation value exceeds the preset range in the embodiment of the present application; fig. 10 takes a game as the target application, and it can be seen that the user is prompted that the current network state is poor and the application experience may be affected.
After determining that the fluctuation value exceeds the preset range, in order to ensure smooth running of the target application, the client automatically selects and re-accesses a proxy server with a better network state. At this time, the second network state information is measured for each candidate proxy server; for details, refer to the step of measuring the predicted network state information for each accessible proxy server when the first proxy server is selected, which is not repeated here.
After the client side obtains the second network state information of each candidate proxy server, based on the second network state information, the optimal candidate proxy server is selected as the second proxy server and is connected to the second proxy server.
In a possible implementation manner, when the fluctuation value does not exceed the preset range, target parameter information for processing the application data is adjusted, where the target parameter information includes the cache queue, decoding parameters, and the like. In this way, no additional load is placed on the running of the target application, and stutter and stability can be detected in real time. The size of the cache queue, the decoding parameters, and the like can be adjusted dynamically without the user perceiving it, ensuring the smoothness of the application and a better application experience for the user.
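The two branches (adjust parameters vs. switch proxy) can be sketched as below. The step size for the cache queue is an assumption for illustration; the patent only states that the queue size and decoding parameters are tuned.

```python
def handle_fluctuation(fluctuation, limit, cache_queue):
    """Within the preset range, only the target parameters are tuned
    (step size here is a placeholder); outside it, switching is signalled."""
    if abs(fluctuation) <= limit:
        step = 1 if fluctuation > 0 else -1
        return "adjust", max(1, cache_queue + step)
    return "switch", cache_queue
```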
Step S403, the client switches the data transmission link corresponding to the target application from the first proxy server to the second proxy server, where the data transmission link is a link established between the client and the application server for the target application.
After the second proxy server is selected, the client switches from the first proxy server to the second proxy server, that is, the client re-accesses the second proxy server, specifically referring to fig. 9, at this time, the client obtains data required by the target application from the application server through the second proxy server.
After the client switches the data transmission link corresponding to the target application from the first proxy server to the second proxy server, the network state improves, and so does the smoothness of the target application. Referring to fig. 11, fig. 11, taking a game as the target application, exemplarily provides a display interface diagram of the target application after switching to the second proxy server in the embodiment of the present application.
Referring to fig. 12, fig. 12 exemplarily provides a flowchart of an overall method for server switching in the embodiment of the present application, including the following steps:
step S1200, the client pulls file data from each proxy server respectively, and determines the predicted bandwidth quality corresponding to each proxy server based on the amount of file data pulled from each proxy server within a set time;
step S1201, the client determines the predicted transmission delay corresponding to each proxy server based on the timestamps carried in the data packets sent by each proxy server;
step S1202, the client determines the predicted network state information corresponding to each proxy server respectively based on the predicted bandwidth quality and the predicted transmission delay corresponding to each proxy server respectively;
step S1203, the client screens out a first proxy server from each proxy server based on the predicted network state information corresponding to each proxy server;
step S1204, the client accesses the first proxy server on the data transmission link corresponding to the target application;
step S1205, in the running process of the target application, the client acquires the current first network state information of the first proxy server;
step S1206, the client determines the network reference information and obtains a fluctuation value between the first network state information and the network reference information;
step S1207, the client determines whether the fluctuation value exceeds a preset range, if so, step S1208 is executed, otherwise, step S1210 is executed;
step S1208, the client selects a second proxy server from the candidate proxy servers respectively based on the current second network state information of the candidate proxy servers;
step S1209, the client switches the data transmission link corresponding to the target application from the first proxy server to the second proxy server;
step S1210, the client adjusts one or a combination of the buffer queue and the decoding parameters.
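Steps S1205 through S1210 can be condensed into one decision cycle; the candidate scoring is assumed to reuse the ω measure from the selection phase, which the patent implies but does not name for this step.

```python
def run_switch_cycle(reference, current, limit, candidate_scores):
    """One pass of steps S1205-S1210: compute the fluctuation between the
    network reference information and the current first network state, then
    either select the candidate with the best (lowest) second network state
    score, or keep the current proxy and let the caller tune parameters."""
    fluctuation = reference - current
    if abs(fluctuation) > limit:
        return "switch", min(candidate_scores, key=candidate_scores.get)
    return "stay", None
```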
Based on the same inventive concept, the present application further provides a server switching apparatus 1300, and fig. 13 exemplarily provides a server switching apparatus 1300 in the present application, where the apparatus 1300 includes:
a first obtaining unit 1301, configured to obtain current first network state information of a first proxy server in an operation process of a target application, where the first proxy server provides a relay service between a client and an application server corresponding to the target application;
a second obtaining unit 1302, configured to determine the network reference information and obtain a fluctuation value between the first network state information and the network reference information;
a selecting unit 1303, configured to select a second proxy server from the candidate proxy servers based on current second network state information of the candidate proxy servers respectively if the fluctuation value exceeds the preset range;
a switching unit 1304, configured to switch a data transmission link corresponding to the target application from the first proxy server to the second proxy server, where the data transmission link is a link established between the client and the application server for the target application.
In a possible implementation manner, before the first obtaining unit 1301 obtains the current first network state information of the first proxy server, the first obtaining unit is further configured to:
respectively pulling file data from each proxy server, and determining the predicted bandwidth quality corresponding to each proxy server based on the file data amount respectively pulled from each proxy server within a set time; and
respectively determine the predicted transmission delay corresponding to each proxy server based on the timestamps carried in the data packets sent by each proxy server, where the timestamps are used to characterize the data transmission time between adjacent nodes connected in the data transmission link;
and screening out the first proxy server from each proxy server based on the predicted bandwidth quality and the predicted transmission delay corresponding to each proxy server.
In a possible implementation manner, when the first obtaining unit 1301 screens out the first proxy server from each proxy server based on the predicted bandwidth quality and the predicted transmission delay respectively corresponding to each proxy server, specifically:
weighting the predicted bandwidth quality and the predicted transmission delay corresponding to each proxy server, and respectively determining the predicted network state information corresponding to each proxy server;
and screening out the proxy servers with the predicted network state information meeting the preset conditions as first proxy servers based on the predicted network state information.
In one possible implementation, the predicted bandwidth quality includes a predicted bandwidth variance, and the predicted transmission delay includes one or a combination of a predicted average transmission delay and a predicted transmission delay variance.
In a possible implementation manner, if the network reference information includes reference bandwidth quality, the determining, by the second obtaining unit 1302, the network reference information specifically includes:
and determining the reference bandwidth quality in the network reference information based on the obtained predicted bandwidth quality, the current actual frame rate and the expected frame rate.
In a possible implementation manner, the first obtaining unit 1301 is specifically configured to:
in a set time period, obtaining current first network state information based on application data sent by a first proxy server, wherein the first network state information comprises one or a combination of actual bandwidth quality and actual transmission delay;
the actual bandwidth quality is determined by counting according to the data quantity in the application data within a set time period;
the actual transmission delay is determined based on the time stamp in the application data.
In a possible implementation manner, after the second obtaining unit 1302 determines the network reference information and obtains the fluctuation value between the first network state information and the network reference information, the second obtaining unit is further configured to:
and if the fluctuation value is within the preset range, adjusting target parameter information for processing the application data, wherein the target parameter information comprises one or a combination of a cache queue and a decoding parameter.
Referring to fig. 14, fig. 14 exemplarily provides a server switching system 1400 in the embodiment of the present application. The system includes a network state evaluation module 1401, a network state measurement module 1402, a feedback control module 1403, and a balancing policy module 1404, wherein:
and a network state evaluation module 1401, configured to detect network bandwidth and delay between adjacent nodes in a data transmission link established between the client and the application server for the target application.
a network state measurement module 1402, configured to measure network state information based on the network bandwidth and delay detected by the network state evaluation module 1401;
a feedback control module 1403, configured to feed back network state information to the connected first proxy server;
a balancing policy module 1404 configured to determine a policy service based on the network status information. The policy service includes, but is not limited to, selecting and accessing a first proxy server, selecting and accessing a second proxy server, and adjusting the size of a cache queue, decoding parameters, and the like.
For convenience of description, the above components are described separately as units (or modules) divided by function. Of course, in practicing the present application, the functions of the various units (or modules) may be implemented in the same one or more pieces of software or hardware.
After introducing the server switching method and apparatus of the exemplary embodiment of the present application, a server switching computing device of another exemplary embodiment of the present application is introduced next.
As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method, or program product. Accordingly, various aspects of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system."
In one possible implementation, a server switching computing device provided by an embodiment of the present application may include at least a processor and a memory. Wherein the memory stores program code which, when executed by the processor, causes the processor to perform any of the steps of the server switching methods of the various exemplary embodiments of this application.
A server switching computing device 1500 according to such an embodiment of the present application is described below with reference to fig. 15. The server switching computing device 1500 as shown in fig. 15 is only an example and should not bring any limitations to the functionality and scope of use of the embodiments of the present application.
As shown in fig. 15, the components of computing device 1500 may include, but are not limited to: at least one processor 1501, at least one memory 1502, and a bus 1503 connecting the different system components (including the memory 1502 and the processor 1501).
Bus 1503 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
The memory 1502 may include readable media in the form of volatile memory, such as random access memory (RAM) 15021 and/or cache memory 15022, and may further include read-only memory (ROM) 15023.
The memory 1502 may also include a program/utility 15025 having a set (at least one) of program modules 15024, such program modules 15024 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which or some combination thereof may comprise an implementation of a network environment.
In some possible embodiments, the various aspects of the server switching method provided in the present application may also be implemented in the form of a program product including program code for causing a computer device to perform the steps in the server switching method according to various exemplary embodiments of the present application described above in this specification when the program product is run on the computer device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for server switching according to the embodiment of the present application may employ a portable compact disc read-only memory (CD-ROM), include program code, and be executable on a computing device.
A readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which readable program code is carried. Such a propagated data signal may take many forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination thereof. A readable signal medium may be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with a command execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages.
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, according to embodiments of the present application, the features and functions of two or more units described above may be embodied in one unit. Conversely, the features and functions of one unit described above may be further divided into and embodied by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
Claims (9)
1. A method for server switching, the method comprising:
in the running process of a target application, a client acquires current first network state information of a first proxy server, and the first proxy server provides relay service between the client and an application server corresponding to the target application;
the client determines network reference information and obtains a fluctuation value between the first network state information and the network reference information;
if the fluctuation value exceeds a preset range, the client selects a second proxy server from each candidate proxy server based on the current second network state information of each candidate proxy server;
the client switches a data transmission link corresponding to the target application from the first proxy server to the second proxy server, wherein the data transmission link is a link established between the client and the application server for the target application;
the network reference information comprises one or a combination of reference bandwidth quality and reference transmission delay, wherein the reference bandwidth quality is determined based on the predicted bandwidth quality, the current actual frame rate and the expected frame rate of each proxy server, and the reference transmission delay is determined based on the predicted transmission delay of each proxy server.
2. The method of claim 1, wherein prior to the client obtaining the current first network state information of the first proxy server, further comprising:
the client side respectively pulls file data from each proxy server, and determines the predicted bandwidth quality corresponding to each proxy server based on the file data amount respectively pulled from each proxy server within a set time; and
the client determines the predicted transmission delay corresponding to each proxy server based on a timestamp carried in a data packet sent by each proxy server, wherein the timestamp is used for representing the data transmission time between adjacent nodes connected in the data transmission link;
and the client screens out the first proxy server from each proxy server based on the predicted bandwidth quality and the predicted transmission delay corresponding to each proxy server.
3. The method of claim 2, wherein the client screening the first proxy server from the respective proxy servers based on the predicted bandwidth quality and the predicted transmission delay corresponding to the respective proxy servers comprises:
the client performs weighting processing on the predicted bandwidth quality and the predicted transmission delay corresponding to each proxy server, and respectively determines the predicted network state information corresponding to each proxy server;
and the client screens out the proxy server with the predicted network state information meeting the preset conditions as the first proxy server based on the predicted network state information.
4. The method of claim 2 or 3, wherein the predicted bandwidth quality comprises a predicted bandwidth variance;
the predicted propagation delay comprises one or a combination of a predicted average propagation delay and a predicted variance of the propagation delay.
5. The method of claim 1 or 2, wherein the client obtaining current first network state information of the first proxy server comprises:
the client acquires current first network state information based on application data sent by the first proxy server within a set time period, wherein the first network state information comprises one or a combination of actual bandwidth quality and actual transmission delay;
wherein the actual bandwidth quality is determined by counting the data amount in the application data within the set time period;
the actual transmission delay is determined based on a timestamp in the application data.
6. The method of claim 1 or 2, wherein after the client determines network reference information and obtains a fluctuation value between the first network status information and the network reference information, further comprising:
and if the fluctuation value is within the preset range, the client adjusts target parameter information for processing application data, wherein the target parameter information comprises one or a combination of a cache queue and decoding parameters.
7. A server switching apparatus, comprising:
a first obtaining unit, configured to obtain current first network state information of a first proxy server in an operation process of a target application, where the first proxy server provides a relay service between a client and an application server corresponding to the target application;
a second obtaining unit, configured to determine network reference information and obtain a fluctuation value between the first network state information and the network reference information;
a selecting unit, configured to select a second proxy server from the candidate proxy servers based on current second network state information of the candidate proxy servers respectively if the fluctuation value exceeds a preset range;
a switching unit, configured to switch a data transmission link corresponding to the target application from the first proxy server to the second proxy server, where the data transmission link is a link established between the client and the application server for the target application;
the network reference information comprises one or a combination of reference bandwidth quality and reference transmission delay, wherein the reference bandwidth quality is determined based on the predicted bandwidth quality, the current actual frame rate and the expected frame rate of each proxy server, and the reference transmission delay is determined based on the predicted transmission delay of each proxy server.
8. A server switching apparatus, characterized in that the apparatus comprises: a memory and a processor, wherein the memory is configured to store computer instructions; a processor for executing computer instructions to implement the method of any one of claims 1-6.
9. A computer-readable storage medium, wherein the computer-readable storage medium stores computer instructions which, when executed by a processor, implement the method of any one of claims 1-6.
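The overall flow of claims 1-3 (weighted scoring of predicted bandwidth quality and predicted transmission delay to screen out the first proxy server, then switching when the fluctuation exceeds the preset range) can be sketched as follows. The weights, candidate data, and function names are illustrative assumptions, not values from the claims.

```python
def predicted_score(bandwidth, delay, w_bw=0.7, w_delay=0.3):
    """Weighted combination of predicted bandwidth quality and predicted
    transmission delay (higher is better; delay is penalized).
    The weights 0.7/0.3 are arbitrary example values."""
    return w_bw * bandwidth - w_delay * delay

def pick_proxy(candidates):
    """Screen out the proxy whose predicted network state is best."""
    return max(candidates, key=lambda c: predicted_score(c["bw"], c["delay"]))

def should_switch(actual_bw, reference_bw, preset_range=0.3):
    """Switch when the fluctuation between the actual and reference
    network state exceeds the preset range."""
    return abs(actual_bw - reference_bw) / reference_bw > preset_range

candidates = [
    {"name": "proxy_a", "bw": 100.0, "delay": 40.0},
    {"name": "proxy_b", "bw": 80.0, "delay": 10.0},
]
first = pick_proxy(candidates)  # initial access before the application runs
# Later, during the run, measured bandwidth sags well below the reference:
switch = should_switch(actual_bw=55.0, reference_bw=100.0)
second = (pick_proxy([c for c in candidates if c["name"] != first["name"]])
          if switch else first)
```

With these example weights, proxy_a scores 58 against proxy_b's 53 and is accessed first; a 45% bandwidth drop then exceeds the 30% range, so the data transmission link is switched to proxy_b.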
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110381436.XA CN113163002B (en) | 2021-04-09 | 2021-04-09 | Server switching method and device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113163002A (en) | 2021-07-23 |
CN113163002B (en) | 2022-06-17 |
Family
ID=76888951
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110381436.XA Active CN113163002B (en) | 2021-04-09 | 2021-04-09 | Server switching method and device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113163002B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113965577B (en) * | 2021-08-31 | 2024-02-27 | 联通沃音乐文化有限公司 | System and method for intelligently switching Socks5 proxy server nodes |
CN113794710A (en) * | 2021-09-10 | 2021-12-14 | 联想(北京)有限公司 | Method and system for switching operation modes |
CN114244602B (en) * | 2021-12-15 | 2023-04-25 | 腾讯科技(深圳)有限公司 | Multi-user online network service system, method, device and medium |
CN114726850B (en) * | 2022-04-02 | 2024-01-05 | 福达新创通讯科技(厦门)有限公司 | Method, device and storage medium for remote access of VNC |
CN115396529A (en) * | 2022-08-25 | 2022-11-25 | 深圳市元征科技股份有限公司 | Multichannel communication method, device, terminal equipment and storage medium |
CN116055555B (en) * | 2023-01-28 | 2023-08-04 | 深圳市明源云科技有限公司 | Proxy server setting method, proxy server setting device, electronic equipment and readable storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104519540A (en) * | 2013-09-29 | 2015-04-15 | 中国移动通信集团广东有限公司 | Handover decision method, handover decision device and network-side equipment |
CN106231639A (en) * | 2016-08-10 | 2016-12-14 | 广东工业大学 | Vertical handoff method between a kind of heterogeneous network and device |
CN106899681A (en) * | 2017-03-10 | 2017-06-27 | 腾讯科技(深圳)有限公司 | The method and server of a kind of information pushing |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8812579B2 (en) * | 2006-12-21 | 2014-08-19 | Verizon Patent And Licensing Inc. | Apparatus for transferring data via a proxy server and an associated method and computer program product |
CN101640895B (en) * | 2009-08-31 | 2012-03-21 | 北京邮电大学 | Method and system for ensuring streaming media service quality |
CN108075934B (en) * | 2016-11-15 | 2021-05-25 | 腾讯科技(深圳)有限公司 | Network quality monitoring method, device and system |
CN106953926A (en) * | 2017-03-31 | 2017-07-14 | 北京奇艺世纪科技有限公司 | A kind of method for routing and device |
CN108769257B (en) * | 2018-06-28 | 2021-05-07 | 新华三信息安全技术有限公司 | Server switching method and device |
CN111770140A (en) * | 2020-06-09 | 2020-10-13 | 成都中云天下科技有限公司 | Communication method, user equipment and proxy server cluster |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40048381; Country of ref document: HK |
GR01 | Patent grant | ||