CN109688187B - Flow load balancing method, device, equipment and readable storage medium - Google Patents


Info

Publication number
CN109688187B
CN109688187B
Authority
CN
China
Prior art keywords
application server
near point
user side
point application
service website
Prior art date
Legal status
Active
Application number
CN201811047672.2A
Other languages
Chinese (zh)
Other versions
CN109688187A (en
Inventor
舒文捷
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201811047672.2A priority Critical patent/CN109688187B/en
Publication of CN109688187A publication Critical patent/CN109688187A/en
Application granted granted Critical
Publication of CN109688187B publication Critical patent/CN109688187B/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L 67/63: Routing a service request depending on the request content or context
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004: Server selection for load balancing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention discloses a traffic load balancing method, apparatus, device and readable storage medium. The method is based on distributed computing technology and comprises the following steps: when an access request of a user side for accessing a service website is detected, determining whether the user side accesses the service website for the first time; if so, sending the access request to an associated distribution server of the service website; receiving a near point application server address list fed back to the user side by the associated distribution server, and acquiring first target data corresponding to the access request, wherein the associated distribution server acquires and feeds back the near point application server address list according to the IP address in the user side access request; and when the user side is detected to access the service website again, acquiring the corresponding data content based on the near point application server address list. The invention solves the technical problem that excessive load pressure on the existing traffic load server degrades the user experience.

Description

Flow load balancing method, device, equipment and readable storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a readable storage medium for traffic load balancing.
Background
At present, service websites generally use a passive traffic forwarding mode: when a user accesses a service website, the traffic is split and forwarded by the background traffic load server corresponding to that website. In other words, every user request must first reach the background traffic load server of the service website before it can be forwarded to an actual application server. When a large number of clients request access to a service website at the same time, the background traffic load server comes under heavy load pressure and may even be overwhelmed.
Disclosure of Invention
The main object of the present invention is to provide a traffic load balancing method, apparatus, device and readable storage medium, so as to solve the technical problem that, because all user requests are split and forwarded by the traffic load server, the load pressure on the traffic load server becomes excessive and the user experience suffers.
In order to achieve the above object, the present invention provides a traffic load balancing method, including:
when an access request of a user side for accessing a service website is detected, determining whether the user side accesses the service website for the first time;
if the user side accesses the service website for the first time, the access request is sent to an associated distribution server of the service website;
receiving a near point application server address list fed back to the user side by the associated distribution server, and acquiring first target data corresponding to the access request, wherein the associated distribution server acquires and feeds back the near point application server address list according to the IP address in the user side access request;
and when the user side is detected to access the service website again, acquiring corresponding data content based on the near point application server address list.
Optionally, the determining, when an access request for a user to access a service website is detected, whether the user accesses the service website for the first time includes:
when an access request of a user side for accessing a service website is detected, inquiring whether the user side stores an address list of a near point application server corresponding to the service website, wherein the address of the service website and the address of the near point application server have a preset association relation;
if the user side stores the address list of the near point application server corresponding to the service website, determining that the user side does not access the service website for the first time;
and if the address list of the near point application server corresponding to the service website is not stored in the user side, determining that the user side accesses the service website for the first time.
Optionally, after the step of determining, when an access request of the user side for accessing the service website is detected, whether the user side accesses the service website for the first time, the method further includes:
if the user side does not access the service website for the first time, one address in the near point application server address list is selected randomly, and the selected address is used as an alternative address;
and sending the access request to a first near point application server corresponding to the alternative address, and receiving second target data fed back by the first near point application server.
Optionally, after the step of sending the access request to the near point application server corresponding to the alternative address, the method includes:
judging whether the user side receives second target data fed back by the first near point application server within a first preset time length;
if the user side does not receive the second target data within a first preset time length, another near point application server except the first near point application server is reselected from the address list of the near point application servers, and the selected another near point application server is used as a second near point application server;
and sending the access request to the second near point application server, and receiving third target data fed back by the second near point application server, so as to stop the selection of other third near point application servers based on the received third target data.
Optionally, the step of sending the access request to the second near-point application server includes:
judging whether the user side receives third target data fed back by the second near point application server within a second preset time length;
if the user side does not receive the third target data within a second preset time length, the access request is sent to an associated distribution server of the service website again, and prompt information that the first near point application server and the second near point application server are correspondingly invalid is generated and sent, so that the associated distribution server can obtain and feed back a new near point application server address again based on an IP address in the user side access request;
and if the user side receives the third target data within a second preset time length, stopping the selection of other third near point application servers based on the received third target data.
Optionally, the step of receiving an address list of a near point application server fed back to the user side by the association splitting server, and obtaining first target data corresponding to the access request includes:
receiving an address list of a near point application server fed back to the user side by the associated distribution server, and receiving the request number of the current to-be-processed access request of the associated distribution server fed back by the associated distribution server;
and determining a request path of the first target data corresponding to the access request according to the request number so as to obtain the first target data corresponding to the access request.
Optionally, the step of determining, according to the number of the requests, a request path of the first target data corresponding to the access request to obtain the first target data corresponding to the access request includes:
if the request number is smaller than the preset number, receiving an address list of a near point application server fed back to the user side by the association distribution server, so that the association distribution server forwards the access request to a target application server of the service website, and receiving first target data fed back by the target application server;
if the request number is larger than or equal to a preset number, receiving a near point application server address list fed back to the user side by the association shunting server, selecting one near point application server as a target application server based on the near point application server address list, and receiving first target data fed back by the target application server.
The present invention also provides a traffic load balancing apparatus, including:
the first determining module is used for determining whether a user side accesses the service website for the first time when an access request of the user side for accessing the service website is detected;
the sending module is used for sending the access request to an associated distribution server of the service website if the user side accesses the service website for the first time;
the first receiving module is configured to receive a near point application server address list fed back to the user side by the association offload server, and acquire first target data corresponding to the access request, where the association offload server acquires and feeds back the near point application server address list according to an IP address in the user side access request.
Optionally, the first determining module includes:
the system comprises a query unit, a processing unit and a processing unit, wherein the query unit is used for querying whether an address list of a near point application server corresponding to a service website exists in a user side when an access request of the user side for accessing the service website is detected, and a preset association relationship exists between the address of the service website and the address of the near point application server;
a first determining unit, configured to determine that the user side does not access the service website for the first time if the user side stores an address list of a near point application server corresponding to the service website;
a second determining unit, configured to determine that the user side accesses the service website for the first time if the user side does not store the address list of the near point application server corresponding to the service website.
Optionally, the traffic load balancing apparatus further includes:
the selecting module is used for randomly selecting one address in the near point application server address list if the user side does not access the service website for the first time, and taking the selected address as an alternative address;
and the second receiving module is used for sending the access request to the first near point application server corresponding to the alternative address and receiving second target data fed back by the first near point application server.
Optionally, the second receiving module further includes:
the judging unit is used for judging whether the user side receives second target data fed back by the first near point application server within a first preset time length;
a re-selection unit, configured to re-select, if the user side does not receive the second target data within a first preset duration, another near point application server other than the first near point application server from the near point application server address list, and use the selected another near point application server as a second near point application server;
a first receiving unit, configured to send the access request to the second near point application server, and receive third target data fed back by the second near point application server, so as to stop selection of another third near point application server based on the received third target data.
Optionally, the first receiving unit includes:
the judging subunit is configured to judge whether the user side receives third target data fed back by the second near point application server within a second preset duration;
a resending subunit, configured to, if the user side does not receive the third target data within a second preset duration, resend the access request to an associated offload server of the service website, generate and send a prompt message that the first near point application server and the second near point application server are correspondingly invalid, so that the associated offload server reacquires and feeds back a new near point application server address based on an IP address in the user side access request;
a stopping subunit, configured to stop the selection of other third near point application servers based on the received third target data if the user side receives the third target data within a second preset duration.
Optionally, the first receiving module includes:
a second receiving unit, configured to receive a near point application server address list fed back to the user side by the association offload server, and receive a request number of a current pending access request of the association offload server fed back by the association offload server;
and a third determining unit, configured to determine, according to the request number, a request path of the first target data corresponding to the access request, so as to obtain the first target data corresponding to the access request.
Optionally, the third determining unit includes:
the first receiving subunit is configured to receive, if the request number is smaller than a preset number, an address list of a near point application server fed back to the user side by the association offload server, so that the association offload server forwards the access request to a target application server of the service website, and receives first target data fed back by the target application server;
and the second receiving subunit is configured to receive a near point application server address list fed back to the user side by the association offload server if the request number is greater than or equal to a preset number, select one of the near point application servers as a target application server based on the near point application server address list, and receive first target data fed back by the target application server.
In addition, to achieve the above object, the present invention further provides a traffic load balancing device, including: a memory, a processor, a communication bus, and a traffic load balancing program stored on the memory,
the communication bus is used for realizing communication connection between the processor and the memory;
the processor is configured to execute the traffic load balancing program to implement the following steps:
when an access request of a user side for accessing a service website is detected, determining whether the user side accesses the service website for the first time;
if the user side accesses the service website for the first time, the access request is sent to an associated distribution server of the service website;
receiving a near point application server address list fed back to the user side by the associated distribution server, and acquiring first target data corresponding to the access request, wherein the associated distribution server acquires and feeds back the near point application server address list according to the IP address in the user side access request;
and when the user side is detected to access the service website again, acquiring corresponding data content based on the near point application server address list.
Further, to achieve the above object, the present invention also provides a readable storage medium storing one or more programs, the one or more programs being executable by one or more processors for:
when an access request of a user side for accessing a service website is detected, determining whether the user side accesses the service website for the first time;
if the user side accesses the service website for the first time, the access request is sent to an associated distribution server of the service website;
receiving a near point application server address list fed back to the user side by the associated distribution server, and acquiring first target data corresponding to the access request, wherein the associated distribution server acquires and feeds back the near point application server address list according to the IP address in the user side access request;
and when the user side is detected to access the service website again, acquiring corresponding data content based on the near point application server address list.
In the present invention, when an access request of a user side for accessing a service website is detected, it is determined whether the user side accesses the service website for the first time; if the user side accesses the service website for the first time, the access request is sent to an associated distribution server of the service website; a near point application server address list fed back to the user side by the associated distribution server is received, and first target data corresponding to the access request is acquired, wherein the associated distribution server acquires and feeds back the near point application server address list according to the IP address in the user side access request; and when it is detected that the user side accesses the service website again, the corresponding data content is acquired based on the near point application server address list. In other words, the present application no longer relies on a passive traffic forwarding mode in which all user requests are split and forwarded by a traffic load server; instead, traffic is actively forwarded in batches according to whether a user accesses the service website for the first time. This avoids the need for the background traffic load server of the service website to split and forward every access request, effectively relieves the pressure on that server, and thereby solves the technical problem that the user experience is affected by excessive load pressure on the traffic load server when it must split and forward all user requests.
Drawings
Fig. 1 is a schematic flow chart of a traffic load balancing method according to a first embodiment of the present invention;
fig. 2 is a schematic detailed flowchart of the step of determining whether the user side accesses the service website for the first time when detecting an access request of the user side to access the service website in the traffic load balancing method of the present invention;
fig. 3 is a schematic device structure diagram of a hardware operating environment related to the method according to the embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The present invention provides a traffic load balancing method, and in a first embodiment of the traffic load balancing method of the present invention, referring to fig. 1, the traffic load balancing method includes:
step S10, when detecting the access request of user end to access the service website, determining whether the user end accesses the service website for the first time;
step S20, if the user side accesses the service website for the first time, the access request is sent to the associated distribution server of the service website;
step S30, receiving a near point application server address list fed back to the user side by the association splitting server, and acquiring first target data corresponding to the access request, where the association splitting server acquires and feeds back the near point application server address list according to an IP address in the user side access request;
step S40, when it is detected that the user side accesses the service website again, obtaining corresponding data content based on the near point application server address list.
The method comprises the following specific steps:
step S10, when detecting the access request of user end to access the service website, determining whether the user end accesses the service website for the first time;
in this embodiment, the traffic load balancing method may be applied to a user side, where the user side may access a service website through a browser or may search for access through an official application of the service website, and this embodiment is specifically described by taking an application scenario where the user side directly accesses the service website in the browser as an example, when an access request of the user side to access the service website is detected, it is determined whether the user side accesses the service website for the first time, for the user side, it may be determined whether the user side accesses the service website for the first time by determining whether a cache address of the service website exists, and if the user side stores the cache address of the service website, it is obvious that the user side does not access the service website for the first time, and if the user side does not store the cache address of the service website, it cannot be identified whether the user side accesses the service website for the first time, this is because the cached service site addresses are time-sensitive for the client.
Referring to fig. 2, when an access request for a user to access a service website is detected, the determining whether the user accesses the service website for the first time includes:
step S11, when detecting an access request of a user side for accessing a service website, inquiring whether the user side has an address list of a near point application server corresponding to the service website, wherein the address of the service website and the address of the near point application server have a preset association relationship;
in this embodiment, whether the user side accesses the service website for the first time is determined by querying whether an address list of a near point application server corresponding to the service website exists in the user side, where the near point application server is an application server of the service website, and the near point server is generally an application server to which a background traffic load server of the service website needs to forward a user side request and needs to actually process the user side access request. The address of the service website and the address of the near point application server have a preset association relationship, the preset association relationship can be recognized by a user side, and the preset association relationship can be that the address of the service website is the same as the address of the near point application server, or that the address of the service website and the address of the near point application server both carry the same identification information.
Step S12, if the user side stores the address list of the near point application server corresponding to the service website, determining that the user side does not access the service website for the first time;
step S13, if the user side does not store the address list of the near point application server corresponding to the service website, it is determined that the user side accesses the service website for the first time.
If the user side stores the address list of near point application servers corresponding to the service website, it is determined that the user side does not access the service website for the first time; the list may contain one or more near point application server addresses. If the user side does not store such a list, it is determined that the user side accesses the service website for the first time. The step of checking whether the user side stores the near point application server address list may be performed after checking whether the user side stores the cached address of the service website.
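For illustration, a minimal client-side sketch of the check in steps S11 to S13 might look like the following; the cache structure and the names local_cache and is_first_visit are assumptions for the example, not part of the patent:

```python
# Minimal sketch, assuming the user side keeps a simple in-memory mapping from
# service website address to its cached near point application server list.
local_cache: dict[str, list[str]] = {}

def is_first_visit(site_address: str) -> bool:
    """Treat the visit as a first access when no near point list is stored."""
    return not local_cache.get(site_address)
```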
Step S20, if the user side accesses the service website for the first time, the access request is sent to the associated distribution server of the service website;
if the user side accesses the service website for the first time, the access request is sent to an associated shunting server of the service website, wherein the associated shunting server of the service website is generally a shunting server of the service website, in addition, when the user side accesses the service website through a cloud acceleration function of a browser, the associated shunting server also comprises a shunting server of the browser, and when the user side accesses the service website through the cloud acceleration function of the browser, the access request of the user side is firstly shunted through the shunting server of the browser and then shunted through the shunting server of the service website. The present embodiment specifically describes an example of a distribution server in which the associated distribution server is a service website.
Step S30, receiving a near point application server address list fed back to the user side by the association splitting server, and acquiring first target data corresponding to the access request, where the association splitting server acquires and feeds back the near point application server address list according to an IP address in the user side access request;
after receiving an access request that a user side accesses the service website for the first time, the associated offload server acquires and feeds back the near point application server address list according to an IP address in the user side access request, specifically, if the IP address carried in the user side access request is the shanghai IP address, the near point application server address fed back by the service website is shanghai, and if the IP address carried in the user side access request is the shenzhen IP address, the near point application server address fed back by the service website is also shenzhen, wherein the service website determines the near point application server addresses of the user side according to the IP address of the user side and the historical busy degree of each application server corresponding to the IP address, and if the IP address corresponds to each application server a, b, c, d, e, f and the like, only 3 application server addresses fed back to the user side are required, determining the near point application server address of each user terminal according to the number of access requests processed by the application servers a, b, c, d, e, f in the past time period, after determining the near point application server address of each user terminal, sending the near point application server address of each user terminal to the user terminal, after receiving the near point application server address list fed back by the associated splitting server, the user terminal obtains the first target data corresponding to the access request,
step S40, when it is detected that the user side accesses the service website again, obtaining corresponding data content based on the near point application server address list.
In this embodiment, when it is detected that the user side accesses the service website again, since the user side already stores the near point application server address capable of processing the access request, the corresponding data content can be obtained based on the near point application server address list.
It should be noted that, in this embodiment, when detecting an access request of a user to access a service website, after the step of determining whether the user accesses the service website for the first time, the method includes:
step A1, if the user side does not access the service website for the first time, selecting an address in the address list of the near point application server arbitrarily, and taking the selected address as an alternative address;
in this embodiment, the addresses of the near point servers may be randomly ordered to generate a near point server address list, and if the user does not access the service website for the first time, one address in the near point application server address list is arbitrarily selected, and the selected address is used as an alternative address.
Step a2, sending the access request to the first near point application server corresponding to the alternative address, and receiving second target data fed back by the first near point application server.
After the alternative address is selected, the access request is sent to the first near point application server corresponding to the alternative address, and the second target data fed back by the first near point application server is received. Notably, even if the associated distribution server of the service website is attacked, the access request of the user side can still be processed rather than being left unserved, because the near point application server address list is already stored locally; the user experience is thereby improved.
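A minimal sketch of steps A1 and A2 follows, assuming a plain HTTP GET via the Python standard library and treating the response body as the second target data; all names are illustrative:

```python
import random
import urllib.request

def fetch_from_near_point(near_point_list: list[str], request_path: str) -> bytes:
    candidate = random.choice(near_point_list)             # step A1: arbitrary pick
    url = f"http://{candidate}{request_path}"
    with urllib.request.urlopen(url, timeout=5) as resp:   # step A2: send the access request
        return resp.read()                                 # second target data
```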
To recapitulate: when an access request of a user side for accessing a service website is detected, it is determined whether the user side accesses the service website for the first time; if the user side accesses the service website for the first time, the access request is sent to an associated distribution server of the service website; a near point application server address list fed back to the user side by the associated distribution server is received, and first target data corresponding to the access request is acquired, wherein the associated distribution server acquires and feeds back the near point application server address list according to the IP address in the user side access request; and when it is detected that the user side accesses the service website again, the corresponding data content is acquired based on the near point application server address list. In other words, the present application no longer relies on a passive traffic forwarding mode in which all user requests are split and forwarded by a traffic load server; instead, traffic is actively forwarded in batches according to whether a user accesses the service website for the first time. This avoids the need for the background traffic load server of the service website to split and forward every access request, effectively relieves the pressure on that server, and thereby solves the technical problem that the user experience is affected by excessive load pressure on the traffic load server when it must split and forward all user requests.
Further, in another embodiment of the traffic load balancing method provided by the present invention, after the step of sending the access request to the first near point application server corresponding to the alternative address, the method includes:
step B1, determining whether the user side receives the second target data fed back by the first near point application server within a first preset duration;
in this embodiment, the application server of the service website may have a change, and if the user side does not request to access the service website for a long time, therefore, after the application server of the service website is changed, the user side may not have a near point application server address after the address change, or the near point application server address currently stored by the user side may be invalid, and therefore, the user side may not obtain correspondingly required data based on the invalid near point application server address, that is, in this embodiment, it is necessary to determine whether the user side receives the second target data fed back by the first near point application server within the first preset time period.
Step B2, if the user end does not receive the second target data within a first preset time length, another near point application server except the first near point application server is reselected from the near point application server address list, and the selected another near point application server is used as a second near point application server;
step B3, sending the access request to the second near point application server, and receiving third target data fed back by the second near point application server, so as to stop selecting other third near point application servers based on the received third target data.
If the user side does not receive the second target data within the first preset duration, another near point application server other than the first near point application server is reselected from the near point application server address list, and the selected server is used as the second near point application server. The way this second server is selected may be the same as or different from the way the first near point application server was selected. After the second near point application server is selected, the access request is sent to it, and the third target data fed back by the second near point application server is received; once the third target data is received, no further (third) near point application server is selected.
In this embodiment, after the step of sending the access request to the second near point application server, the method includes:
step C1, determining whether the user side receives the third target data fed back by the second near point application server within a second preset duration;
step C2, if the user side does not receive the third target data within a second preset time, resending the access request to the associated offload server of the service website, and generating and sending the prompt information that the first near point application server and the second near point application server are correspondingly invalid, so that the associated offload server reacquires and feeds back a new near point application server address based on the IP address in the user side access request;
Step C3, if the user side receives the third target data within the second preset duration, stopping the selection of any other (third) near point application server based on the received third target data.
The address of the second near point application server currently stored by the user side may also be invalid, so it must be determined whether the user side receives the third target data fed back by the second near point application server within the second preset duration. If the third target data is not received within the second preset duration, then, to avoid spending more time, no further near point application server is selected; instead, the access request is re-sent to the associated distribution server of the service website, so that the associated distribution server re-acquires and feeds back new near point application server addresses based on the IP address in the user side access request. In addition, the user side generates and sends prompt information indicating that the first near point application server and the second near point application server are invalid, so that these invalid servers are excluded when the server side sends new near point application server addresses, which further improves the user experience.
In this embodiment, it is judged whether the user side receives the second target data fed back by the first near point application server within the first preset duration; if not, another near point application server other than the first near point application server is reselected from the near point application server address list and used as the second near point application server; the access request is then sent to the second near point application server, and the third target data fed back by it is received, whereupon no further (third) near point application server is selected. In this way, when expected data is not received, an alternative path is taken in time, the user side is prevented from getting stuck while processing an access request, and the user experience is improved.
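The timeout-driven failover of steps B1 to C2 could be sketched roughly as below, again using the standard library; the helper names (report_invalid, _get) and the timeout values are hypothetical placeholders, not the patented implementation:

```python
import random
import socket
import urllib.error
import urllib.request

def _get(host: str, path: str, timeout: float) -> bytes:
    with urllib.request.urlopen(f"http://{host}{path}", timeout=timeout) as resp:
        return resp.read()

def report_invalid(distribution_server: str, servers: list[str]) -> None:
    # Hypothetical helper: stands in for the prompt information telling the
    # associated distribution server which near point servers proved invalid.
    print(f"reporting invalid near point servers {servers} to {distribution_server}")

def fetch_with_failover(near_points: list[str], request_path: str,
                        distribution_server: str,
                        first_timeout: float = 3.0,
                        second_timeout: float = 3.0) -> bytes:
    first = random.choice(near_points)
    try:
        return _get(first, request_path, first_timeout)        # B1: wait for second target data
    except (urllib.error.URLError, socket.timeout, TimeoutError):
        pass                                                    # B2: first near point server failed
    second = random.choice([s for s in near_points if s != first])
    try:
        return _get(second, request_path, second_timeout)       # B3/C1: wait for third target data
    except (urllib.error.URLError, socket.timeout, TimeoutError):
        # C2: give up on near point servers, report the invalid ones and fall
        # back to the associated distribution server of the service website.
        report_invalid(distribution_server, [first, second])
        return _get(distribution_server, request_path, second_timeout)
```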
Further, the present invention provides another embodiment of the traffic load balancing method, in which the receiving an address list of a near point application server fed back to the user side by the associated offload server, and acquiring the first target data corresponding to the access request includes:
step S31, receiving the address list of the near point application server fed back to the user side by the association splitting server, and receiving the request number of the current pending access request of the association splitting server fed back by the association splitting server;
in this embodiment, after receiving the address list of the near point application server fed back to the user end by the association splitting server, the first access request of the user end requesting to access the service website may be split forwarding processing by the association splitting server, or may be correspondingly sent to the near point application server for processing by feeding back the address of the near point application server to the user end, and a specific processing manner or processing path needs to obtain the request number of the current to-be-processed access request of the association splitting server fed back by the association splitting server.
Step S32, determining a request path of the first target data corresponding to the access request according to the number of the requests, so as to obtain the first target data corresponding to the access request.
A request path of the first target data corresponding to the access request is then determined according to the request number, so as to obtain the first target data corresponding to the access request. The step of determining the request path according to the request number comprises:
step D1, if the request number is less than the preset number, receiving the address list of the near point application server fed back by the association distribution server to the user side, so that the association distribution server forwards the access request to the target application server of the service website, and receiving the first target data fed back by the target application server;
and step D2, if the request number is greater than or equal to a preset number, receiving a near point application server address list fed back by the associated distribution server to the user side, selecting one near point application server as a target application server based on the near point application server address list, and receiving first target data fed back by the target application server.
In this embodiment, when the associated distribution server is not busy, it forwards the access request to the target application server of the service website; when it is busy, the user side sends the access request directly to a target application server selected from the near point application server address list. Whether the associated distribution server is busy is determined by whether the number of pending requests is greater than or equal to the preset number. In this way the processing time of the user side access request is reduced and the user experience is improved.
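The threshold decision of steps D1 and D2 amounts to a simple comparison; in the sketch below the preset number and all names are placeholders chosen for illustration:

```python
import random

PRESET_NUMBER = 1000  # hypothetical threshold separating "busy" from "not busy"

def choose_request_path(pending_requests: int, near_points: list[str],
                        distribution_server: str) -> str:
    """Return the host that should serve the first access request."""
    if pending_requests < PRESET_NUMBER:
        return distribution_server        # D1: distribution server forwards the request itself
    return random.choice(near_points)     # D2: user side picks a near point server directly
```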
Referring to fig. 3, fig. 3 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present invention.
The traffic load balancing device in the embodiment of the present invention may be a PC, or may be a terminal device such as a smart phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a portable computer, or the like.
As shown in fig. 3, the traffic load balancing apparatus may include: a processor 1001, such as a CPU, a memory 1005, and a communication bus 1002. The communication bus 1002 is used for realizing connection communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a memory device separate from the processor 1001 described above.
Optionally, the traffic load balancing device may further include a target user interface, a network interface, a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and the like. The target user interface may comprise a Display screen (Display), an input unit such as a Keyboard (Keyboard), and the optional target user interface may also comprise a standard wired interface, a wireless interface. The network interface may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface).
Those skilled in the art will appreciate that the traffic load balancing apparatus configuration shown in fig. 3 does not constitute a limitation of the traffic load balancing apparatus and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 3, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, and a traffic load balancing program. The operating system is a program that manages and controls the hardware and software resources of the traffic load balancing device, and supports the operation of the traffic load balancing program as well as other software and/or programs. The network communication module is used to implement communication between the components inside the memory 1005 and with other hardware and software in the traffic load balancing device.
In the traffic load balancing apparatus shown in fig. 3, the processor 1001 is configured to execute a traffic load balancing program stored in the memory 1005, and implement the steps of the traffic load balancing method described in any one of the above.
The specific implementation of the traffic load balancing device of the present invention is basically the same as that of each embodiment of the traffic load balancing method described above, and is not described herein again.
The present invention also provides a traffic load balancing apparatus, including:
the first determining module is used for determining whether a user side accesses the service website for the first time when an access request of the user side for accessing the service website is detected;
the sending module is used for sending the access request to an associated distribution server of the service website if the user side accesses the service website for the first time;
the first receiving module is configured to receive a near point application server address list fed back to the user side by the association offload server, and acquire first target data corresponding to the access request, where the association offload server acquires and feeds back the near point application server address list according to an IP address in the user side access request.
The specific implementation of the traffic load balancing apparatus of the present invention is substantially the same as that of each of the embodiments of the traffic load balancing method described above, and will not be described herein again.
The present invention provides a readable storage medium storing one or more programs, the one or more programs being further executable by one or more processors for implementing the steps of the traffic load balancing method as set forth in any of the above.
The specific implementation of the readable storage medium of the present invention is substantially the same as that of each embodiment of the traffic load balancing method, and is not described herein again.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (8)

1. A traffic load balancing method is characterized by comprising the following steps:
when an access request of a user side for accessing a service website is detected, determining whether the user side accesses the service website for the first time;
if the user side accesses the service website for the first time, the access request is sent to an associated distribution server of the service website;
receiving an address list of a near point application server fed back to the user side by the associated distribution server, and receiving the request number of the current to-be-processed access request of the associated distribution server fed back by the associated distribution server;
if the request number is smaller than the preset number, receiving an address list of a near point application server fed back to the user side by the association distribution server, so that the association distribution server forwards the access request to a target application server of the service website, and receiving first target data fed back by the target application server;
if the request number is larger than or equal to a preset number, receiving a near point application server address list fed back to the user side by the association shunting server, selecting one near point application server as a target application server based on the near point application server address list, and receiving first target data fed back by the target application server;
the related distribution server acquires and feeds back the address list of the near point application server according to the IP address in the user side access request;
and when the user side is detected to access the service website again, acquiring corresponding data content based on the near point application server address list.
2. The traffic load balancing method according to claim 1, wherein the determining whether the user side accesses the service website for the first time when the access request of the user side to access the service website is detected comprises:
when an access request of a user side for accessing a service website is detected, inquiring whether the user side stores an address list of a near point application server corresponding to the service website, wherein the address of the service website and the address of the near point application server have a preset association relation;
if the user side stores the address list of the near point application server corresponding to the service website, determining that the user side does not access the service website for the first time;
and if the address list of the near point application server corresponding to the service website is not stored in the user side, determining that the user side accesses the service website for the first time.
3. The traffic load balancing method according to claim 1, wherein after the step of determining, when an access request of the user side for accessing the service website is detected, whether the user side accesses the service website for the first time, the method comprises:
if the user side does not access the service website for the first time, one address in the near point application server address list is selected randomly, and the selected address is used as an alternative address;
and sending the access request to a first near point application server corresponding to the alternative address, and receiving second target data fed back by the first near point application server.
4. The traffic load balancing method according to claim 3, wherein after the step of sending the access request to the first near point application server corresponding to the alternative address, the method comprises:
judging whether the user side receives second target data fed back by the first near point application server within a first preset time length;
if the user side does not receive the second target data within a first preset time length, another near point application server except the first near point application server is reselected from the address list of the near point application servers, and the selected another near point application server is used as a second near point application server;
and sending the access request to the second near point application server, and receiving third target data fed back by the second near point application server, so as to stop the selection of other third near point application servers based on the received third target data.
5. The traffic load balancing method of claim 4, wherein the step of sending the access request to the second near-point application server is followed by:
judging whether the user side receives third target data fed back by the second near point application server within a second preset time length;
if the user side does not receive the third target data within a second preset time length, the access request is sent to an associated distribution server of the service website again, and prompt information that the first near point application server and the second near point application server are correspondingly invalid is generated and sent, so that the associated distribution server can obtain and feed back a new near point application server address again based on an IP address in the user side access request;
and if the user side receives the third target data within a second preset time length, stopping the selection of other third near point application servers based on the received third target data.
6. A traffic load balancing apparatus, comprising:
the first determining module is used for determining whether a user side accesses the service website for the first time when an access request of the user side for accessing the service website is detected;
the sending module is used for sending the access request to an associated distribution server of the service website if the user side accesses the service website for the first time;
a receiving module, configured to receive a near point application server address list fed back to the user side by the association offload server, and receive a request number of a current pending access request of the association offload server fed back by the association offload server;
if the request number is smaller than the preset number, receiving an address list of a near point application server fed back to the user side by the association distribution server, so that the association distribution server forwards the access request to a target application server of the service website, and receiving first target data fed back by the target application server;
and if the request number is greater than or equal to a preset number, receiving a near point application server address list fed back to the user side by the association shunting server, selecting one near point application server as a target application server based on the near point application server address list, and receiving first target data fed back by the target application server, wherein the association shunting server acquires and feeds back the near point application server address list according to an IP address in the user side access request.
7. A traffic load balancing device, characterized in that the traffic load balancing device comprises: a memory, a processor, a communication bus, and a traffic load balancing program stored on the memory,
the communication bus is used for realizing communication connection between the processor and the memory;
the processor is configured to execute the traffic load balancing program to implement the steps of the traffic load balancing method according to any one of claims 1 to 5.
8. A readable storage medium, having stored thereon a traffic load balancing program, which when executed by a processor implements the steps of the traffic load balancing method according to any one of claims 1 to 5.
CN201811047672.2A 2018-09-07 2018-09-07 Flow load balancing method, device, equipment and readable storage medium Active CN109688187B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811047672.2A CN109688187B (en) 2018-09-07 2018-09-07 Flow load balancing method, device, equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811047672.2A CN109688187B (en) 2018-09-07 2018-09-07 Flow load balancing method, device, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN109688187A CN109688187A (en) 2019-04-26
CN109688187B true CN109688187B (en) 2022-04-22

Family

ID=66185680

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811047672.2A Active CN109688187B (en) 2018-09-07 2018-09-07 Flow load balancing method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN109688187B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112637316B (en) * 2020-12-17 2024-02-27 中国农业银行股份有限公司 Communication method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101557427A (en) * 2009-05-11 2009-10-14 阿里巴巴集团控股有限公司 Method for providing diffluent information and realizing the diffluence of clients, system and server thereof
WO2010028590A1 (en) * 2008-09-12 2010-03-18 中兴通讯股份有限公司 Method for providing address list, peer-to-peer network and scheduling method thereof
CN103634269A (en) * 2012-08-21 2014-03-12 中国银联股份有限公司 A single sign-on system and a method
CN103905500A (en) * 2012-12-27 2014-07-02 腾讯数码(天津)有限公司 Method and apparatus for accessing to application server
CN106599308A (en) * 2016-12-29 2017-04-26 郭晓凤 Distributed metadata management method and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8180896B2 (en) * 2008-08-06 2012-05-15 Edgecast Networks, Inc. Global load balancing on a content delivery network
CN102075409B (en) * 2009-11-24 2013-03-20 华为技术有限公司 Method and system for processing request message as well as load balancer equipment
US9888064B2 (en) * 2015-02-11 2018-02-06 International Business Machines Corporation Load-balancing input/output requests in clustered storage systems

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010028590A1 (en) * 2008-09-12 2010-03-18 中兴通讯股份有限公司 Method for providing address list, peer-to-peer network and scheduling method thereof
CN101557427A (en) * 2009-05-11 2009-10-14 阿里巴巴集团控股有限公司 Method for providing diffluent information and realizing the diffluence of clients, system and server thereof
CN103634269A (en) * 2012-08-21 2014-03-12 中国银联股份有限公司 A single sign-on system and a method
CN103905500A (en) * 2012-12-27 2014-07-02 腾讯数码(天津)有限公司 Method and apparatus for accessing to application server
WO2014101433A1 (en) * 2012-12-27 2014-07-03 腾讯科技(深圳)有限公司 Method and device for accessing application server
CN106599308A (en) * 2016-12-29 2017-04-26 郭晓凤 Distributed metadata management method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Joint Task Offloading Scheduling and Transmit Power Allocation for Mobile-Edge Computing Systems; Yuyi Mao et al.; IEEE; 2017-05-11; full text *
Design and Implementation of a Multi-Front-End Parallel Load-Balancing Cluster Server (多前端并行负载均衡集群服务器的设计与实现); 王俊涛 et al.; Computer Engineering and Applications (计算机工程与应用); 2005-12-31; pp. 140-142 *

Also Published As

Publication number Publication date
CN109688187A (en) 2019-04-26

Similar Documents

Publication Publication Date Title
EP2985705A2 (en) Webpage access method and apparatus, and router
CN104754073A (en) Resource access method and device
EP2698730B1 (en) Data acquisition method, device and system
CN111625743B (en) Resource loading method and device and electronic equipment
US10547705B2 (en) Caching proxy method and apparatus
US20120244847A1 (en) Transfer of data-intensive content between portable devices
CN109753207B (en) Information processing method and device and storage medium
CN102394880B (en) Method and device for processing jump response in content delivery network
CN103544324A (en) Kernel-mode data access method, device and system
WO2017080459A1 (en) Method, device and system for caching and providing service contents and storage medium
WO2021237433A1 (en) Message pushing method and apparatus, and electronic device and computer-readable medium
CN111064713B (en) Node control method and related device in distributed system
US11889133B2 (en) Burst traffic processing method, computer device and readable storage medium
CN109937566B (en) Method and apparatus for computing offload in a networked environment
CN109088844B (en) Information interception method, terminal, server and system
CN105101456A (en) Internet of Things device trigger method, device and system
JP7086853B2 (en) Network access methods, related equipment and systems
CN111510353A (en) Detection method, device and equipment of online equipment and computer readable storage medium
CN114553762B (en) Method and device for processing flow table items in flow table
CN109688187B (en) Flow load balancing method, device, equipment and readable storage medium
US20210250408A1 (en) Server node selection method and terminal device
US9866641B2 (en) Information query method and device
CN110708293B (en) Method and device for distributing multimedia service
KR20150025249A (en) Method for content routing based on batching and apparatus performing the method
CN108112052B (en) Terminal network sharing method and device, air conditioner and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant