WO2020024379A1 - Server access method and network system - Google Patents
Server access method and network system
- Publication number: WO2020024379A1 (PCT/CN2018/105548)
- Authority: WIPO (PCT)
- Prior art keywords: server, user request, application server, access, application
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
- H04L67/1008—Server selection for load balancing based on parameters of servers, e.g. available memory or workload
- H04L67/1017—Server selection for load balancing based on a round robin mechanism
Definitions
- the present application belongs to the field of communication technology, and particularly relates to a server access method and a network system.
- the embodiments of the present application provide a server access method and a network system to improve network stability and server load capacity.
- a first aspect of the embodiments of the present application provides a server access method, including:
- the external network server receives the user request sent by the mobile terminal and forms a user request queue; wherein the external network server is connected to multiple access servers through a load balancing device;
- the load balancing device distributes, by polling, the user requests in the user request queue to the access server with the lowest load;
- the access server distributes, by polling based on the load balancing algorithm, the received user requests to the application servers, and the application servers receive and feed back the user requests; wherein each of the access servers is connected to multiple application servers.
- a second aspect of the embodiments of the present application provides a network system, including:
- An external network server is configured to receive the user request and form a user request queue.
- the external network server is connected to multiple access servers through a load balancing device.
- a load balancing device configured to distribute, by polling, the user requests in the user request queue to the access server with the lowest load;
- An access server configured to distribute received user requests to an application server based on a load balancing algorithm; wherein each of the access servers is connected to multiple application servers;
- An application server configured to receive and feed back the user request.
- a multi-server operation mode is adopted, so the system no longer relies on a single network for support, which improves the stability and reliability of the system.
- the multi-server mode naturally improves the overall load capacity of the servers.
- the user requests in the user request queue are allocated, during each polling round, to the access server with the lowest current load, which can further improve the stability and reliability of the system.
- FIG. 1 is a schematic structural diagram of a network system according to an embodiment of the present application.
- FIG. 2 is an implementation flowchart of a server access method according to an embodiment of the present application
- FIG. 3 is a flowchart of implementing step 21 in a server access method according to an embodiment of the present application;
- FIG. 4 is an implementation flowchart of another server access method according to an embodiment of the present application;
- FIG. 5 is an implementation flowchart of yet another server access method according to an embodiment of the present application;
- FIG. 6 is a flowchart of implementing step 55 in yet another server access method according to an embodiment of the present application.
- FIG. 1 is a schematic structural diagram of a network system according to an embodiment of the present application.
- the network system includes: a mobile terminal, an extranet server, a load balancing device, multiple access servers, and multiple application servers.
- the mobile terminal is used to send a user request.
- An external network server is configured to receive the user request and form a user request queue.
- the external network server is connected to multiple access servers through a load balancing device.
- a load balancing device is configured to distribute, by polling, the user requests in the user request queue to the access server with the lowest load.
- the access server is configured to distribute the received user requests to the application server based on the load balancing algorithm; wherein each of the access servers is connected to multiple application servers.
- An application server configured to receive and feed back the user request.
- the mobile terminal includes portable terminal devices such as a laptop computer, a tablet computer (Portable Android Device, PAD), and a smart phone.
- the access server is a server located in a demilitarized zone (DMZ), and implements communication between the external network and the internal network through network address translation (NAT) to access the application server.
- the access server includes, for example, an enterprise Web server, an FTP server, or a mail server.
- the application server is an intranet server, including, for example, a DB (database) server, and supports the TCP protocol.
- a multi-server operation mode is adopted, so the system no longer relies on a single network for support, which improves the stability and reliability of the system.
- the multi-server approach naturally improves the overall load capacity of the servers.
- by assigning the user requests in the user request queue to the access server with the lowest load during each polling round, the stability and reliability of the system can be further improved.
- the extranet server is specifically configured to:
- the external network server determines whether the user request satisfies a preset address-and-port combination, for example a network address and port combination such as 202.101.112.0:80, or an independent mobile terminal address-and-port combination, for example 202.101.112.115:80;
- if satisfied, the qualifying user request is intercepted, and the user request queue is formed in chronological order; if not, request-failure feedback is returned to the mobile terminal;
- the mobile terminal is assigned an address and port to authorize it to access the external network server. Therefore, the mobile terminal's address and port can be verified to improve the overall network security.
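As a rough illustration of the address-and-port check and queue formation described above, the following Python sketch filters incoming requests against a preset network prefix or an individual terminal address:port pair and queues the matching ones in arrival order. The names (ALLOWED_NETWORK, UserRequest, the in-memory deque) and the /24 prefix are illustrative assumptions, not details taken from the patent.

```python
import ipaddress
from collections import deque
from dataclasses import dataclass

# Assumed illustrative values, mirroring the 202.101.112.0:80 and
# 202.101.112.115:80 examples in the text.
ALLOWED_NETWORK = ipaddress.ip_network("202.101.112.0/24")
ALLOWED_PORT = 80
ALLOWED_TERMINALS = {("202.101.112.115", 80)}

@dataclass
class UserRequest:
    source_ip: str
    source_port: int
    payload: bytes

# FIFO queue, so requests stay in chronological order.
user_request_queue: deque = deque()

def accept(request: UserRequest) -> bool:
    """Enqueue the request if its address and port are authorized; else reject."""
    network_ok = (ipaddress.ip_address(request.source_ip) in ALLOWED_NETWORK
                  and request.source_port == ALLOWED_PORT)
    terminal_ok = (request.source_ip, request.source_port) in ALLOWED_TERMINALS
    if network_ok or terminal_ok:
        user_request_queue.append(request)  # "intercepted" into the queue
        return True
    return False  # caller sends request-failure feedback to the terminal
```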
- the load balancing device is specifically configured to:
- the load balancing device detects the IP address of the access server through ICMP packets at preset intervals; if an ICMP response from that IP address is received within the set time, the access server is considered able to provide services. Alternatively, the load balancing device detects the service port of the access server through TCP packets at preset intervals; if a response from the service port of the access server is received within the set time, the access server is considered able to provide services. The access servers that can provide services are thus determined.
- the load balancing device distributes, by polling, the user requests in the user request queue to the access server with the lowest load among those that can provide services.
- detecting the IP address or the service port of the access server determines whether the access server can provide services, so that the access servers able to provide services are identified; user requests are then distributed to the access server with the lowest load among them, further improving the stability and reliability of the network system.
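The probe-then-dispatch behaviour can be pictured with the short sketch below: each access server is checked either with an ICMP echo (via the system ping command) or a TCP connect to its service port, and the next user request goes to the least-loaded server that responded in time. The ping invocation, the current_load() hook, and the fixed one-second timeouts are assumptions made for illustration; they are not mandated by the text.

```python
import socket
import subprocess
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AccessServer:
    ip: str
    service_port: int
    current_load: Callable[[], int]  # assumed hook reporting outstanding requests

def icmp_alive(ip: str, timeout_s: int = 1) -> bool:
    # One ICMP echo request; alive if a reply arrives within the timeout
    # (flags shown are the Linux ping variant).
    result = subprocess.run(["ping", "-c", "1", "-W", str(timeout_s), ip],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def tcp_alive(ip: str, port: int, timeout_s: float = 1.0) -> bool:
    # Alive if the service port accepts a TCP connection within the timeout.
    try:
        with socket.create_connection((ip, port), timeout=timeout_s):
            return True
    except OSError:
        return False

def pick_access_server(servers: List[AccessServer]) -> AccessServer:
    # Keep only servers that pass either probe, then take the lowest current load.
    available = [s for s in servers
                 if icmp_alive(s.ip) or tcp_alive(s.ip, s.service_port)]
    if not available:
        raise RuntimeError("no access server can currently provide services")
    return min(available, key=lambda s: s.current_load())
```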
- the access server is further configured to:
- acquire status information of multiple application servers, determine the status of the multiple application servers, and update an application server list according to the status of the application servers, where the application server list is used to allocate user requests.
- the status includes: whether the application server is down;
- if an application server is determined to be down, it is deleted from the application server list and does not participate in polling until it returns to normal; if it is determined to be normal, it is added to the application server list.
- the access server is an Nginx server.
- the access server acquires status information of multiple application servers, including: detecting multiple application servers in real time to obtain status information of the application server.
- the access server obtaining status information of multiple application servers includes: the access server periodically sends a connection confirmation message to the application server.
- the connection confirmation message is, for example, real-time clock information; if no return message is received from the application server within the preset time interval, the application server is determined to be down, and its status information is then updated to down.
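A minimal sketch of the periodic connection-confirmation idea follows. It assumes a send_heartbeat() transport function that returns the application server's reply or None on timeout; a server that misses a reply within the preset interval is marked down and drops out of the polling list until it answers again. The registry class, its method names, and the intervals are illustrative assumptions.

```python
import time

class AppServerRegistry:
    """Tracks which application servers are up and may receive polled requests."""

    def __init__(self, servers, send_heartbeat, interval_s=5.0, timeout_s=2.0):
        self.servers = list(servers)          # e.g. ["10.0.0.11:8000", ...]
        self.send_heartbeat = send_heartbeat  # assumed: returns a reply or None
        self.interval_s = interval_s
        self.timeout_s = timeout_s
        self.alive = {s: True for s in self.servers}

    def poll_list(self):
        # Only servers currently marked up take part in request polling.
        return [s for s, up in self.alive.items() if up]

    def check_once(self):
        for server in self.servers:
            # The confirmation message carries the current clock value.
            reply = self.send_heartbeat(server, payload=time.time(),
                                        timeout=self.timeout_s)
            # No reply within the timeout -> down; a later reply marks it up again.
            self.alive[server] = reply is not None

    def run_forever(self):
        # Typically run in a background thread, e.g.
        # threading.Thread(target=registry.run_forever, daemon=True).start()
        while True:
            self.check_once()
            time.sleep(self.interval_s)
```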
- the access server is specifically used to:
- the access server obtains the load capacity value VAL1 and the data processing speed value VAL2 of each application server in the application server list, computes the weight WEI of each application server as WEI = W1·VAL1 + W2·VAL2 (with W1 = 0.4 and W2 = 0.6), and distributes, by polling according to the weight WEI, the received user requests to each application server whose load capacity value VAL1 is not zero.
- the load capacity value reflects whether the application server is able to bear the load currently.
- the initial value of the load capacity value VAL1 of each application server is a preset value; for example, they may all be 10, or different preset values may be used.
- the initial load capacity value and the data processing speed value VAL2 of each application server can be pre-configured in the access server.
- after an application server is assigned a user request, its load capacity value VAL1 is decreased by 1; when the application server returns feedback information corresponding to the user request, its load capacity value VAL1 is increased by 1. If the load capacity value VAL1 of an application server is 0, the application server is deleted from the application server list and does not participate in polling until its load capacity value VAL1 is no longer 0.
- the access server distributing, by polling according to the weight WEI, the received user requests to each application server whose load capacity value VAL1 is not zero includes:
- the access server allocates, by polling, the received user requests to the application server whose load capacity value VAL1 is not zero and whose weight WEI is the largest; or
- the allocation probability of each application server whose load capacity value VAL1 is not zero is determined from its weight WEI, and the user requests in the user request queue are allocated, according to the allocation probability, to the application servers whose load capacity value VAL1 is not zero, thereby achieving better resource allocation for the current polling round and improving the system load capacity.
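The two weighted-polling variants just described can be written down compactly. The sketch below uses the weight formula stated in this document (WEI = 0.4·VAL1 + 0.6·VAL2) and Python's random.choices for the probability-proportional variant; the AppServer data layout is an assumption made for illustration.

```python
import random
from dataclasses import dataclass
from typing import List

W1, W2 = 0.4, 0.6  # weight proportions for VAL1 and VAL2 given in the text

@dataclass
class AppServer:
    name: str
    val1: int    # load capacity value; zero means it takes no requests for now
    val2: float  # data processing speed value

def weight(server: AppServer) -> float:
    return W1 * server.val1 + W2 * server.val2

def pick_largest_weight(servers: List[AppServer]) -> AppServer:
    # Variant 1: always choose the eligible server with the largest WEI.
    eligible = [s for s in servers if s.val1 != 0]
    return max(eligible, key=weight)

def pick_by_probability(servers: List[AppServer]) -> AppServer:
    # Variant 2: choose with probability proportional to WEI among eligible servers.
    eligible = [s for s in servers if s.val1 != 0]
    return random.choices(eligible, weights=[weight(s) for s in eligible], k=1)[0]
```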
- FIG. 2 shows a flowchart of a server access method according to an embodiment of the present application. As shown in FIG. 2, the method includes steps S21 to S23.
- S21: the external network server receives a user request sent by the mobile terminal and forms a user request queue.
- the external network server is connected to multiple access servers through a load balancing device.
- S22: the load balancing device distributes, by polling, the user requests in the user request queue to the access server with the lowest load.
- S23: the access server distributes, by polling based on the load balancing algorithm, the received user requests to the application servers, and the application servers receive and feed back the user requests; wherein each of the access servers is connected to multiple application servers.
- the foregoing server access method is implemented based on the network system shown in FIG. 1.
- step S21 includes:
- S211: the external network server determines whether the user request sent by the mobile terminal satisfies a combination of a preset address and port, or a combination of an independent mobile terminal address and port; if satisfied, S212 is executed; if not, S213 is executed.
- S212: the qualifying user request is intercepted, and the user request queue is formed in chronological order.
- S213: request-failure feedback is returned to the mobile terminal.
- as another embodiment, as shown in FIG. 4, the method includes steps S41 to S44.
- S41: the external network server receives a user request sent by the mobile terminal and forms a user request queue.
- the external network server is connected to multiple access servers through a load balancing device.
- S42: the load balancing device detects the IP address of the access server through ICMP packets at preset intervals; if an ICMP response from that IP address is received within the set time, the access server is considered able to provide services. Alternatively, the load balancing device detects the service port of the access server through TCP packets at preset intervals; if a response from the service port of the access server is received within the set time, the access server is considered able to provide services. The access servers that can provide services are thus determined.
- S43: the load balancing device distributes, by polling, the user requests in the user request queue to the access server with the lowest load among those that can provide services.
- S44: the access server distributes, by polling based on the load balancing algorithm, the received user requests to the application servers, and the application servers receive and feed back the user requests; wherein each of the access servers is connected to multiple application servers.
- as yet another embodiment, as shown in FIG. 5, the method includes steps S51 to S55.
- S51: the external network server receives a user request sent by the mobile terminal and forms a user request queue.
- the external network server is connected to multiple access servers through a load balancing device.
- S52: the load balancing device detects the IP address of the access server through ICMP packets at preset intervals; if an ICMP response from that IP address is received within the set time, the access server is considered able to provide services. Alternatively, the load balancing device detects the service port of the access server through TCP packets at preset intervals; if a response from the service port of the access server is received within the set time, the access server is considered able to provide services. The access servers that can provide services are thus determined.
- S53: the load balancing device distributes, by polling, the user requests in the user request queue to the access server with the lowest load among those that can provide services.
- S54: the access server obtains status information of multiple application servers, determines the status of the multiple application servers, and updates an application server list according to the status of the application servers.
- the status includes whether the application server is down.
- each of the access servers is connected to multiple application servers.
- the application server list is used for the subsequent allocation of user requests.
- if an application server is determined to be down, it is deleted from the application server list and does not participate in polling until it returns to normal; if an application server is determined to be normal, it is added to the application server list.
- S55: the access server distributes, by polling based on the load balancing algorithm, the received user requests to the application servers in the application server list, and the application servers receive and feed back the user requests.
- the step S55 includes steps S551 to S553.
- S551: the access server obtains the load capacity value VAL1 and the data processing speed value VAL2 of each application server in the application server list; the load capacity value reflects the application server's current ability to bear load.
- S552: the weight WEI of each application server is computed according to VAL1 and VAL2 as WEI = W1·VAL1 + W2·VAL2, where W1 = 0.4 and W2 = 0.6.
- S553: the access server distributes, by polling according to the weight WEI, the received user requests to each application server whose load capacity value VAL1 is not zero.
- the initial value of the load capacity value VAL1 of each application server is a preset value; for example, they may all be 10, or different preset values may be used. The initial load capacity value and the data processing speed value VAL2 of each application server can be pre-configured in the access server.
- after an application server is assigned a user request, its load capacity value VAL1 is decreased by 1; when the application server returns feedback information corresponding to the user request, its load capacity value VAL1 is increased by 1. If the load capacity value VAL1 of an application server is 0, the application server is deleted from the application server list and does not participate in polling until its load capacity value VAL1 is no longer 0.
- S553 includes:
- the access server allocates, by polling, the received user requests to the application server whose load capacity value VAL1 is not zero and whose weight WEI is the largest; or
- the allocation probability of each application server whose load capacity value VAL1 is not zero is determined from its weight WEI in the application server list, and the user requests in the access server's user request queue are allocated, according to the allocation probability, to the application servers whose load capacity value VAL1 is not zero.
Abstract
The present application is applicable to the field of communication technology and provides a server access method and a network system. The server access method includes: an external network server receives user requests sent by mobile terminals and forms a user request queue, where the external network server is connected to multiple access servers through a load balancing device; the load balancing device distributes, by polling, the user requests in the user request queue to the access server with the lowest load; the access server distributes, by polling based on a load balancing algorithm, the received user requests to application servers, and the application servers receive and feed back the user requests, where each access server is connected to multiple application servers. By operating multiple servers, the present application improves the stability of the network and the load capacity of the servers.
Description
This application claims priority to Chinese patent application No. 201810865078.8, entitled "Server access method and network system" and filed with the Chinese Patent Office on August 1, 2018, the entire contents of which are incorporated herein by reference.
The present application belongs to the field of communication technology, and in particular relates to a server access method and a network system.
With the continuous development of communication technology, the scale of networks keeps expanding; at the same time, terminal devices keep being upgraded, and a large amount of application development revolves around the network, driving rapid growth in the number of network users and in traffic demand.
However, with the rapid progress of network hardware, the bottleneck effect of network bandwidth has been weakening day by day, and the performance problems of Web servers have gradually become apparent. Because a single-server system has only a limited capacity to process user requests, as the number of concurrently accessing users grows, the load on the Web server increases and the server may even crash.
It is therefore very urgent to solve the problems of rapidly growing user requests and high server load.
In view of this, the embodiments of the present application provide a server access method and a network system, so as to improve the stability of the network and the load capacity of the servers.
A first aspect of the embodiments of the present application provides a server access method, including:
an external network server receives user requests sent by mobile terminals and forms a user request queue, where the external network server is connected to multiple access servers through a load balancing device;
the load balancing device distributes, by polling, the user requests in the user request queue to the access server with the lowest load;
the access server distributes, by polling based on a load balancing algorithm, the received user requests to application servers, and the application servers receive and feed back the user requests, where each access server is connected to multiple application servers.
A second aspect of the embodiments of the present application provides a network system, including:
a mobile terminal, configured to send user requests;
an external network server, configured to receive the user requests and form a user request queue, where the external network server is connected to multiple access servers through a load balancing device;
a load balancing device, configured to distribute, by polling, the user requests in the user request queue to the access server with the lowest load;
access servers, each configured to distribute, by polling based on a load balancing algorithm, the received user requests to application servers, where each access server is connected to multiple application servers; and
application servers, configured to receive and feed back the user requests.
In the embodiments of the present application, a multi-server operation mode is adopted, so the system no longer relies on a single network for support, which improves the stability and reliability of the system; the multi-server mode naturally improves the overall load capacity of the servers. In addition, allocating the user requests in the user request queue, in each polling round, to the access server with the lowest load in the current round further improves the stability and reliability of the system.
To describe the technical solutions in the embodiments of the present application more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and a person of ordinary skill in the art can obtain other drawings based on them without creative effort.
FIG. 1 is a schematic structural diagram of a network system according to an embodiment of the present application;
FIG. 2 is an implementation flowchart of a server access method according to an embodiment of the present application;
FIG. 3 is an implementation flowchart of step 21 in a server access method according to an embodiment of the present application;
FIG. 4 is an implementation flowchart of another server access method according to an embodiment of the present application;
FIG. 5 is an implementation flowchart of yet another server access method according to an embodiment of the present application;
FIG. 6 is an implementation flowchart of step 55 in yet another server access method according to an embodiment of the present application.
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it should be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, apparatuses, circuits, and methods are omitted so that unnecessary details do not obscure the description of the present application.
To illustrate the technical solutions described in the present application, specific embodiments are described below.
FIG. 1 shows a schematic structural diagram of a network system according to an embodiment of the present application. As shown in FIG. 1, the network system includes: a mobile terminal, an external network server, a load balancing device, multiple access servers, and multiple application servers.
The mobile terminal is configured to send user requests.
The external network server is configured to receive the user requests and form a user request queue, where the external network server is connected to the multiple access servers through the load balancing device.
The load balancing device is configured to distribute, by polling, the user requests in the user request queue to the access server with the lowest load.
The access server is configured to distribute, by polling based on a load balancing algorithm, the received user requests to application servers, where each access server is connected to multiple application servers.
The application server is configured to receive and feed back the user requests.
In this embodiment of the present application, the mobile terminal includes portable terminal devices such as a laptop computer, a tablet computer (Portable Android Device, PAD), and a smartphone.
The access server is a server located in a demilitarized zone (DMZ) and implements communication between the external network and the internal network through network address translation (NAT), so as to access the application servers. The access server includes, for example, an enterprise Web server, an FTP server, or a mail server. The application server is an intranet server, including, for example, a DB (database) server, and supports the TCP protocol.
In this embodiment of the present application, a multi-server operation mode is adopted, so the system no longer relies on a single network for support, which improves the stability and reliability of the system; the multi-server mode naturally improves the overall load capacity of the servers. In addition, allocating the user requests in the user request queue, in each polling round, to the access server with the lowest load in that round further improves the stability and reliability of the system.
Optionally, the external network server is specifically configured to:
determine whether the user request sent by the mobile terminal satisfies a preset address-and-port combination, for example a network address and port combination such as 202.101.112.0:80, or an independent mobile terminal address-and-port combination, for example 202.101.112.115:80;
if satisfied, intercept the qualifying user request and form the user request queue in chronological order;
if not satisfied, return request-failure feedback to the mobile terminal.
The mobile terminal is assigned an address and a port, which authorizes it to access the external network server; therefore, verifying the mobile terminal's address and port improves the overall security of the network.
Optionally, the load balancing device is specifically configured to:
detect the IP address of the access server through ICMP packets at preset intervals, and if an ICMP response from that IP address is received within the set time, consider the access server able to provide services; or detect the service port of the access server through TCP packets at preset intervals, and if a response from the service port of the access server is received within the set time, consider the access server able to provide services; and thereby determine the access servers that can provide services.
The load balancing device distributes, by polling, the user requests in the user request queue to the access server with the lowest load among those that can provide services.
Detecting the IP address or the service port of the access server determines whether the access server can provide services, so that the access servers able to provide services are identified; user requests are then distributed to the access server with the lowest load among them, further improving the stability and reliability of the network system.
Optionally, the access server is further configured to:
acquire status information of multiple application servers, determine the status of the multiple application servers, and update an application server list according to the status of the application servers, where the application server list is used to allocate user requests.
The status includes whether the application server is down.
If an application server is determined to be down, it is deleted from the application server list and does not participate in polling until it returns to normal.
If an application server is determined to be normal, it is added to the application server list.
As an embodiment of the present application, the access server is an Nginx server, and the access server acquiring status information of multiple application servers includes: detecting the multiple application servers in real time to obtain their status information. As another embodiment of the present application, the access server acquiring status information of multiple application servers includes: the access server periodically sends a connection confirmation message, for example real-time clock information, to the application server; if no return message is received from the application server within a preset time interval, the application server is determined to be down, and its status information is updated to down.
By obtaining the status of the application servers and determining whether they are down, the application server list is updated so that downed application servers do not participate in the polling of user requests, further improving the stability and reliability of the network system.
Furthermore, the access server is specifically configured to:
obtain the load capacity value VAL1 and the data processing speed value VAL2 of each application server in the application server list;
compute the weight WEI of each application server according to VAL1 and VAL2, where WEI = W1·VAL1 + W2·VAL2, with W1 = 0.4 and W2 = 0.6; and
distribute, by polling according to the weight WEI, the received user requests to each application server whose load capacity value VAL1 is not zero.
The load capacity value reflects the application server's current ability to bear load. The initial value of the load capacity value VAL1 of each application server is a preset value; for example, they may all be 10, or different preset values may be used. The initial load capacity value and the data processing speed value VAL2 of each application server can be pre-configured in the access server.
After an application server is assigned a user request, its load capacity value VAL1 is decreased by 1; when the application server returns feedback information corresponding to the user request, its load capacity value VAL1 is increased by 1. If the load capacity value VAL1 of an application server is 0, the application server is deleted from the application server list and does not participate in polling until its load capacity value VAL1 is no longer 0.
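The VAL1 bookkeeping described above (minus one when a request is assigned, plus one when feedback comes back, exclusion from polling while the value sits at zero) could be tracked roughly as follows; the class and method names are illustrative assumptions rather than anything specified in the text.

```python
class LoadCapacityTracker:
    """Keeps VAL1 per application server and hides servers whose VAL1 is 0."""

    def __init__(self, initial_val1):
        # e.g. {"app-a": 10, "app-b": 10, "app-c": 10} as preset initial values
        self.val1 = dict(initial_val1)

    def pollable(self):
        # Servers with VAL1 == 0 drop out of polling until VAL1 rises again.
        return [name for name, value in self.val1.items() if value > 0]

    def on_assigned(self, name):
        # A user request was dispatched to this application server: VAL1 -= 1.
        self.val1[name] -= 1

    def on_feedback(self, name):
        # The server returned feedback for a user request: VAL1 += 1.
        self.val1[name] += 1
```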
The weights of the application servers are obtained, and the polling allocation of user requests is determined according to the weights, which improves the server load capacity.
Optionally, the access server distributing, by polling according to the weight WEI, the received user requests to each application server whose load capacity value VAL1 is not zero includes:
the access server allocates, by polling, the received user requests to the application server whose load capacity value VAL1 is not zero and whose weight WEI is the largest; or
the allocation probability of each application server whose load capacity value VAL1 is not zero is determined from the weights WEI of the application servers in the application server list whose load capacity value VAL1 is not zero, and the user requests in the access server's user request queue are allocated, according to the allocation probability, to the application servers whose load capacity value VAL1 is not zero.
For example, suppose the application servers include A, B, and C, the load capacity values of A, B, and C are all non-zero, and their weights WEI are 2, 3, and 5, respectively. The allocation probabilities of A, B, and C are then 2/(2+3+5) = 20%, 3/(2+3+5) = 30%, and 5/(2+3+5) = 50%. If the user requests in the access server queue for the current polling round are user request 1 through user request 10, then according to the allocation probabilities, 2 of the 10 user requests are allocated to application server A, 3 to application server B, and 5 to application server C. In the next polling round, this step is repeated, that is, the weights and allocation probabilities corresponding to that round are obtained, and the next allocation of user requests is performed on that basis.
By allocating the user requests in the user request queue, according to the allocation probabilities, to the application servers whose load capacity value VAL1 is not zero, better resource allocation is achieved for the current polling round and the system load capacity is improved.
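A short calculation reproducing the A/B/C example above. It splits the ten queued requests deterministically in proportion to the weights 2, 3, and 5; the text only fixes the resulting 20%/30%/50% shares, so the simple rounding used here is an assumption.

```python
# Weights WEI for application servers A, B, and C from the example above.
weights = {"A": 2, "B": 3, "C": 5}
total = sum(weights.values())                            # 10
probabilities = {k: w / total for k, w in weights.items()}
print(probabilities)                                     # {'A': 0.2, 'B': 0.3, 'C': 0.5}

requests = [f"user request {i}" for i in range(1, 11)]   # the 10 requests this round

# Proportional split: 2 requests to A, 3 to B, 5 to C, matching the example.
allocation, start = {}, 0
for name, p in probabilities.items():
    count = round(p * len(requests))
    allocation[name] = requests[start:start + count]
    start += count

print({k: len(v) for k, v in allocation.items()})        # {'A': 2, 'B': 3, 'C': 5}
```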
FIG. 2 shows a flowchart of a server access method according to an embodiment of the present application. As shown in FIG. 2, the method includes steps S21 to S23.
S21: the external network server receives user requests sent by the mobile terminal and forms a user request queue, where the external network server is connected to multiple access servers through a load balancing device.
S22: the load balancing device distributes, by polling, the user requests in the user request queue to the access server with the lowest load.
S23: the access server distributes, by polling based on the load balancing algorithm, the received user requests to the application servers, and the application servers receive and feed back the user requests, where each access server is connected to multiple application servers.
In this embodiment of the present application, the above server access method is implemented based on the network system shown in FIG. 1.
As an embodiment of the present application, as shown in FIG. 3, step S21 includes:
S211: the external network server determines whether the user request sent by the mobile terminal satisfies a combination of a preset address and port, or a combination of an independent mobile terminal address and port; if satisfied, S212 is executed; if not, S213 is executed.
S212: the qualifying user request is intercepted, and the user request queue is formed in chronological order.
S213: request-failure feedback is returned to the mobile terminal.
As another embodiment of the present application, a further improvement is made on the basis of the embodiment shown in FIG. 2 or FIG. 3; the improvement of the embodiment shown in FIG. 2 is taken as an example for description. As shown in FIG. 4, the method includes steps S41 to S44.
S41: the external network server receives user requests sent by the mobile terminal and forms a user request queue, where the external network server is connected to multiple access servers through a load balancing device.
S42: the load balancing device detects the IP address of the access server through ICMP packets at preset intervals; if an ICMP response from that IP address is received within the set time, the access server is considered able to provide services. Alternatively, the load balancing device detects the service port of the access server through TCP packets at preset intervals; if a response from the service port of the access server is received within the set time, the access server is considered able to provide services. The access servers that can provide services are thus determined.
S43: the load balancing device distributes, by polling, the user requests in the user request queue to the access server with the lowest load among those that can provide services.
S44: the access server distributes, by polling based on the load balancing algorithm, the received user requests to the application servers, and the application servers receive and feed back the user requests, where each access server is connected to multiple application servers.
As another embodiment of the present application, a further improvement is made on the basis of the embodiment shown in FIG. 2, FIG. 3, or FIG. 4; the improvement of the embodiment shown in FIG. 4 is taken as an example for description. As shown in FIG. 5, the method includes steps S51 to S55.
S51: the external network server receives user requests sent by the mobile terminal and forms a user request queue, where the external network server is connected to multiple access servers through a load balancing device.
S52: the load balancing device detects the IP address of the access server through ICMP packets at preset intervals; if an ICMP response from that IP address is received within the set time, the access server is considered able to provide services. Alternatively, the load balancing device detects the service port of the access server through TCP packets at preset intervals; if a response from the service port of the access server is received within the set time, the access server is considered able to provide services. The access servers that can provide services are thus determined.
S53: the load balancing device distributes, by polling, the user requests in the user request queue to the access server with the lowest load among those that can provide services.
S54: the access server obtains status information of multiple application servers, determines the status of the multiple application servers, and updates an application server list according to the status of the application servers.
The status includes whether the application server is down. Each access server is connected to multiple application servers.
The application server list is used for the subsequent allocation of user requests.
If an application server is determined to be down, it is deleted from the application server list and does not participate in polling until it returns to normal. If an application server is determined to be normal, it is added to the application server list.
S55: the access server distributes, by polling based on the load balancing algorithm, the received user requests to the application servers in the application server list, and the application servers receive and feed back the user requests.
As an embodiment of the present application, as shown in FIG. 6, step S55 includes steps S551 to S553.
S551: the access server obtains the load capacity value VAL1 and the data processing speed value VAL2 of each application server in the application server list, where the load capacity value reflects the application server's current ability to bear load.
S552: the weight WEI of each application server is computed according to VAL1 and VAL2, where WEI = W1·VAL1 + W2·VAL2, with W1 = 0.4 and W2 = 0.6.
S553: the access server distributes, by polling according to the weight WEI, the received user requests to each application server whose load capacity value VAL1 is not zero.
The load capacity value reflects the application server's current ability to bear load. The initial value of the load capacity value VAL1 of each application server is a preset value; for example, they may all be 10, or different preset values may be used. The initial load capacity value and the data processing speed value VAL2 of each application server can be pre-configured in the access server.
After an application server is assigned a user request, its load capacity value VAL1 is decreased by 1; when the application server returns feedback information corresponding to the user request, its load capacity value VAL1 is increased by 1. If the load capacity value VAL1 of an application server is 0, the application server is deleted from the application server list and does not participate in polling until its load capacity value VAL1 is no longer 0.
The weight proportions of the load capacity value VAL1 and the data processing speed value VAL2 are W1 = 0.4 and W2 = 0.6, respectively. Compared with directly allocating user requests to the application servers in sequence, taking these values can improve the overall load capacity of the application servers by about 10%, which is why these weight proportions are used in the present application.
The weights of the application servers are obtained, and the polling allocation of user requests is determined according to the weights, which improves the server load capacity.
Furthermore, S553 includes:
the access server allocates, by polling, the received user requests to the application server whose load capacity value VAL1 is not zero and whose weight WEI is the largest; or
the allocation probability of each application server whose load capacity value VAL1 is not zero is determined from the weights WEI of the application servers in the application server list whose load capacity value VAL1 is not zero, and the user requests in the access server's user request queue are allocated, according to the allocation probability, to the application servers whose load capacity value VAL1 is not zero.
For example, suppose the application servers include A, B, and C, the load capacity values of A, B, and C are all non-zero, and their weights WEI are 2, 3, and 5, respectively. The allocation probabilities of A, B, and C are then 2/(2+3+5) = 20%, 3/(2+3+5) = 30%, and 5/(2+3+5) = 50%. If the user requests in the access server queue for the current polling round are user request 1 through user request 10, then according to the allocation probabilities, 2 of the 10 user requests are allocated to application server A, 3 to application server B, and 5 to application server C. In the next polling round, this step is repeated, that is, the weights and allocation probabilities corresponding to that round are obtained, and the next allocation of user requests is performed on that basis.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present application, and they shall all fall within the protection scope of the present application.
Claims (12)
- 1. A server access method, characterized by comprising: an external network server receives user requests sent by a mobile terminal and forms a user request queue, wherein the external network server is connected to multiple access servers through a load balancing device; the load balancing device distributes, by polling, the user requests in the user request queue to the access server with the lowest load; and the access server distributes, by polling based on a load balancing algorithm, the received user requests to application servers, and the application servers receive and feed back the user requests, wherein each access server is connected to multiple application servers.
- 2. The method according to claim 1, characterized in that the external network server receiving the user request sent by the mobile terminal and forming the user request queue comprises: the external network server determines whether the user request sent by the mobile terminal satisfies a combination of a preset address and port, or a combination of an independent mobile terminal address and port; if satisfied, the qualifying user request is intercepted and the user request queue is formed in chronological order; if not satisfied, request-failure feedback is returned to the mobile terminal.
- 3. The method according to claim 1 or 2, characterized in that, before the load balancing device distributes, by polling, the user requests in the user request queue to the access server with the lowest load, the method further comprises: the load balancing device detects the IP address of the access server through ICMP packets at preset intervals, and if an ICMP response from that IP address is received within the set time, the access server is considered able to provide services; or the load balancing device detects the service port of the access server through TCP packets at preset intervals, and if a response from the service port of the access server is received within the set time, the access server is considered able to provide services; and the access servers that can provide services are determined. Correspondingly, the load balancing device distributing, by polling, the user requests in the user request queue to the access server with the lowest load comprises: the load balancing device distributes, by polling, the user requests in the user request queue to the access server with the lowest load among those that can provide services.
- 4. The method according to claim 1 or 2, characterized in that, before the access server distributes, by polling based on the load balancing algorithm, the received user requests to the application servers, the method further comprises: the access server acquires status information of multiple application servers, determines the status of the multiple application servers, and updates an application server list according to the status of the application servers, wherein the application server list is used to allocate user requests.
- 5. The method according to claim 4, characterized in that the access server distributing, by polling based on the load balancing algorithm, the received user requests to the application servers comprises: the access server obtains the load capacity value VAL1 and the data processing speed value VAL2 of each application server in the application server list, wherein the load capacity value reflects the application server's current ability to bear load; the weight WEI of each application server is computed according to VAL1 and VAL2 as WEI = W1·VAL1 + W2·VAL2, where W1 = 0.4 and W2 = 0.6; and the access server distributes, by polling according to the weight WEI, the received user requests to each application server whose load capacity value VAL1 is not zero.
- 6. The method according to claim 5, characterized in that the access server distributing, by polling according to the weight WEI, the received user requests to each application server whose load capacity value VAL1 is not zero comprises: the access server allocates, by polling, the received user requests to the application server whose load capacity value VAL1 is not zero and whose weight WEI is the largest; or the allocation probability of each application server whose load capacity value VAL1 is not zero is determined according to the weights WEI of the application servers in the application server list whose load capacity value VAL1 is not zero, and the user requests in the access server's user request queue are allocated, according to the allocation probability, to the application servers whose load capacity value VAL1 is not zero.
- 7. A network system, characterized by comprising: a mobile terminal, configured to send user requests; an external network server, configured to receive the user requests and form a user request queue, wherein the external network server is connected to multiple access servers through a load balancing device; the load balancing device, configured to distribute, by polling, the user requests in the user request queue to the access server with the lowest load; the access servers, each configured to distribute, by polling based on a load balancing algorithm, the received user requests to application servers, wherein each access server is connected to multiple application servers; and the application servers, configured to receive and feed back the user requests.
- 8. The network system according to claim 7, characterized in that the external network server is specifically configured to: determine whether the user request sent by the mobile terminal satisfies a combination of a preset address and port, or a combination of an independent mobile terminal address and port; if satisfied, intercept the qualifying user request and form the user request queue in chronological order; and if not satisfied, return request-failure feedback to the mobile terminal.
- 9. The network system according to claim 7 or 8, characterized in that the load balancing device is specifically configured to: detect the IP address of the access server through ICMP packets at preset intervals, and if an ICMP response from that IP address is received within the set time, consider the access server able to provide services; or detect the service port of the access server through TCP packets at preset intervals, and if a response from the service port of the access server is received within the set time, consider the access server able to provide services; determine the access servers that can provide services; and distribute, by polling, the user requests in the user request queue to the access server with the lowest load among those that can provide services.
- 10. The network system according to claim 9, characterized in that the access server is specifically configured to: acquire status information of multiple application servers and determine the status of the multiple application servers, wherein the status includes whether the application server is down; if an application server is determined to be down, delete it from the application server list so that it does not participate in polling until it returns to normal; and if an application server is determined to be normal, add it to the application server list, wherein the application server list is used for the subsequent allocation of user requests.
- 11. The network system according to claim 10, characterized in that the access server is further configured to: obtain the load capacity value VAL1 and the data processing speed value VAL2 of each application server in the application server list, wherein the load capacity value reflects the application server's current ability to bear load; compute the weight WEI of each application server according to VAL1 and VAL2 as WEI = W1·VAL1 + W2·VAL2, where W1 = 0.4 and W2 = 0.6; and distribute, by polling according to the weight WEI, the received user requests to each application server whose load capacity value VAL1 is not zero.
- 12. The network system according to claim 11, characterized in that the access server is specifically configured to: allocate, by polling, the received user requests to the application server whose load capacity value VAL1 is not zero and whose weight WEI is the largest; or determine, according to the weights WEI of the application servers in the application server list whose load capacity value VAL1 is not zero, the allocation probability of each such application server, and allocate the user requests in the access server's user request queue, according to the allocation probability, to the application servers whose load capacity value VAL1 is not zero.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810865078.8A CN109040236A (zh) | 2018-08-01 | 2018-08-01 | Server access method and network system
CN201810865078.8 | 2018-08-01 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020024379A1 (zh) | 2020-02-06 |
Family
ID=64647453
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/105548 WO2020024379A1 (zh) | 2018-08-01 | 2018-09-13 | Server access method and network system
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109040236A (zh) |
WO (1) | WO2020024379A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111049919B (zh) * | 2019-12-19 | 2022-09-06 | 上海米哈游天命科技有限公司 | Method, apparatus, device, and storage medium for processing user requests |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102447719A (zh) * | 2010-10-12 | 2012-05-09 | 上海遥薇(集团)有限公司 | Dynamic load balancing information processing system for Web GIS services |
CN104301414A (zh) * | 2014-10-21 | 2015-01-21 | 无锡云捷科技有限公司 | Server load balancing method based on the network protocol stack |
CN106230992A (zh) * | 2016-09-28 | 2016-12-14 | 中国银联股份有限公司 | Load balancing method and load balancing node |
CN107493351A (zh) * | 2017-10-09 | 2017-12-19 | 郑州云海信息技术有限公司 | Load balancing method and apparatus for a client to access a storage system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7480705B2 (en) * | 2001-07-24 | 2009-01-20 | International Business Machines Corporation | Dynamic HTTP load balancing method and apparatus |
2018
- 2018-08-01 CN CN201810865078.8A patent/CN109040236A/zh active Pending
- 2018-09-13 WO PCT/CN2018/105548 patent/WO2020024379A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN109040236A (zh) | 2018-12-18 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18928256; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 18928256; Country of ref document: EP; Kind code of ref document: A1 |