WO2019205406A1 - High-Concurrency Service Request Processing Method and Apparatus, Computer Device, and Storage Medium - Google Patents

High-Concurrency Service Request Processing Method and Apparatus, Computer Device, and Storage Medium

Info

Publication number
WO2019205406A1
Authority
WO
WIPO (PCT)
Prior art keywords
service
service request
load balancing
server
data
Prior art date
Application number
PCT/CN2018/104151
Other languages
English (en)
French (fr)
Inventor
刘丹
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2019205406A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 Server selection for load balancing
    • H04L 67/1008 Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 Server selection for load balancing
    • H04L 67/1012 Server selection for load balancing based on compliance of requirements or conditions with available server resources

Definitions

  • The present application relates to the field of Internet technologies, and in particular to a high-concurrency service request processing method and apparatus, a computer device, and a storage medium.
  • High-concurrency request threads include requests from customers to a commercial organization to purchase goods or contact customer service, as well as requests from customer-service staff contacting customers for after-sales processing or follow-up visits.
  • When many such requests are sent simultaneously, the connection lines often become congested and access speed drops sharply.
  • On e-commerce platforms, especially during large-scale promotions such as "Double Eleven" and "Double Twelve" or flash sales of best-selling goods, many commercial activities such as flash-sale products, flash-sale red envelopes, and flash-sale lotteries have emerged. These activities are usually carried out within a short period of time and generate a large volume of traffic, so such high-concurrency services place great load pressure on the service provider's network servers, such as application servers and databases.
  • The main defects of the prior art in handling high-concurrency requests for customer information are that the verification and caching procedures are cumbersome, which slows down the processing of high-concurrency requests, and that the request access lines are poor, which easily leads to packet loss.
  • A high-concurrency service request processing method includes:
  • receiving service requests sent by multiple clients;
  • having a load balancing server distribute the service requests according to status information, the load balancing server selecting a suitable application server and assigning each request to one of the application servers responsible for service processing; and
  • if service data already exists in the application server, coupling the currently pending service request with the service data, extracting the service data in first-in-first-out order to couple it with the client's service request, and returning the service data through the service interface to the client that sent the pending service request; if there is no service data in the application server, returning a no-service-data message to the client that sent the currently pending service request.
  • A high-concurrency service request processing apparatus includes:
  • a receiving unit configured to receive service requests sent by multiple clients;
  • an allocating unit configured to have a load balancing server distribute the service requests according to status information, the load balancing server selecting a suitable application server and assigning each request to one of the application servers responsible for service processing; and
  • a returning unit configured to, if service data already exists in the application server, couple the currently pending service request with the service data, extract the service data in first-in-first-out order to couple it with the client's service request, and return the service data through the service interface to the client that sent the pending service request; and, if there is no service data in the application server, return a no-service-data message to the client that sent the currently pending service request.
  • A computer device includes a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the following steps:
  • receiving service requests sent by multiple clients;
  • having a load balancing server distribute the service requests according to status information, the load balancing server selecting a suitable application server and assigning each request to one of the application servers responsible for service processing; and
  • if service data already exists in the application server, coupling the currently pending service request with the service data, extracting the service data in first-in-first-out order to couple it with the client's service request, and returning the service data through the service interface to the client that sent the pending service request; if there is no service data in the application server, returning a no-service-data message to the client that sent the currently pending service request.
  • A storage medium stores computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
  • receiving service requests sent by multiple clients;
  • having a load balancing server distribute the service requests according to status information, the load balancing server selecting a suitable application server and assigning each request to one of the application servers responsible for service processing; and
  • if service data already exists in the application server, coupling the currently pending service request with the service data, extracting the service data in first-in-first-out order to couple it with the client's service request, and returning the service data through the service interface to the client that sent the pending service request; if there is no service data in the application server, returning a no-service-data message to the client that sent the currently pending service request.
  • In the above high-concurrency service request processing method and apparatus, computer device, and storage medium, service requests sent by multiple clients are received, and the load balancing server distributes the service requests according to status information, selecting a suitable application server and assigning each request to one of the application servers responsible for service processing.
  • When there are multiple application servers, the service interface distributes the received service requests to the application servers through the load balancing server according to load balancing.
  • Each load balancing server periodically collects its own status information and periodically sends it to the other load balancing servers.
  • Each application server periodically collects its own status information and periodically sends it to all load balancing servers.
  • Alternatively, each application server periodically collects its own status information and periodically sends it to at least one load balancing server of the load balancing server group; that load balancing server then periodically sends the status information of all application servers to the other load balancing servers. If service data already exists in the application server, the currently pending service request is coupled with the service data: the service data is extracted in first-in-first-out order, coupled with the client's service request, and returned through the service interface to the client that sent the pending service request. If there is no service data in the application server, a no-service-data message is returned to the client that sent the current service request. This improves the access efficiency of high-concurrency request threads, reduces the loss of request data, and ensures the integrity of the service data request process.
  • FIG. 1 is a flowchart of a high-concurrency service request processing method provided in an embodiment of the present application;
  • FIG. 2 is a schematic diagram of a load balancer processing method according to an embodiment of the present application;
  • FIG. 3 is a flowchart of a load balancing server processing method according to an embodiment of the present application;
  • FIG. 4 is a structural block diagram of a high-concurrency service request processing apparatus according to an embodiment of the present application.
  • A high-concurrency service request processing method includes the following steps:
  • Step S101: receive service requests sent by multiple clients.
  • The service requests sent by different clients may request processing of the same service or of different services.
  • The gateway can concurrently receive service requests sent by multiple clients at the same time. "Concurrently receiving the service requests of multiple clients" here may mean receiving the service requests of multiple clients within the same time interval; for example, if 1 million service requests are received within 1 second, this can be regarded as concurrently receiving 1 million service requests within 1 second.
  • Step S102: assign each service request to one of the application servers responsible for service processing.
  • When only one application server is included, the received high-concurrency service requests are allocated directly to that application server.
  • When there are multiple application servers, the service interface distributes the received service requests to the application servers through a load balancing server according to load balancing. When the load balancing server matches the Address Resolution Protocol request, the load balancing server feeds its physical address back to the client; the client sends the service request to the load balancing server according to that physical address, and the load balancing server dispatches the service request to the node server with the smallest load value among the node servers connected to the load balancing server for processing.
  • Step S103: if service data already exists in the application server, couple the currently pending service request with the service data, extracting the service data in first-in-first-out order to couple it with the client's service request, and return the service data through the service interface to the client that sent the pending service request; if there is currently no service data in the application server, return a no-service-data message to the client that sent the currently pending service request.
  • After a service request is received, an application server is selected to process it. The pending service request may be, for example, a prize redemption, a red envelope, or a flash-sale lottery.
  • If the application server holds service data, that service data can be coupled with the service request sent by the client, the service data being taken out in first-in-first-out order and coupled with the client's service request.
  • For example, if the client sends a flash-sale lottery service request and the application server currently holds prize data, the prize data is coupled with the client's flash-sale request and a winning message is returned to the client; if the application server currently holds no prize data, a non-winning message is returned to the client.
  • The application server needs to store the client's winning information in a database or cache for subsequent operations.
  • If the application server holds multiple items of prize data, they may, for example, be cached in a queue.
  • When coupling with the clients' service requests, the prize data may, for example, be taken out in first-in-first-out order.
  • The service data is returned to the client through the service interface; if there is no service data in the application server, a no-service-data message is returned to the client through the service interface, i.e., the service request fails.
  • The application server also needs to store, in a database or cache, the information of each client that obtains service data together with the service data information.
  • The service data may be, for example, prize data, red-envelope data, or flash-sale lottery data.
  • As above, if the client sends a flash-sale lottery service request
  • and the application server currently holds prize data,
  • the prize data is coupled with the client's flash-sale request and a winning message is returned to the client; if the application server currently holds no prize data, a non-winning message is returned to the client.
  • The application server needs to store the client's winning information in a database or cache for subsequent operations.
  • If the application server holds multiple items of prize data, they may, for example, be cached in a queue and taken out in first-in-first-out order when coupling with the clients' service requests.
  • The application server also needs to store the client information and the service data information in a database or cache.
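The queue behavior described above, cache multiple items of prize data, pop one in first-in-first-out order for each incoming request, and answer "no data" once the queue is empty, can be sketched as follows. This is a minimal illustration; the names `PrizePool` and `handle_request` are ours, not from the patent, and a real system would persist winners to a database or cache rather than a list.

```python
from collections import deque

class PrizePool:
    """FIFO cache of prize (service) data, as in step S103."""

    def __init__(self, prizes):
        self.queue = deque(prizes)  # prize data held in first-in-first-out order
        self.winners = []           # stands in for the database/cache of winning info

    def handle_request(self, client_id):
        if self.queue:
            prize = self.queue.popleft()             # extract in FIFO order
            self.winners.append((client_id, prize))  # store winning info for later steps
            return {"client": client_id, "win": True, "prize": prize}
        # no service data left: the request fails with a no-service-data reply
        return {"client": client_id, "win": False, "prize": None}

pool = PrizePool(["red envelope", "coupon"])
print(pool.handle_request("c1"))  # first client is coupled with the oldest prize
print(pool.handle_request("c2"))
print(pool.handle_request("c3"))  # pool exhausted: non-winning reply
```

Because `deque.popleft` removes each item exactly once, two concurrent requests can never be coupled with the same prize as long as access to the pool is serialized.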
  • In some embodiments, a conventional network storage system uses a centralized storage server to store all of the data.
  • In other embodiments, a distributed memory cache may be used, that is, the data is distributed and stored on multiple independent devices.
  • The distributed memory cache adopts a scalable system structure, using multiple storage servers to share the storage load and a location server to locate the stored information; common examples include the Redis cache and the Memcached cache.
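The "distribute data across multiple independent devices" idea can be sketched by routing each key to one of several cache nodes with a stable hash, the way clients of a distributed memory cache such as Redis or Memcached are typically sharded. The class and node names here are hypothetical; a real deployment would rely on a client library's own sharding.

```python
import hashlib

class ShardedCache:
    """Toy distributed memory cache: each key lives on exactly one node."""

    def __init__(self, node_names):
        # one dict per storage server; a real system would use Redis/Memcached nodes
        self.nodes = {name: {} for name in node_names}
        self.names = sorted(node_names)

    def _node_for(self, key):
        # the "location" step: a stable hash picks the storage server for a key
        digest = hashlib.md5(key.encode()).hexdigest()
        return self.names[int(digest, 16) % len(self.names)]

    def set(self, key, value):
        self.nodes[self._node_for(key)][key] = value

    def get(self, key):
        return self.nodes[self._node_for(key)].get(key)

cache = ShardedCache(["cache-1", "cache-2", "cache-3"])
cache.set("order:42", "paid")
assert cache.get("order:42") == "paid"  # the same key always routes to the same node
assert cache.get("missing") is None
```

Because the hash is deterministic, every reader and writer agrees on which storage server holds a given key, which is what lets the storage load be shared without a per-request lookup table.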
  • In an embodiment, the method further includes: rate-limiting the service requests, and acquiring pre-stored global state data from a preset storage area, where the global state data is used to represent the global state of the service and is obtained by the server processing the service requests that are not rate-limited; the service requests that are not rate-limited are sent to the application server.
  • Rate-limiting the service requests to the application server further disperses the load pressure of high-concurrency services.
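The patent does not specify a rate-limiting algorithm, so as one plausible sketch, here is a fixed-window limiter that forwards at most a set number of requests per second to the application server and throttles the rest. The class name and window choice are our assumptions.

```python
import time

class RateLimiter:
    """Fixed-window limiter: pass at most `limit` requests per window to the app server."""

    def __init__(self, limit, window_seconds=1.0):
        self.limit = limit
        self.window = window_seconds
        self.window_start = time.monotonic()
        self.count = 0  # stands in for the pre-stored global state data

    def allow(self):
        now = time.monotonic()
        if now - self.window_start >= self.window:
            # new window: reset the global counter
            self.window_start = now
            self.count = 0
        if self.count < self.limit:
            self.count += 1
            return True   # not rate-limited: forward to the application server
        return False      # rate-limited: throttled

limiter = RateLimiter(limit=3)
decisions = [limiter.allow() for _ in range(5)]
print(decisions)  # [True, True, True, False, False]
```

A production limiter would keep the counter in shared storage (e.g. a cache) so that all gateway instances enforce one global limit, matching the "global state data in a preset storage area" described above.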
  • In other embodiments, a gateway whose service request carrying capacity is greater than that of the application server may be used to receive the requests sent by the clients.
  • For example, if the service request carrying capacity of the application server is 1 million service requests per second, a gateway that can carry 10 million service requests per second can be selected.
  • In still other embodiments, cached data may also be configured to further disperse the load pressure of high-concurrency services.
  • A conventional network storage system uses a centralized storage server to store all data.
  • A distributed memory cache may also be used, storing the data on multiple independent devices.
  • The distributed memory cache uses a scalable system structure, sharing the storage load among multiple storage servers and locating the stored information with a location server; common examples include the Redis cache and the Memcached cache.
  • In an embodiment, the service interface distributes the received service requests to the application servers through the load balancing server according to load balancing.
  • The load balancing server (Load-Balancing Server, LBS) uses load-balancing technology to distribute the high-concurrency service requests among the application servers.
  • For example, the load balancing server distributes the service requests sent from user PCs to application servers 1 to 3.
  • The service interface determines, according to the service processing status of each application server, to which application server a given high-concurrency service request is allocated.
  • The specific load-balancing technology used can be selected according to actual needs; the present application is not limited in this respect.
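Since the patent leaves the load-balancing policy open, one concrete choice consistent with FIG. 2 is "least load": the balancer keeps the most recently reported status of each application server and dispatches each request to the server with the smallest load value. The function and server names below are illustrative assumptions.

```python
def pick_server(status):
    """Return the application server with the smallest reported load value."""
    return min(status, key=status.get)

# periodically reported status information, as in the patent's status lists
status = {"app-1": 0.72, "app-2": 0.31, "app-3": 0.55}
assert pick_server(status) == "app-2"

# after dispatching, the chosen server's reported load rises and the choice shifts
status["app-2"] = 0.90
assert pick_server(status) == "app-3"
```

The policy only works as well as the status list is fresh, which is why the surrounding embodiments have every server re-collect and re-broadcast its status on a timer.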
  • In an embodiment, each load balancing server periodically collects its own status information and periodically sends it to the other load balancing servers.
  • Each resolution server is preconfigured with a load threshold: if its load exceeds the threshold, then on receiving a request from the client it selects another application server of the same group to handle it; if its load is below the threshold, it directly processes
  • the client's request itself, begins parsing the request sent by the client, re-collects its own status information, updates its list, and sends the update to the other resolution servers, which update their lists on receiving the status information. If the current load exceeds the threshold, the server consults the list of resolution-server load conditions it maintains, first selects the servers that satisfy the condition, such as resolution servers whose load is below the set threshold, and then chooses one resolution server among all those that satisfy the condition by the roulette-wheel method.
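The "roulette wheel" selection mentioned here weights each eligible server inversely to its load: a lightly loaded server is more likely, but not certain, to be picked. A sketch follows; the threshold value and the spare-capacity weighting `1 - load` are our illustrative assumptions, not specified by the patent.

```python
import random

def roulette_pick(loads, threshold=0.8, rng=random):
    """Pick among servers whose load is below `threshold`, with probability
    proportional to spare capacity (1 - load)."""
    eligible = {s: 1.0 - load for s, load in loads.items() if load < threshold}
    if not eligible:
        return None  # every server is overloaded
    total = sum(eligible.values())
    r = rng.uniform(0, total)  # spin the wheel
    acc = 0.0
    for server, weight in eligible.items():
        acc += weight
        if r <= acc:
            return server
    return server  # guard against floating-point rounding at the boundary

loads = {"app-1": 0.2, "app-2": 0.6, "app-3": 0.9}  # app-3 exceeds the threshold
picks = [roulette_pick(loads) for _ in range(1000)]
assert "app-3" not in picks                          # overloaded server never chosen
assert picks.count("app-1") > picks.count("app-2")   # lower load, picked more often
```

Unlike strict least-load dispatch, the randomization avoids stampeding every redirected request onto the single least-loaded server between status updates.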
  • In an embodiment, each application server periodically collects its own status information and periodically sends it to all load balancing servers.
  • The resolution server assigned to a client resolves the client's request and selects an application server with a lower load for the client.
  • In an embodiment, in step S201, each application server periodically collects its own status information and periodically sends it to at least one load balancing server of the load balancing server group; in step S202, that load balancing server then periodically sends the status information of all application servers to the other load balancing servers.
  • The resolution server consults the list of application server status information it maintains, selects the application servers that satisfy the condition, for example application servers whose load is below the set threshold, and makes a decision among all the satisfying application servers by some method to select the final application server, such as the roulette-wheel decision method, under which the lower an application server's load, the more likely it is to be selected.
  • A high-concurrency service request processing apparatus includes:
  • a receiving unit configured to receive service requests sent by multiple clients;
  • an allocating unit configured to have a load balancing server distribute the service requests according to status information, the load balancing server selecting a suitable application server and assigning each request to one of the application servers responsible for service processing; and
  • a returning unit configured to, if service data already exists in the application server, couple the currently pending service request with the service data, extract the service data in first-in-first-out order to couple it with the client's service request, and return the service data through the service interface to the client that sent the pending service request; and, if there is no service data in the application server, return a no-service-data message to the client that sent the currently pending service request.
  • The apparatus may further include a rate-limiting unit configured to rate-limit the service requests and to acquire pre-stored global state data from a preset storage area, where the global state data is used to represent the global state of the service
  • and is obtained by the server processing the service requests that are not rate-limited; the service requests that are not rate-limited are sent to the application server.
  • The allocating unit is further configured such that, when there are multiple application servers, the service interface operates according to load balancing: when the load balancing server matches the Address Resolution Protocol request, the load balancing server feeds its physical address back to the client, the client sends the service request to the load balancing server according to that physical address, and the load balancing server dispatches the service request
  • to the node server with the smallest load value among the node servers connected to the load balancing server for processing.
  • The allocating unit is further configured such that each load balancing server periodically collects its own status information and periodically sends it to the other load balancing servers.
  • The allocating unit is further configured such that each application server periodically collects its own status information and periodically sends it to all load balancing servers.
  • The allocating unit is further configured such that each application server periodically collects its own status information and periodically sends it to at least one load balancing server of the load balancing server group; that load balancing server then periodically sends the status information of all application servers to the other load balancing servers.
  • A computer device includes a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the steps of the high-concurrency service request processing method in the embodiments described above.
  • A storage medium stores computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the high-concurrency service request processing method in the embodiments described above.
  • The storage medium may be a non-volatile storage medium.
  • The program may be stored in a computer-readable storage medium, and the storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer And Data Communications (AREA)

Abstract

The present application relates to a high-concurrency service request processing method and apparatus, a computer device, and a storage medium. The method includes: receiving service requests sent by multiple clients; having a load balancing server distribute the service requests according to status information; if service data already exists in the application server, coupling the currently pending service request with the service data, extracting the service data in first-in-first-out order to couple it with the client's service request, and returning the service data through the service interface to the client that sent the pending service request; and, if there is currently no service data in the application server, returning a no-service-data message to the client that sent the currently pending service request. The above method improves the access efficiency of high-concurrency request threads, reduces the loss of request data, and ensures the integrity of the service data request process.

Description

High-Concurrency Service Request Processing Method and Apparatus, Computer Device, and Storage Medium
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on May 22, 2018, with application number 201810363678.4 and invention title "High-Concurrency Service Request Processing Method and Apparatus, Computer Device, and Storage Medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of Internet technologies, and in particular to a high-concurrency service request processing method and apparatus, a computer device, and a storage medium.
Background
High-concurrency request threads include requests from customers to a commercial organization, such as purchasing goods or contacting customer service, as well as requests from customer-service staff contacting customers for after-sales processing or follow-up visits. When many such requests are sent simultaneously, the connection lines often become congested and access speed drops sharply. For example, in online shopping on e-commerce platforms, especially during large-scale promotions such as "Double Eleven" and "Double Twelve" or flash sales of best-selling goods, many commercial activities such as flash-sale products, flash-sale red envelopes, and flash-sale lotteries have emerged. These activities are usually carried out within a short period of time and generate a large volume of traffic; they therefore have the characteristics of high-concurrency services and place enormous load pressure on the service provider's network servers, such as application servers and databases.
Because of the explosive growth in the number of customer requests, request lines are highly concurrent and systems respond slowly, which has become a major problem in the commercial field. Many companies have gradually adopted various means of improving interaction efficiency for the requests sent between customer-service staff and customers in their own fields. Systems generally issue high-concurrency asynchronous thread requests simultaneously and improve the efficiency of high-concurrency data requests through a large number of verification checks and caching techniques.
The main defects of the prior art in handling high-concurrency requests for customer information are that the verification and caching procedures are cumbersome, which slows down the processing efficiency of high-concurrency requests, and that the request access lines are poor, which easily leads to packet loss.
Summary
In view of this, it is necessary to provide, addressing the deficiencies of the prior art, a high-concurrency service request processing method and apparatus, a computer device, and a storage medium.
A high-concurrency service request processing method includes:
receiving service requests sent by multiple clients;
having a load balancing server distribute the service requests according to status information, the load balancing server selecting a suitable application server and assigning each request to one of the application servers responsible for service processing; and
if service data already exists in the application server, coupling the currently pending service request with the service data, extracting the service data in first-in-first-out order to couple it with the client's service request, and returning the service data through the service interface to the client that sent the pending service request; or, if there is currently no service data in the application server, returning a no-service-data message to the client that sent the currently pending service request.
A high-concurrency service request processing apparatus includes:
a receiving unit configured to receive service requests sent by multiple clients;
an allocating unit configured to have a load balancing server distribute the service requests according to status information, the load balancing server selecting a suitable application server and assigning each request to one of the application servers responsible for service processing; and
a returning unit configured to, if service data already exists in the application server, couple the currently pending service request with the service data, extract the service data in first-in-first-out order to couple it with the client's service request, and return the service data through the service interface to the client that sent the pending service request; and, if there is currently no service data in the application server, return a no-service-data message to the client that sent the currently pending service request.
A computer device includes a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the following steps:
receiving service requests sent by multiple clients;
having a load balancing server distribute the service requests according to status information, the load balancing server selecting a suitable application server and assigning each request to one of the application servers responsible for service processing; and
if service data already exists in the application server, coupling the currently pending service request with the service data, extracting the service data in first-in-first-out order to couple it with the client's service request, and returning the service data through the service interface to the client that sent the pending service request; or, if there is currently no service data in the application server, returning a no-service-data message to the client that sent the currently pending service request.
A storage medium stores computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
receiving service requests sent by multiple clients;
having a load balancing server distribute the service requests according to status information, the load balancing server selecting a suitable application server and assigning each request to one of the application servers responsible for service processing; and
if service data already exists in the application server, coupling the currently pending service request with the service data, extracting the service data in first-in-first-out order to couple it with the client's service request, and returning the service data through the service interface to the client that sent the pending service request; or, if there is currently no service data in the application server, returning a no-service-data message to the client that sent the currently pending service request.
In the above high-concurrency service request processing method and apparatus, computer device, and storage medium, service requests sent by multiple clients are received and distributed by the load balancing server according to status information, the load balancing server selecting a suitable application server and assigning each request to one of the application servers responsible for service processing. When there are multiple application servers, the service interface distributes the received service requests to the application servers through the load balancing server according to load balancing: each load balancing server periodically collects its own status information and periodically sends it to the other load balancing servers; each application server periodically collects its own status information and periodically sends it to all load balancing servers; or each application server periodically collects its own status information and periodically sends it to at least one load balancing server of the load balancing server group, and that load balancing server then periodically sends the status information of all application servers to the other load balancing servers. If service data already exists in the application server, the currently pending service request is coupled with the service data: the service data is extracted in first-in-first-out order, coupled with the client's service request, and returned through the service interface to the client that sent the pending service request. If there is currently no service data in the application server, a no-service-data message is returned to the client that sent the currently pending service request. This improves the access efficiency of high-concurrency request threads, reduces the loss of request data, and ensures the integrity of the service data request process.
Brief Description of the Drawings
Various other advantages and benefits will become clear to those of ordinary skill in the art from reading the following detailed description of the preferred embodiments. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered limiting of the present application.
FIG. 1 is a flowchart of a high-concurrency service request processing method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of a load balancer processing method in an embodiment of the present application;
FIG. 3 is a flowchart of a load balancing server processing method in an embodiment of the present application;
FIG. 4 is a structural block diagram of a high-concurrency service request processing apparatus in an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present application and are not intended to limit it.
Those skilled in the art will understand that, unless expressly stated otherwise, the singular forms "a", "an", "the", and "said" used here may also include the plural forms. It should be further understood that the word "comprising" used in the specification of the present application refers to the presence of the stated features, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As a preferred embodiment, as shown in FIG. 1, a high-concurrency service request processing method includes the following steps:
Step S101: receive service requests sent by multiple clients.
The service requests sent by different clients here may request processing of the same service or of different services. The gateway can concurrently receive service requests sent by multiple clients at the same time. It will be understood that "concurrently receiving the service requests of multiple clients" here may mean receiving the service requests of multiple clients within the same time interval; for example, if 1 million service requests are received within 1 second, this can be regarded as concurrently receiving 1 million service requests within 1 second.
Step S102: assign each service request to one of the application servers responsible for service processing.
When only one application server is included, the received high-concurrency service requests are allocated directly to that application server. When there are multiple application servers, the service interface distributes the received service requests to the application servers through the load balancing server according to load balancing. When the load balancing server matches the Address Resolution Protocol request, the load balancing server feeds its physical address back to the client; the client sends the service request to the load balancing server according to the load balancing server's physical address, and the load balancing server dispatches the service request to the node server with the smallest load value among the node servers connected to the load balancing server for processing.
Step S103: if service data already exists in the application server, couple the currently pending service request with the service data, extracting the service data in first-in-first-out order to couple it with the client's service request, and return the service data through the service interface to the client that sent the pending service request; if there is currently no service data in the application server, return a no-service-data message to the client that sent the currently pending service request.
After a service request is received, an application server is selected to process it; the pending service request may be, for example, a prize redemption, a red envelope, or a flash-sale lottery. If the application server holds service data, that service data can be coupled with the service request sent by the client, the service data being taken out in first-in-first-out order and coupled with the client's service request. For example, if the client sends a flash-sale lottery service request and the application server currently holds prize data, the prize data is coupled with the client's flash-sale lottery request and a winning message is returned to the client; if the application server currently holds no prize data, a non-winning message is returned to the client. Likewise, the application server needs to store the client's winning information in a database or cache for subsequent operations. If the application server holds multiple items of prize data, they may, for example, be cached in a queue; when coupling with the clients' service requests, the prize data may, for example, be taken out in first-in-first-out order. The service data is then returned to the client through the service interface. The service data may be, for example, prize data, red-envelope data, or flash-sale lottery data.
If the application server currently holds no service data, a no-service-data message is returned to the client through the service interface, i.e., the service request fails. In addition, to facilitate subsequent operations, the application server also needs to store, in a database or cache, the information of the client that obtained the service data together with the service data information. In some embodiments, a conventional network storage system may be used, employing a centralized storage server to store all data. In other embodiments, a distributed memory cache may be used, that is, the data is distributed and stored on multiple independent devices; the distributed memory cache adopts a scalable system structure, using multiple storage servers to share the storage load and a location server to locate the stored information, common examples including the Redis cache and the Memcached cache.
In an embodiment, the method further includes: rate-limiting the service requests, and acquiring pre-stored global state data from a preset storage area, where the global state data is used to represent the global state of the service and is obtained by the server processing the service requests that are not rate-limited; the service requests that are not rate-limited are sent to the application server.
When the service requests to the application server are rate-limited by a throttling measure, the load pressure of high-concurrency services is further dispersed. In other embodiments, the requests sent by the clients may also be received through a gateway whose service request carrying capacity is greater than that of the application server; for example, if the service request carrying capacity of the application server is 1 million service requests per second, a gateway carrying 10 million service requests per second may be selected. In still other embodiments, cached data may also be configured to further disperse the load pressure of high-concurrency services. A conventional network storage system may use a centralized storage server to store all data, or a distributed memory cache may be used, storing the data on multiple independent devices; the distributed memory cache adopts a scalable system structure, using multiple storage servers to share the storage load and a location server to locate the stored information, common examples including the Redis cache and the Memcached cache.
In an embodiment, when there are multiple application servers, the service interface distributes the received service requests to the application servers through the load balancing server according to load balancing.
As shown in FIG. 2, the load balancing server (Load-Balancing Server, LBS) uses load-balancing technology to distribute the high-concurrency service requests among the application servers; for example, the load balancing server distributes the service requests sent from user PCs to application servers 1 to 3. The service interface determines, according to the service processing status of each application server, to which application server a given high-concurrency service request is allocated. It should be noted that the specific load-balancing technology used may be selected according to actual needs; the present application is not limited in this respect. Distributing the high-concurrency service requests evenly among the application servers ensures the high-concurrency response speed of the whole system.
In an embodiment, each load balancing server periodically collects its own status information and periodically sends it to the other load balancing servers.
Each resolution server is preconfigured with a load threshold: if its load exceeds the threshold, then on receiving a client's request it selects another application server of the same group to handle it; if its load is below the threshold, it processes the client's request directly itself, begins parsing the request sent by the client, re-collects its own status information, updates its list, and sends the update to the other resolution servers, which update their lists on receiving the status information. If the current load exceeds the threshold, the server consults the list of resolution-server load conditions it maintains, first selects the servers that satisfy the condition, such as resolution servers whose load is below the set threshold, and then chooses one resolution server among all those that satisfy the condition by the roulette-wheel method. In the roulette-wheel method, the lower a server's load, the more likely it is to be selected, but the server with the lowest load is not guaranteed to be chosen. The roulette-wheel decision algorithm is prior art and is not described further here. After the current resolution server selects a resolution server St, it returns St's path to the client; the client re-sends the request message to St with a flag marking it as a redirected message, so that St executes the request directly without performing load redirection again.
In an embodiment, each application server periodically collects its own status information and periodically sends it to all load balancing servers.
The resolution server assigned to a client resolves the client's request and selects an application server with a lower load for the client.
As shown in FIG. 3, in an embodiment: in step S201, each application server periodically collects its own status information and periodically sends it to at least one load balancing server of the load balancing server group; in step S202, that load balancing server then periodically sends the status information of all application servers to the other load balancing servers.
The resolution server consults the list of application server status information it maintains, selects the application servers that satisfy the condition, such as application servers whose load is below the set threshold, makes a decision among all the satisfying application servers by some method, and selects the final application server, for example by the roulette-wheel decision method, under which the lower an application server's load, the more likely it is to be selected.
As shown in FIG. 4, in an embodiment, a high-concurrency service request processing apparatus is provided, including:
a receiving unit configured to receive service requests sent by multiple clients;
an allocating unit configured to have a load balancing server distribute the service requests according to status information, the load balancing server selecting a suitable application server and assigning each request to one of the application servers responsible for service processing; and
a returning unit configured to, if service data already exists in the application server, couple the currently pending service request with the service data, extract the service data in first-in-first-out order to couple it with the client's service request, and return the service data through the service interface to the client that sent the pending service request; and, if there is currently no service data in the application server, return a no-service-data message to the client that sent the currently pending service request.
In an embodiment, the apparatus further includes a rate-limiting unit configured to rate-limit the service requests and to acquire pre-stored global state data from a preset storage area, where the global state data is used to represent the global state of the service and is obtained by the server processing the service requests that are not rate-limited; the service requests that are not rate-limited are sent to the application server.
In an embodiment, the allocating unit is further configured such that, when there are multiple application servers, the service interface operates according to load balancing: when the load balancing server matches the Address Resolution Protocol request, the load balancing server feeds its physical address back to the client, the client sends the service request to the load balancing server according to the load balancing server's physical address, and the load balancing server dispatches the service request to the node server with the smallest load value among the node servers connected to the load balancing server for processing.
In an embodiment, the allocating unit is further configured such that each load balancing server periodically collects its own status information and periodically sends it to the other load balancing servers.
In an embodiment, the allocating unit is further configured such that each application server periodically collects its own status information and periodically sends it to all load balancing servers.
In an embodiment, the allocating unit is further configured such that each application server periodically collects its own status information and periodically sends it to at least one load balancing server of the load balancing server group; that load balancing server then periodically sends the status information of all application servers to the other load balancing servers.
In an embodiment, a computer device is provided, including a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the steps of the high-concurrency service request processing method in the embodiments described above.
In an embodiment, a storage medium storing computer-readable instructions is provided; when the computer-readable instructions are executed by one or more processors, the one or more processors perform the steps of the high-concurrency service request processing method in the embodiments described above. The storage medium may be a non-volatile storage medium.
Those of ordinary skill in the art will understand that all or some of the steps of the methods of the above embodiments may be completed by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features of the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered within the scope of this specification.
The above embodiments express only some exemplary implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the patent scope of the present application. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent application shall be subject to the appended claims.

Claims (20)

  1. A high-concurrency service request processing method, comprising:
    receiving service requests sent by multiple clients;
    having a load balancing server distribute the service requests according to status information, the load balancing server selecting a suitable application server and assigning each request to one of the application servers responsible for service processing; and
    if service data already exists in the application server, coupling the currently pending service request with the service data, extracting the service data in first-in-first-out order to couple it with the client's service request, and returning the service data through the service interface to the client that sent the pending service request; or, if there is currently no service data in the application server, returning a no-service-data message to the client that sent the currently pending service request.
  2. The high-concurrency service request processing method according to claim 1, further comprising: rate-limiting the service requests, and acquiring pre-stored global state data from a preset storage area, wherein the global state data is used to represent the global state of the service and is obtained by the server processing the service requests that are not rate-limited; and sending the service requests that are not rate-limited to the application server.
  3. The high-concurrency service request processing method according to claim 1, wherein having the load balancing server distribute the service requests according to status information comprises:
    when there are multiple application servers, distributing, by the service interface according to load balancing, the received service requests to the application servers through the load balancing server; and, when the load balancing server matches the Address Resolution Protocol request, feeding back, by the load balancing server, its physical address to the client, sending, by the client, the service request to the load balancing server according to the load balancing server's physical address, and dispatching, by the load balancing server, the service request to the node server with the smallest load value among the node servers connected to the load balancing server for processing.
  4. The high-concurrency service request processing method according to claim 3, wherein each load balancing server periodically collects its own status information and periodically sends it to the other load balancing servers.
  5. The high-concurrency service request processing method according to claim 4, wherein each application server periodically collects its own status information and periodically sends it to all load balancing servers.
  6. The high-concurrency service request processing method according to claim 4, wherein each application server periodically collects its own status information and periodically sends it to at least one load balancing server of the load balancing server group; and the load balancing server then periodically sends the status information of all application servers to the other load balancing servers.
  7. A high-concurrency service request processing apparatus, comprising:
    a receiving unit configured to receive service requests sent by multiple clients;
    an allocation unit configured to allocate, by a load balancing server, the service requests according to status information, the load balancing server selecting a suitable application server and allocating each service request to one of the application servers responsible for service processing; and
    a returning unit configured to: if service data already exists on the application server, couple the currently pending service request with the service data, the service data being taken out in first-in-first-out order and coupled with the client's service request, and return the service data through a service interface to the client that sent the pending service request; or, if there is currently no service data on the application server, return a no-service-data message to the client that sent the currently pending service request.
  8. The high-concurrency service request processing apparatus according to claim 7, further comprising:
    a throttling unit configured to throttle the service requests, to acquire global state data pre-stored in a preset storage area, the global state data characterizing the global state of the service and being obtained by the server processing service requests that were not throttled, and to send those of the service requests that are not throttled to the application server.
  9. The high-concurrency service request processing apparatus according to claim 7, wherein the allocation unit is further configured such that, when there are multiple application servers, the service interface performs load balancing; when the load balancing server matches the Address Resolution Protocol (ARP) request, the load balancing server feeds its physical address back to the client; the client sends the service request to the load balancing server according to that physical address; and the load balancing server dispatches the service request to the node server, among those connected to it, with the smallest load value for processing.
  10. The high-concurrency service request processing apparatus according to claim 9, wherein the allocation unit is further configured such that each load balancing server periodically collects its own status information and periodically sends that status information to the other load balancing servers.
  11. The high-concurrency service request processing apparatus according to claim 10, wherein the allocation unit is further configured such that each application server periodically collects its own status information and periodically sends that status information to all load balancing servers.
  12. The high-concurrency service request processing apparatus according to claim 10, wherein the allocation unit is further configured such that each application server periodically collects its own status information and periodically sends it to at least one load balancing server in the load balancing server group; that load balancing server then periodically sends the status information of all application servers to the other load balancing servers.
  13. A computer device, comprising a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the following steps:
    receiving service requests sent by multiple clients;
    allocating, by a load balancing server, the service requests according to status information, the load balancing server selecting a suitable application server and allocating each service request to one of the application servers responsible for service processing; and
    if service data already exists on the application server, coupling the currently pending service request with the service data, the service data being taken out in first-in-first-out order and coupled with the client's service request, and returning the service data through a service interface to the client that sent the pending service request; or, if there is currently no service data on the application server, returning a no-service-data message to the client that sent the currently pending service request.
  14. The computer device according to claim 13, wherein the computer-readable instructions, when executed by the processor, further cause the processor to perform the following steps:
    throttling the service requests, and acquiring global state data pre-stored in a preset storage area, the global state data characterizing the global state of the service and being obtained by the server processing service requests that were not throttled; and sending those of the service requests that are not throttled to the application server.
  15. The computer device according to claim 13, wherein, when the service requests are allocated by the load balancing server according to status information, the processor is caused to perform the following steps:
    when there are multiple application servers, the service interface performing load balancing and distributing the received service requests to the application servers through the load balancing server; when the load balancing server matches an Address Resolution Protocol (ARP) request, the load balancing server feeding its physical address back to the client; the client sending the service request to the load balancing server according to that physical address; and the load balancing server dispatching the service request to the node server, among those connected to it, with the smallest load value for processing.
  16. The computer device according to claim 15, wherein the computer-readable instructions, when executed by the processor, further cause the processor to perform the following step:
    each load balancing server periodically collecting its own status information and periodically sending that status information to the other load balancing servers.
  17. A storage medium storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
    receiving service requests sent by multiple clients;
    allocating, by a load balancing server, the service requests according to status information, the load balancing server selecting a suitable application server and allocating each service request to one of the application servers responsible for service processing; and
    if service data already exists on the application server, coupling the currently pending service request with the service data, the service data being taken out in first-in-first-out order and coupled with the client's service request, and returning the service data through a service interface to the client that sent the pending service request; or, if there is currently no service data on the application server, returning a no-service-data message to the client that sent the currently pending service request.
  18. The storage medium according to claim 17, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the following steps:
    throttling the service requests, and acquiring global state data pre-stored in a preset storage area, the global state data characterizing the global state of the service and being obtained by the server processing service requests that were not throttled; and sending those of the service requests that are not throttled to the application server.
  19. The storage medium according to claim 17, wherein, when the service requests are allocated by the load balancing server according to status information, the one or more processors are caused to perform the following steps:
    when there are multiple application servers, the service interface performing load balancing and distributing the received service requests to the application servers through the load balancing server; when the load balancing server matches an Address Resolution Protocol (ARP) request, the load balancing server feeding its physical address back to the client; the client sending the service request to the load balancing server according to that physical address; and the load balancing server dispatching the service request to the node server, among those connected to it, with the smallest load value for processing.
  20. The storage medium according to claim 19, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the following step:
    each load balancing server periodically collecting its own status information and periodically sending that status information to the other load balancing servers.
PCT/CN2018/104151 2018-04-22 2018-09-05 High-concurrency service request processing method and apparatus, computer device and storage medium WO2019205406A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810363678.4A CN108881368A (zh) 2018-04-22 2018-04-22 High-concurrency service request processing method and apparatus, computer device and storage medium
CN201810363678.4 2018-04-22

Publications (1)

Publication Number Publication Date
WO2019205406A1 true WO2019205406A1 (zh) 2019-10-31

Family

ID=64326865

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/104151 WO2019205406A1 (zh) 2018-04-22 2018-09-05 High-concurrency service request processing method and apparatus, computer device and storage medium

Country Status (2)

Country Link
CN (1) CN108881368A (zh)
WO (1) WO2019205406A1 (zh)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111258768B * 2018-11-30 2023-09-19 中国移动通信集团湖南有限公司 Concurrent processing method and apparatus for promotional lottery activities
CN109743303B * 2018-12-25 2021-10-01 中国移动通信集团江苏有限公司 Application protection method, apparatus, system, and storage medium
CN109976920A * 2019-02-20 2019-07-05 深圳点猫科技有限公司 Method and apparatus for implementing concurrency control in an educational operating system
CN110086881A * 2019-05-07 2019-08-02 网易(杭州)网络有限公司 Service processing method, apparatus, and device
CN110503484A * 2019-08-27 2019-11-26 中国工商银行股份有限公司 Distributed-cache-based electronic coupon data matching method and apparatus
CN111147916A * 2019-12-31 2020-05-12 北京比利信息技术有限公司 Cross-platform service system, method, device, and storage medium
CN113535675A * 2020-12-04 2021-10-22 高慧军 Big-data-based data maintenance method and big data server
CN112751945B * 2021-04-02 2021-08-06 人民法院信息技术服务中心 Method, apparatus, device, and storage medium for implementing distributed cloud services
CN113992685B * 2021-10-26 2023-09-22 新华三信息安全技术有限公司 Service controller determination method, system, and apparatus

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008009185A1 * 2006-07-12 2008-01-24 Huawei Technologies Co., Ltd. Method, system and system server side for path computation element discovery
CN103220354A * 2013-04-18 2013-07-24 广东宜通世纪科技股份有限公司 Method for implementing load balancing in a server cluster
CN105072182A * 2015-08-10 2015-11-18 北京佳讯飞鸿电气股份有限公司 Load balancing method, load balancer, and user terminal
CN105791370A * 2014-12-26 2016-07-20 华为技术有限公司 Data processing method and related server
CN107277088A * 2016-04-06 2017-10-20 泰康保险集团股份有限公司 High-concurrency service request processing system and method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101827013A * 2009-03-05 2010-09-08 华为技术有限公司 Method, apparatus, and system for multi-gateway load balancing
CN102143046B * 2010-08-25 2015-03-11 华为技术有限公司 Load balancing method, device, and system
US9621509B2 (en) * 2014-05-06 2017-04-11 Citrix Systems, Inc. Systems and methods for achieving multiple tenancy using virtual media access control (VMAC) addresses
CN107528885B * 2017-07-17 2021-01-26 创新先进技术有限公司 Service request processing method and apparatus

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112422450A * 2020-05-09 2021-02-26 上海哔哩哔哩科技有限公司 Computer device, and method and apparatus for flow control of service requests
CN113268360A * 2021-05-14 2021-08-17 北京三快在线科技有限公司 Request processing method and apparatus, server, and storage medium
CN114095574A * 2022-01-20 2022-02-25 恒生电子股份有限公司 Data processing method and apparatus, electronic device, and storage medium
CN114244902A * 2022-02-28 2022-03-25 北京金堤科技有限公司 High-concurrency service request processing method and apparatus, electronic device, and storage medium
CN114244902B (zh) * 2022-02-28 2022-05-17 北京金堤科技有限公司 High-concurrency service request processing method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN108881368A (zh) 2018-11-23

Similar Documents

Publication Publication Date Title
WO2019205406A1 (zh) High-concurrency service request processing method and apparatus, computer device and storage medium
US10411956B2 (en) Enabling planned upgrade/downgrade of network devices without impacting network sessions
CN101150421B (zh) Distributed content distribution method, edge server, and content distribution network
US20200296185A1 (en) Service request management
CN103442030B (zh) Method and system for sending and processing service request information, and client device
US8898313B2 (en) Relay devices cooperating to distribute a message from same terminal to same server while session continues
US20090327079A1 (en) System and method for a delivery network architecture
US8732258B2 (en) Method and system for transporting telemetry data across a network
US20110040892A1 (en) Load balancing apparatus and load balancing method
CN103108008B (zh) File downloading method and file downloading system
JP4398354B2 (ja) Relay system
CN108259603B (zh) Load balancing method and apparatus
CN103338252A (zh) Virtual request mechanism for concurrent storage in a distributed database
CN109691035A (zh) Multi-speed message channels of a messaging system
CN104202386B (zh) High-concurrency distributed file system and secondary load balancing method therefor
US20140025838A1 (en) System and method of streaming data over a distributed infrastructure
JP5620881B2 (ja) Transaction processing system, transaction processing method, and transaction processing program
US9736082B2 (en) Intelligent high-volume cloud application programming interface request caching
US10862956B2 (en) Client-server communication
CN110365749B (zh) Message pushing method, message pushing system, and storage medium
US7647401B1 (en) System and method for managing resources of a network load balancer via use of a presence server
CN114500418B (zh) Data statistics method and related apparatus
KR20200080416A (ko) Data processing apparatus for mobile edge computing
CN107277088B (zh) High-concurrency service request processing system and method
CN112449012B (zh) Data resource scheduling method, system, server, and readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18916618

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18916618

Country of ref document: EP

Kind code of ref document: A1