WO2019052225A1 - Open platform control method and system, computer device, and storage medium - Google Patents

Open platform control method and system, computer device, and storage medium

Info

Publication number
WO2019052225A1
WO2019052225A1 (PCT/CN2018/088835; CN2018088835W)
Authority
WO
WIPO (PCT)
Prior art keywords
resource
execution server
server
identifier
execution
Prior art date
Application number
PCT/CN2018/088835
Other languages
French (fr)
Chinese (zh)
Inventor
汪小波
Original Assignee
平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2019052225A1 publication Critical patent/WO2019052225A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 Server selection for load balancing
    • H04L 67/1023 Server selection for load balancing based on a hash applied to IP addresses or costs
    • H04L 67/50 Network services
    • H04L 67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources

Definitions

  • the application relates to an open platform control method, system, computer device and storage medium.
  • An open platform (Open Platform) is a software system that allows external programs to extend or use its functionality by exposing its Application Programming Interface (API) or functions, so that callers can use the resources of the software system without changing its source code.
  • A heavily used open platform may bear tens of millions of calls per day.
  • Traditional open platform control methods control the invocation of resources by setting up queues.
  • Under such methods, the resource pool of the open platform can be abnormally occupied by some caller systems, which disrupts normal resource calls from other caller systems and thereby reduces the stability of the open platform.
  • an open platform control method, system, computer device, and storage medium are provided.
  • An open platform control method includes:
  • the load balancing server receives a resource invocation request initiated by the caller, where the resource invocation request carries the service provider resource call interface name;
  • the load balancing server allocates the resource invoking request to the execution server according to a preset load balancing algorithm
  • the execution server searches for a service party identifier corresponding to the service provider resource call interface name
  • the execution server counts the number of concurrent task threads corresponding to the service party identifier;
  • if the number of concurrent task threads corresponding to the service party identifier is less than a first preset maximum number of concurrent task threads corresponding to the service party identifier, the execution server creates a task thread corresponding to the resource call request, and executes the resource call request initiated by the caller by using the task thread; and
  • the execution server returns an execution result of the resource call request to the caller.
  • An open platform control system comprising:
  • a load balancing server configured to receive a resource invocation request initiated by the caller, where the resource invocation request carries a service provider resource call interface name, and to allocate the resource invocation request to the execution server according to a preset load balancing algorithm; and
  • an execution server configured to receive the resource call request allocated by the load balancing server, search for a service party identifier corresponding to the service provider resource call interface name, and count the number of concurrent task threads corresponding to the service party identifier; if the number of concurrent task threads corresponding to the service party identifier is less than the first preset maximum number of concurrent task threads corresponding to the service party identifier, the execution server creates a task thread corresponding to the resource call request, executes the resource invocation request initiated by the caller by using the task thread, and returns the execution result of the resource invocation request to the caller.
  • a computer apparatus comprising a memory and one or more processors, the memory having stored therein computer readable instructions that, when executed by the one or more processors, implement the steps of the open platform control method provided in any one of the embodiments of the present application.
  • One or more non-volatile storage media storing computer readable instructions that, when executed by one or more processors, cause the one or more processors to implement the steps of the open platform control method provided in any one of the embodiments of the present application.
  • FIG. 1 is an application environment diagram of an open platform control method in accordance with one or more embodiments.
  • FIG. 2 is a flow diagram of an open platform control method in accordance with one or more embodiments.
  • FIG. 3 is a flow diagram of a resource invocation request allocation method in accordance with one or more embodiments.
  • FIG. 4 is a flow chart of a hotspot resource call request control method in another embodiment.
  • FIG. 5 is a flow diagram of a resource invocation request execution method in accordance with one or more embodiments.
  • FIG. 6 is a block diagram of an open platform control system in accordance with one or more embodiments.
  • FIG. 7 is a block diagram of a computer device in accordance with one or more embodiments.
  • the open platform control method provided by the present application can be applied to an application environment as shown in FIG. 1.
  • the load balancing server 102 communicates with the caller 104 over the network, the caller 104 communicates with the execution server 106 over the network, and the load balancing server 102 communicates with the execution server 106 over the network.
  • the load balancing server 102 receives the resource invocation request initiated by the caller 104, and allocates a resource invocation request to the execution server 106 according to the preset load balancing algorithm, so that the execution server 106 executes the resource invocation request, and returns the execution result to the calling party 104.
  • In detail, the load balancing server 102 receives the resource invocation request initiated by the caller 104, the resource invocation request carrying the service provider resource call interface name, and allocates the resource invocation request to the execution server 106 according to the preset load balancing algorithm.
  • The execution server 106 searches for the service party identifier corresponding to the service provider resource call interface name and counts the number of concurrent task threads corresponding to the service party identifier; if that number is less than the first preset maximum number of concurrent task threads corresponding to the service party identifier, the execution server 106 creates a task thread corresponding to the resource call request, executes the resource invocation request initiated by the caller 104 with the task thread, and returns the execution result to the caller 104.
  • the load balancing server 102 and the execution server 106 can be implemented by a separate server or a server cluster composed of a plurality of servers.
  • an open platform control method including the following steps:
  • Step 202 The load balancing server receives the resource invocation request initiated by the caller, and the resource invocation request carries the service provider resource call interface name.
  • the resource call of the open platform involves the caller, the service party and the open platform.
  • the service party publishes the system interface on the open platform for the caller to call.
  • the back end of the open platform is a server cluster composed of several servers, including a load balancing server and an execution server.
  • Through the load balancing server, service requests are distributed evenly to the execution servers, thereby ensuring the response speed of the entire open platform.
  • the execution server is the server that actually performs the service request.
  • the load balancing server receives the caller-initiated resource call request so as to allocate it to an execution server; the resource call request carries the service provider resource call interface name, which uniquely identifies the service provider resource exposed by the open platform.
  • Step 204 The load balancing server allocates a resource invoking request to the execution server according to a preset load balancing algorithm.
  • the preset load balancing algorithm is a preset resource call request allocation algorithm, which may be a polling algorithm, a random algorithm, a session holding algorithm, or a weighting algorithm.
  • the polling algorithm assigns the received resource call requests to the execution servers in a preset allocation order, treating each execution server in a balanced manner.
  • the random algorithm randomly assigns the received resource invocation request to any execution server in the server cluster.
  • the session hold algorithm, also known as source address hashing, uses a hash of the request's source address for load balancing, so that requests from the same source are consistently routed to the same execution server.
  • the weight algorithm forms multiple load-balancing priority queues according to the priority or current load status (i.e., weight) of each execution server; each waiting connection within a queue has the same processing level, and between queues the equalization process is performed in order of priority.
  • the weight is an estimate based on the capabilities of each node.
  • the weight algorithm itself is an auxiliary algorithm that is used in conjunction with other algorithms. For example, combined with the polling algorithm it forms a weighted polling (weighted round-robin) algorithm: each execution server is assigned a weight coefficient in advance, and resource call requests are allocated according to the preset allocation order and the weight coefficients.
  • For example, suppose the preset allocation order is execution server C, execution server A, execution server B, with corresponding weight coefficients 1, 2, and 3; then requests are assigned in the order execution server C, execution server A, execution server A, execution server B, execution server B, execution server B.
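The weighted polling behaviour just described can be sketched as follows. This is an illustrative Python sketch, not part of the patent; the server names and weights come from the example above.

```python
from itertools import cycle

def weighted_round_robin(servers):
    """servers: list of (name, weight) pairs in the preset allocation order."""
    # Expand each server according to its weight coefficient, then cycle
    # through the expanded sequence indefinitely.
    expanded = [name for name, weight in servers for _ in range(weight)]
    return cycle(expanded)

# Weights 1, 2, 3 for servers C, A, B reproduce the order in the example.
allocator = weighted_round_robin([("C", 1), ("A", 2), ("B", 3)])
order = [next(allocator) for _ in range(6)]
print(order)  # ['C', 'A', 'A', 'B', 'B', 'B']
```

After the sixth assignment the cycle restarts from execution server C.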
  • Step 206 The execution server searches for a service party identifier corresponding to the service provider resource call interface name.
  • the service party identifier uniquely identifies the service party. It can be the service party's IP address, the service party's domain name, or a custom string.
  • Step 208 The execution server counts the number of concurrent task threads corresponding to the service party identifier.
  • the execution server uses a concurrent processing mechanism for resource call requests to improve the execution efficiency of the open platform system.
  • the execution server finds the corresponding service party identifier according to the received service provider resource call interface name, and then counts the number of concurrent task threads corresponding to the service party identifier in the current concurrent task thread to complete the subsequent operations.
  • Step 210 If the number of concurrent task threads corresponding to the service party identifier is less than the number of the first preset maximum concurrent task threads corresponding to the service party identifier, the execution server creates a task thread corresponding to the resource call request.
  • the first preset maximum number of concurrent task threads is a preset maximum value of concurrent task threads corresponding to the service party identifier.
  • the first preset maximum number of concurrent task threads takes different values for different service parties, according to differences in the service party's resource call volume and resource call frequency.
  • For example, suppose the service party corresponding to one resource call interface name carried by a resource call request is service party A; since service party A is historically called relatively frequently, its preset maximum number of concurrent task threads is set to 6 (out of a maximum of 10 concurrent task threads on the execution server). Suppose the service party corresponding to another resource call interface name is service party B; since service party B is called relatively rarely, its preset maximum number of concurrent task threads is set to 2.
  • By default, the preset maximum number of concurrent task threads applies to the single execution server that receives the resource invocation request.
  • Alternatively, the first preset maximum number of concurrent task threads may be the maximum number of concurrent task threads across all execution servers in the server cluster corresponding to the open platform system; correspondingly, in step 208 the execution server that receives the resource call request counts the number of concurrent task threads corresponding to the service party identifier across all execution servers in the cluster.
  • If the number of concurrent task threads corresponding to the service party identifier is less than the first preset maximum, the execution server that received the resource call request still has capacity to process requests for that service party, so it creates a new task thread that is used to execute the resource call request corresponding to the service provider resource call interface name.
  • Step 212 The execution server executes the resource invocation request initiated by the caller by using the task thread.
  • the created task thread is used to execute the caller initiated resource call request.
  • Step 214 The execution server returns the execution result of the resource call request to the caller.
  • After the execution server executes the caller-initiated resource call request by using the created task thread, the execution result is sent to the caller.
  • the open platform forwards the received resource call request to the execution server through the load balancing server, so that the execution server performs the resource call request, separates the allocation and execution functions of the resource call request, and improves the execution efficiency of the open platform.
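Steps 206 to 214 can be sketched as follows. This is an illustrative Python sketch, not part of the patent; the interface names, party identifiers, and per-party limits are invented for illustration.

```python
import threading

# Hypothetical mappings (all names invented for illustration):
INTERFACE_TO_PARTY = {"/pay/query": "party_A", "/user/info": "party_B"}
FIRST_PRESET_MAX = {"party_A": 6, "party_B": 2}  # per-party thread limits

concurrent = {"party_A": 0, "party_B": 0}        # step 208 counters
lock = threading.Lock()

def handle_request(interface_name, execute):
    """Steps 206-214: look up the party, enforce its limit, run the task."""
    party = INTERFACE_TO_PARTY[interface_name]   # step 206: find party id
    with lock:
        if concurrent[party] >= FIRST_PRESET_MAX[party]:
            return None                          # step 210: limit reached
        concurrent[party] += 1
    def task():
        try:
            execute()                            # step 212: run the call
        finally:
            with lock:
                concurrent[party] -= 1           # release the slot
    thread = threading.Thread(target=task)       # task thread per request
    thread.start()
    return thread
```

A request whose service party is already at its first preset maximum is rejected rather than queued, which is how the per-party limit keeps one caller from monopolizing the resource pool.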
  • the above step 204 includes:
  • Step 302 The load balancing server counts the number of current concurrent task threads of each execution server.
  • After receiving the resource invocation request initiated by the caller, the load balancing server counts the number of current concurrent task threads of each execution server in order to perform the subsequent resource call request allocation.
  • Step 304 If the number of current concurrent task threads of the execution server is less than the number of the second preset maximum concurrent task threads corresponding to the execution server, calculate the task thread availability rate of the execution server.
  • the second preset maximum number of concurrent task threads is a preset maximum number of concurrent task threads for a given execution server. For example, if execution server A performs well, the second preset maximum number of concurrent task threads corresponding to execution server A is set to 20; if execution server B is slightly weaker, the second preset maximum number corresponding to execution server B is set to 16.
  • the task thread availability rate is the ratio of the number of available task threads on an execution server to its preset maximum number of task threads. For example, if the number of current concurrent task threads of a certain execution server is 4 and the preset maximum number of task threads corresponding to that execution server is 10, then the number of currently available task threads is 6 and the task thread availability rate is 0.6.
  • if the number of current concurrent task threads of an execution server is less than its second preset maximum, the execution server has the capacity to process the resource call request; calculating its task thread availability rate then prepares for the subsequent selection of the execution server corresponding to the maximum task thread availability rate.
  • Step 306 Determine an execution server corresponding to the maximum task thread availability rate according to the task thread availability rate of each execution server.
  • the execution server corresponding to the maximum task thread availability rate is filtered out.
  • Step 308 Assign the resource call request to the execution server corresponding to the maximum task thread availability rate.
  • After the execution server with the maximum task thread availability rate is determined, the resource call request is assigned to that execution server.
  • In this method, an execution server set capable of processing the resource call request is determined by counting the number of current concurrent task threads of each execution server in the server cluster; the task thread availability rate of each execution server in the set is then calculated, and the resource call request is allocated to the execution server with the highest task thread availability rate, which improves the response speed of the open platform.
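The selection in steps 302 to 308 can be sketched as follows. This is illustrative Python, not part of the patent; the server names and thread counts are hypothetical, and the 4-of-10 case reproduces the 0.6 availability rate from the example above.

```python
def pick_server(servers):
    """servers: {name: (current_concurrent_threads, second_preset_maximum)}."""
    best, best_rate = None, -1.0
    for name, (current, maximum) in servers.items():
        if current >= maximum:                # step 304: no spare capacity
            continue
        rate = (maximum - current) / maximum  # available / preset maximum
        if rate > best_rate:                  # step 306: track the maximum
            best, best_rate = name, rate
    return best, best_rate                    # step 308: allocate to `best`

# 4 of 10 threads busy on server A gives the 0.6 rate from the example.
server, rate = pick_server({"A": (4, 10), "B": (12, 16), "C": (20, 20)})
print(server, rate)  # A 0.6
```

Server C is skipped entirely because it is already at its second preset maximum; if every server were full, the sketch would return no server, i.e. the request cannot currently be allocated.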
  • the method further includes:
  • Step 402 The execution server counts the number of resource call requests corresponding to the service party identifier among the resource call requests received within a preset time range.
  • In one embodiment, after receiving the resource call request, the execution server first counts the number of resource call requests corresponding to the service party identifier among the resource call requests received, within the preset time range, by all the execution servers corresponding to the open platform. For example, with a preset time range of 24 hours, the count of resource call requests received by all execution servers and corresponding to the same service party identifier as the current request may be 10000.
  • In another embodiment, after receiving the resource invocation request, the execution server first counts the number of resource invocation requests corresponding to the service party identifier among the resource invocation requests received by the execution server itself within the preset time range. For example, with a preset time range of one month, the execution server may count 500 resource call requests corresponding to the same service party identifier as the current request.
  • Step 404 If the number of resource invocation requests corresponding to the service party identifier is less than the maximum number of calls of the preset resource invocation request corresponding to the service provider identifier, the execution server searches for the service party identifier corresponding to the service provider resource invocation interface name.
  • the preset maximum number of calls is the preset maximum number of times resource call requests may be made, within the preset time range, for a given service party identifier. For example, the preset maximum number of calls may be 20000.
  • if the counted number of resource call requests corresponding to the service party identifier is less than this maximum, the execution server may currently process resource invocation requests corresponding to the service party identifier, and it continues by searching for the service party identifier corresponding to the service provider resource call interface name.
  • In this method, the number of resource call requests corresponding to the service party identifier among the requests received within the preset time range is counted and compared against the preset maximum number of calls corresponding to the service party identifier, which implements control of hotspot resource call requests.
  • the resource call requests are thereby further controlled, further improving the stability of the open platform.
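The hotspot control in steps 402 to 404 can be sketched as a per-service-party call counter over a sliding time window. This is illustrative Python, not part of the patent; the window length, limits, and party names are hypothetical.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 24 * 3600      # preset time range, e.g. 24 hours
MAX_CALLS = {"party_A": 20000}  # preset maximum call counts per party

calls = defaultdict(deque)      # party id -> timestamps of recent calls

def allow_call(party, now=None):
    """Step 402: count calls in the window; step 404: compare to the max."""
    now = time.time() if now is None else now
    window = calls[party]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()        # drop calls that fell out of the time range
    if len(window) >= MAX_CALLS.get(party, float("inf")):
        return False            # hotspot: party exceeded its call budget
    window.append(now)
    return True
```

Once a party's count reaches its preset maximum, further requests are refused until old calls age out of the window, so no single service party's callers can monopolize the platform.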
  • step 212 includes:
  • Step 502 The execution server forwards the resource call request to the service party corresponding to the service party identifier by using the task thread.
  • After the execution server determines the service party identifier corresponding to the received resource call request and creates the task thread corresponding to the resource call request, the task thread forwards the received resource call request, carrying the service provider resource call interface name, to the service party corresponding to the service party identifier.
  • Step 504 The execution server acquires an execution result of the interface function corresponding to the service provider resource call interface name.
  • After receiving the resource invocation request carrying the service provider resource call interface name from the open platform, the service party executes the interface function corresponding to the interface name and returns the execution result to the open platform.
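Steps 502 to 504 amount to dispatching the request to the service party's interface function and relaying its result back. A minimal sketch follows; this is illustrative Python, not part of the patent, and all names are hypothetical.

```python
# PARTY_INTERFACES maps a service party identifier to its published
# interface functions (names invented for illustration).
PARTY_INTERFACES = {
    "party_A": {"/pay/query": lambda request: {"status": "ok", **request}},
}

def forward(party_id, interface_name, request):
    """Step 502: forward the request; step 504: obtain the result."""
    interface_fn = PARTY_INTERFACES[party_id][interface_name]
    return interface_fn(request)  # the service party runs its function

result = forward("party_A", "/pay/query", {"order": 1})
print(result)  # {'status': 'ok', 'order': 1}
```

In a real deployment the forwarding would be a network call to the service party's system rather than an in-process function call; the in-process table merely shows the control flow.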
  • an open platform control system including a load balancing server 602 and an execution server 604, wherein:
  • the load balancing server 602 is configured to receive a resource invocation request initiated by the caller, where the resource invocation request carries the service provider resource call interface name, and to allocate the resource invocation request to the execution server according to the preset load balancing algorithm;
  • the execution server 604 is configured to receive the resource call request allocated by the load balancing server, search for the service party identifier corresponding to the service provider resource call interface name, and count the number of concurrent task threads corresponding to the service party identifier; if that number is less than the first preset maximum number of concurrent task threads corresponding to the service party identifier, the execution server 604 creates a task thread corresponding to the resource call request, executes the resource call request initiated by the caller by using the task thread, and returns the execution result to the caller.
  • the load balancing server 602 is further configured to count the number of current concurrent task threads of each execution server; if the number of current concurrent task threads of an execution server is less than the second preset maximum number of concurrent task threads corresponding to that execution server, calculate the task thread availability rate of the execution server; determine the execution server corresponding to the maximum task thread availability rate according to the task thread availability rates of the execution servers; and allocate the resource call request to the execution server with the maximum task thread availability rate.
  • the execution server 604 is further configured to count the number of resource call requests corresponding to the service party identifier among the resource call requests received within the preset time range; and, if that number is less than the preset maximum number of calls corresponding to the service party identifier, to search for the service party identifier corresponding to the service provider resource call interface name.
  • the servant identification is a domain name or an IP address.
  • the execution server 604 is further configured to forward the resource invocation request to the service party corresponding to the service party identifier by using the task thread, and to obtain from the service party the execution result of the interface function corresponding to the service provider resource call interface name.
  • a computer device is provided, which may be a server; its internal structure diagram may be as shown in FIG. 7.
  • the computer device includes a processor, memory, and network interface coupled by a system bus.
  • the processor is used to provide computing and control capabilities to support the operation of the entire server.
  • the memory includes a nonvolatile storage medium and an internal memory.
  • the non-volatile storage medium can be a non-transitory computer readable storage medium.
  • An operating system and computer readable instructions are stored in the non-volatile storage medium, the computer readable instructions being executed by the processor to implement an open platform control method.
  • the internal memory provides an environment for the operation of the operating system and the computer readable instructions stored in the non-volatile storage medium.
  • the network interface is used to communicate with an external server or terminal through a network connection, for example, receiving a resource call request initiated by the caller, forwarding the resource call request to the service party, receiving the execution result of the resource call from the service party, and returning the execution result to the caller.
  • the server can be implemented as a stand-alone server or a server cluster consisting of multiple servers. It will be understood by those skilled in the art that the structure shown in FIG. 7 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the server to which the solution of the present application is applied.
  • a specific server may include more or fewer components than shown in the figure, combine some components, or have a different arrangement of components.
  • a computer device comprising a memory and one or more processors, the memory having stored therein computer readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the following steps:
  • the load balancing server receives the resource invocation request initiated by the caller, and the resource invocation request carries the service provider resource call interface name;
  • the load balancing server allocates a resource call request to the execution server according to a preset load balancing algorithm
  • the execution server searches for a service party identifier corresponding to the service provider resource call interface name;
  • the execution server counts the number of concurrent task threads corresponding to the service party identifier;
  • if the number of concurrent task threads corresponding to the service party identifier is less than the first preset maximum number of concurrent task threads corresponding to the service party identifier, the execution server creates a task thread corresponding to the resource call request;
  • the execution server utilizes the task thread to execute the resource invocation request initiated by the caller
  • the execution server returns the execution result of the resource call request to the caller.
  • when the processor executes the computer readable instructions to perform the step of the load balancing server allocating the resource call request to the execution server according to the preset load balancing algorithm, the following steps are implemented:
  • the load balancing server counts the number of current concurrent task threads of each execution server;
  • if the number of current concurrent task threads of an execution server is less than the second preset maximum number of concurrent task threads corresponding to that execution server, the task thread availability rate of the execution server is calculated;
  • the execution server corresponding to the maximum task thread availability rate is determined according to the task thread availability rate of each execution server; and
  • the resource call request is allocated to the execution server corresponding to the maximum task thread availability rate.
  • the processor further implements the following steps when executing the computer readable instructions:
  • the execution server counts the number of resource call requests corresponding to the service party identifier among the resource call requests received within the preset time range; and
  • if the number of resource call requests corresponding to the service party identifier is less than the preset maximum number of calls corresponding to the service party identifier, the execution server searches for the service party identifier corresponding to the service provider resource invocation interface name.
  • the processor further implements the following steps when executing the computer readable instructions:
  • the execution server forwards the resource invocation request to the service party corresponding to the service party identifier by using the task thread;
  • the execution server acquires the execution result of the interface function corresponding to the service provider resource call interface name.
  • One or more non-volatile storage media storing computer readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
  • the load balancing server receives the resource invocation request initiated by the caller, and the resource invocation request carries the service provider resource call interface name;
  • the load balancing server allocates a resource call request to the execution server according to a preset load balancing algorithm
  • the execution server searches for a service party identifier corresponding to the service provider resource call interface name;
  • the execution server counts the number of concurrent task threads corresponding to the service party identifier;
  • if the number of concurrent task threads corresponding to the service party identifier is less than the first preset maximum number of concurrent task threads corresponding to the service party identifier, the execution server creates a task thread corresponding to the resource call request;
  • the execution server utilizes the task thread to execute the resource invocation request initiated by the caller
  • the execution server returns the execution result of the resource call request to the caller.
  • when executed by the one or more processors to perform the step of the load balancing server allocating the resource call request to the execution server according to the preset load balancing algorithm, the computer readable instructions implement the following steps:
  • the load balancing server counts the number of current concurrent task threads of each execution server;
  • if the number of current concurrent task threads of an execution server is less than the second preset maximum number of concurrent task threads corresponding to that execution server, the task thread availability rate of the execution server is calculated;
  • the execution server corresponding to the maximum task thread availability rate is determined according to the task thread availability rate of each execution server; and
  • the resource call request is allocated to the execution server corresponding to the maximum task thread availability rate.
  • the computer readable instructions, when executed by the processor, also implement the following steps:
  • the execution server counts the number of resource call requests corresponding to the service party identifier among the resource call requests received within the preset time range; and
  • if the number of resource call requests corresponding to the service party identifier is less than the preset maximum number of calls corresponding to the service party identifier, the execution server searches for the service party identifier corresponding to the service provider resource invocation interface name.
  • the computer readable instructions when executed by the processor, also implement the following steps:
  • the execution server forwards the resource invocation request to the service party corresponding to the service party identifier by using the task thread;
  • the execution server acquires the execution result of the interface function corresponding to the service provider resource call interface name.
  • Non-volatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)
  • Multi Processors (AREA)

Abstract

Provided is an open platform control method. The method comprises: a load balancing server receiving a resource invocation request initiated by a calling party, the request carrying a service party resource invocation interface name, and allocating the resource invocation request to an execution server according to a preset load balancing algorithm; the execution server searching for a service party identifier corresponding to the service party resource invocation interface name, and counting the number of concurrent task threads corresponding to the service party identifier; if the number of concurrent task threads corresponding to the service party identifier is less than a first preset maximum number of concurrent task threads corresponding to the service party identifier, creating a task thread corresponding to the resource invocation request; and using the task thread to execute the resource invocation request initiated by the calling party, and returning the execution result to the calling party.

Description

Open platform control method, system, computer device, and storage medium
This application claims priority to Chinese Patent Application No. 2017108233681, filed with the Chinese Patent Office on September 13, 2017 and entitled "Open Platform Control Method and System", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to an open platform control method, system, computer device, and storage medium.
Background
In the software industry and on networks, an open platform (Open Platform) refers to a software system that exposes its application programming interface (API) or functions so that external programs can extend the functionality of the software system or use its resources without needing to change the software system's source code.
As a resource invocation platform, an open platform bears tens of millions of calls every day. Traditional open platform control methods control resource invocation by setting up queues. However, a sudden burst in the call volume of a certain service party system, or an excessively long response time of a certain service party system, can cause the resource pool of the open platform to be abnormally occupied by some calling party systems, affecting the normal resource calls of other calling party systems and thereby reducing the stability of the open platform.
Summary
According to various embodiments disclosed in this application, an open platform control method, system, computer device, and storage medium are provided.
An open platform control method includes:
a load balancing server receiving a resource invocation request initiated by a calling party, the resource invocation request carrying a service party resource invocation interface name;
the load balancing server allocating the resource invocation request to an execution server according to a preset load balancing algorithm;
the execution server searching for a service party identifier corresponding to the service party resource invocation interface name;
the execution server counting the number of concurrent task threads corresponding to the service party identifier;
if the number of concurrent task threads corresponding to the service party identifier is less than a first preset maximum number of concurrent task threads corresponding to the service party identifier, the execution server creating a task thread corresponding to the resource invocation request;
the execution server using the task thread to execute the resource invocation request initiated by the calling party; and
the execution server returning the execution result of the resource invocation request to the calling party.
An open platform control system includes:
a load balancing server, configured to receive a resource invocation request initiated by a calling party, the resource invocation request carrying a service party resource invocation interface name, and to allocate the resource invocation request to an execution server according to a preset load balancing algorithm; and
an execution server, configured to receive the resource invocation request allocated by the load balancing server, search for a service party identifier corresponding to the service party resource invocation interface name, count the number of concurrent task threads corresponding to the service party identifier, create a task thread corresponding to the service party identifier if the number of concurrent task threads corresponding to the service party identifier is less than a first preset maximum number of concurrent task threads corresponding to the service party identifier, use the task thread to execute the resource invocation request initiated by the calling party, and return the execution result of the resource invocation request to the calling party.
A computer device includes a memory and one or more processors, the memory storing computer readable instructions that, when executed by the processors, implement the steps of the open platform control method provided in any one of the embodiments of this application.
One or more non-volatile storage media store computer readable instructions that, when executed by one or more processors, cause the one or more processors to implement the steps of the open platform control method provided in any one of the embodiments of this application.
Details of one or more embodiments of this application are set forth in the accompanying drawings and the description below. Other features and advantages of this application will become apparent from the description, the drawings, and the claims.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the accompanying drawings required for the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of this application, and those of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is an application environment diagram of an open platform control method according to one or more embodiments.
FIG. 2 is a flowchart of an open platform control method according to one or more embodiments.
FIG. 3 is a flowchart of a resource invocation request allocation method according to one or more embodiments.
FIG. 4 is a flowchart of a hotspot resource invocation request control method according to another embodiment.
FIG. 5 is a flowchart of a resource invocation request execution method according to one or more embodiments.
FIG. 6 is a block diagram of an open platform control system according to one or more embodiments.
FIG. 7 is a block diagram of a computer device according to one or more embodiments.
Detailed Description
To make the technical solutions and advantages of this application clearer, this application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain this application and are not intended to limit it.
The open platform control method provided by this application can be applied to the application environment shown in FIG. 1. The load balancing server 102 communicates with the calling party 104 over a network, the calling party 104 communicates with the execution server 106 over a network, and the load balancing server 102 communicates with the execution server 106 over a network. The load balancing server 102 receives a resource invocation request initiated by the calling party 104 and allocates the resource invocation request to the execution server 106 according to a preset load balancing algorithm, so that the execution server 106 executes the resource invocation request and returns the execution result to the calling party 104. Specifically, the load balancing server 102 receives the resource invocation request initiated by the calling party 104, the resource invocation request carrying a service party resource invocation interface name; the load balancing server 102 allocates the resource invocation request to the execution server 106 according to the preset load balancing algorithm; the execution server 106 searches for the service party identifier corresponding to the service party resource invocation interface name and counts the number of concurrent task threads corresponding to the service party identifier; if the number of concurrent task threads corresponding to the service party identifier is less than a first preset maximum number of concurrent task threads corresponding to the service party identifier, the execution server 106 creates a task thread corresponding to the resource invocation request, uses the task thread to execute the resource invocation request initiated by the calling party 104, and returns the execution result to the calling party 104. The load balancing server 102 and the execution server 106 may each be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in FIG. 2, an open platform control method is provided, including the following steps:
Step 202: the load balancing server receives a resource invocation request initiated by the calling party, the resource invocation request carrying a service party resource invocation interface name.
A resource invocation on the open platform involves three parties: the calling party, the service party, and the open platform. The service party publishes its system interfaces on the open platform for the calling party to invoke. The back end of the open platform is a server cluster composed of several servers, including a load balancing server and execution servers. The load balancing server distributes service requests evenly among the execution servers, thereby guaranteeing the response speed of the entire open platform. An execution server is a server that actually executes service requests.
The load balancing server in the open platform's back-end server cluster receives the resource invocation request initiated by the calling party so as to allocate it to an execution server. The resource invocation request carries a service party resource invocation interface name, which uniquely identifies a service party resource provided through the open platform.
Step 204: the load balancing server allocates the resource invocation request to an execution server according to a preset load balancing algorithm.
The preset load balancing algorithm is a pre-configured resource invocation request allocation algorithm, which may be a round-robin algorithm, a random algorithm, a session persistence algorithm, a weighting algorithm, or the like.
Specifically, the round-robin algorithm allocates received resource invocation requests to the execution servers in a preset allocation order, treating each execution server equally. The random algorithm randomly allocates a received resource invocation request to any execution server in the server cluster. The session persistence algorithm (also known as source-address hashing) first obtains the IP address of the calling party that initiated the resource invocation request, and then computes a hash value from that IP address. Taking this hash value modulo the size of the execution server cluster (if the cluster contains five execution servers in total, its size is 5) yields the index of the execution server that actually executes the resource invocation request. With source-address hashing, as long as the size and order of the execution server cluster remain unchanged, resource invocation requests initiated by calling parties with the same IP address are allocated to the same execution server.
The weighting algorithm forms multiple load-balanced priority queues according to the priority or current load status (i.e., the weight) of each execution server; each pending connection in a queue has the same processing level, and the queues are balanced in order of priority. A weight is an estimate based on the capability of each node. The weighting algorithm itself is an auxiliary algorithm and needs to be combined with other algorithms. For example, combined with the round-robin algorithm it forms a weighted round-robin algorithm, in which each execution server is assigned a weight coefficient in advance. After the load balancing server receives a resource invocation request, it allocates the request according to the preset allocation order and the weight coefficients. For example, suppose the execution server cluster contains execution server A, execution server B, and execution server C, the preset allocation order is execution server C, execution server A, execution server B, and the corresponding weight coefficients are 1, 2, and 3 respectively; then the execution servers are selected in the order: execution server C, execution server A, execution server A, execution server B, execution server B, execution server B.
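The two allocation strategies described above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation: the hash function, server names, and weight values are assumptions chosen to mirror the example in the text.

```python
import hashlib
from itertools import cycle

def source_address_hash(caller_ip, servers):
    """Session persistence: map a calling party's IP address to a fixed server."""
    digest = hashlib.md5(caller_ip.encode("utf-8")).hexdigest()
    index = int(digest, 16) % len(servers)  # hash value modulo cluster size
    return servers[index]

def weighted_round_robin(order, weights):
    """Yield servers in the preset allocation order, each repeated by its weight."""
    expanded = [s for s in order for _ in range(weights[s])]
    return cycle(expanded)

servers = ["C", "A", "B"]            # preset allocation order from the example
weights = {"C": 1, "A": 2, "B": 3}   # hypothetical weight coefficients

rr = weighted_round_robin(servers, weights)
sequence = [next(rr) for _ in range(6)]
# Matches the worked example: C, A, A, B, B, B
```

Because the hash depends only on the caller's IP address and the cluster size, the same caller is always routed to the same execution server while the cluster is unchanged, which is exactly the session persistence property described above.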
Step 206: the execution server searches for the service party identifier corresponding to the service party resource invocation interface name.
The service party identifier uniquely identifies the service party; it may be the service party's IP address, the service party's domain name, a custom string, or the like.
Step 208: the execution server counts the number of concurrent task threads corresponding to the service party identifier.
The execution server processes resource invocation requests concurrently to improve the execution efficiency of the open platform system. After finding the service party identifier corresponding to the received service party resource invocation interface name, the execution server counts how many of its current concurrent task threads correspond to that service party identifier in order to complete the subsequent operations.
Step 210: if the number of concurrent task threads corresponding to the service party identifier is less than the first preset maximum number of concurrent task threads corresponding to the service party identifier, the execution server creates a task thread corresponding to the resource invocation request.
The first preset maximum number of concurrent task threads is a pre-configured upper bound on the concurrent task threads corresponding to a service party identifier. It takes different values for different service parties depending on their resource call volume and call frequency. For example, suppose the service party corresponding to the interface name carried in a resource invocation request is service party A; since service party A has historically been called frequently, its preset maximum number of concurrent task threads is set to 6 (while the maximum number of concurrent task threads of the execution server is 10). Suppose the service party corresponding to another resource invocation request is service party B; since service party B has historically been called infrequently, its preset maximum number of concurrent task threads is set to 2.
The preset maximum number of concurrent task threads applies to the single server that receives the resource invocation request. In other embodiments, the first preset maximum number of concurrent task threads may be the maximum number of concurrent task threads across all execution servers in the server cluster corresponding to the open platform system; correspondingly, the count in step 208 becomes the number of concurrent task threads corresponding to the service party identifier across all execution servers in that cluster, counted by the execution server that received the request.
When the number of concurrent task threads corresponding to the service party identifier is less than the first preset maximum number of concurrent task threads corresponding to that identifier, the execution server that received the resource invocation request still has capacity to process requests for that service party, so it creates a new task thread for executing the resource invocation request corresponding to the service party resource invocation interface name.
In one embodiment, when the number of concurrent task threads corresponding to the service party identifier equals the preset maximum number of concurrent task threads corresponding to that identifier, a rejection message is sent to the calling party, informing it that no thread is currently available to process its resource invocation request, so that the calling party can resend the request later.
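The admission logic of steps 206 to 210 can be sketched as below. This is a minimal sketch under stated assumptions, not the patent's code: the class name, the limit values, and the in-memory counter are all hypothetical.

```python
import threading

class ExecutionServer:
    def __init__(self, max_threads_per_party):
        # service party identifier -> first preset maximum number of
        # concurrent task threads for that party (hypothetical values)
        self.max_threads_per_party = max_threads_per_party
        self.current = {pid: 0 for pid in max_threads_per_party}
        self.lock = threading.Lock()

    def try_admit(self, party_id):
        """Admit the request if the party is under its concurrency limit."""
        with self.lock:
            if self.current[party_id] < self.max_threads_per_party[party_id]:
                self.current[party_id] += 1
                return True   # a task thread may be created for this request
            return False      # reject: no available thread for this party

    def release(self, party_id):
        """Called when a task thread finishes executing a request."""
        with self.lock:
            self.current[party_id] -= 1

server = ExecutionServer({"party_A": 2})
admitted = [server.try_admit("party_A") for _ in range(3)]
# With a limit of 2, the first two requests are admitted and the third
# is rejected: [True, True, False]
```

Keeping a separate counter per service party identifier is what prevents one busy or slow service party from exhausting the whole thread pool, which is the stability problem described in the Background section.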
Step 212: the execution server uses the task thread to execute the resource invocation request initiated by the calling party.
After determining that the execution server has the capacity to process the resource invocation request, the created task thread is used to execute the resource invocation request initiated by the calling party.
Step 214: the execution server returns the execution result of the resource invocation request to the calling party.
After the execution server executes the calling party's resource invocation request with the created task thread, it sends the execution result to the calling party.
In the open platform control above, limiting the number of concurrent task threads corresponding to each service party identifier keeps the processing of resource invocation requests for different service parties under control. This effectively avoids the situation where a sudden burst in the call volume of a certain service party system, or an excessively long response time of a certain service party system, causes the resource pool of the open platform to be abnormally occupied, or even paralyzed, by some calling party systems, thereby improving the stability of the open platform. Moreover, the open platform forwards received resource invocation requests to the execution servers through the load balancing server, so that the execution servers execute the requests; separating the allocation and execution of resource invocation requests improves the execution efficiency of the open platform.
In one embodiment, as shown in FIG. 3, step 202 above includes:
Step 302: the load balancing server counts the number of current concurrent task threads of each execution server.
After receiving the resource invocation request initiated by the calling party, the load balancing server counts the number of current concurrent task threads of each execution server in order to perform the subsequent allocation of the request.
Step 304: if the number of current concurrent task threads of an execution server is less than a second preset maximum number of concurrent task threads corresponding to that execution server, the task thread availability rate of the execution server is calculated.
The second preset maximum number of concurrent task threads is a pre-configured number of concurrent task threads corresponding to each execution server. For example, if execution server A performs well, its second preset maximum number of concurrent task threads may be set to 20; if execution server B performs somewhat worse, its second preset maximum may be set to 16. The task thread availability rate is the ratio of the number of currently available task threads on an execution server to its preset maximum number of task threads. For example, if an execution server currently has 4 concurrent task threads and its preset maximum number of task threads is 10, then 6 task threads are currently available and its task thread availability rate is 0.6.
If the number of current concurrent threads of an execution server in the server cluster corresponding to the open platform is less than the second preset maximum number of threads corresponding to that execution server, the execution server has the capacity to process the resource invocation request; its task thread availability rate is then calculated in preparation for selecting the execution server with the highest availability rate.
Step 306: the execution server corresponding to the maximum task thread availability rate is determined according to the task thread availability rate of each execution server.
From the task thread availability rates calculated in step 304, the execution server with the highest availability rate is selected among the execution servers that have the capacity to process the resource invocation request.
Step 308: the resource invocation request is allocated to the execution server corresponding to the maximum task thread availability rate.
After selecting the execution server that has the capacity to process the resource invocation request and the highest task thread availability rate, the resource invocation request is allocated to that execution server.
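Steps 302 to 308 can be sketched as a single selection function. This is an illustrative sketch, not the patent's implementation; the server names and thread counts below are made-up values chosen to match the 0.6 availability example above.

```python
def pick_execution_server(servers):
    """servers: dict of name -> (current_threads, second_preset_max_threads).

    Returns the server with the highest task thread availability rate among
    those that still have capacity, together with that rate.
    """
    best_name, best_rate = None, -1.0
    for name, (current, max_threads) in servers.items():
        if current >= max_threads:
            continue  # no capacity: this server cannot process the request
        rate = (max_threads - current) / max_threads  # availability rate
        if rate > best_rate:
            best_name, best_rate = name, rate
    return best_name, best_rate

cluster = {
    "server_A": (4, 10),   # 6 threads free, availability 0.6
    "server_B": (14, 16),  # 2 threads free, availability 0.125
    "server_C": (20, 20),  # fully occupied, filtered out in step 304
}
chosen, rate = pick_execution_server(cluster)
# server_A has the highest availability rate, so the request goes there
```

Filtering out fully occupied servers before comparing rates mirrors the two-stage structure of the method: step 304 establishes capacity, step 306 ranks the remaining servers.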
In this open platform control method, the set of execution servers capable of processing the resource invocation request is first determined by counting the number of current concurrent task threads of each execution server in the cluster; then, by calculating the task thread availability rate of each execution server in that set, the received resource invocation request is allocated to the execution server with the highest availability rate, improving the response speed of the open platform.
In one embodiment, as shown in FIG. 4, before step 206 the method further includes:
Step 402: the execution server counts, among the resource invocation requests received within a preset time range, the number of resource invocation requests corresponding to the service party identifier.
After receiving the resource invocation request, the execution server first counts, among the resource invocation requests received by all execution servers of the open platform within the preset time range, the number of requests corresponding to the service party identifier. For example, the execution server that received the request may count that, within 24 hours, all execution servers of the open platform received 10000 resource invocation requests with the same service party identifier as this request.
In one embodiment, after receiving the resource invocation request, the execution server first counts, among the resource invocation requests it has itself received within the preset time range, the number of requests corresponding to the service party identifier. For example, the execution server may count that it received 500 resource invocation requests with the same service party identifier within one month.
Step 404: if the number of resource invocation requests corresponding to the service party identifier is less than the preset maximum call count of resource invocation requests corresponding to the service party identifier, the execution server searches for the service party identifier corresponding to the service party resource invocation interface name.
The preset maximum call count of resource invocation requests is a pre-configured maximum number of resource invocation requests within the preset time range for a given service party identifier, for example 20000.
When the number of resource invocation requests corresponding to the service party identifier is less than this preset maximum call count, the execution server can currently process requests for that service party, so it proceeds to search for the service party identifier corresponding to the service party resource invocation interface name.
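The hotspot check of steps 402 and 404 amounts to a sliding-window counter per service party identifier. The sketch below is a hypothetical illustration: the class name, the window length, and the per-party limits are assumptions, and a real deployment would count across the whole cluster rather than in one process.

```python
import time
from collections import defaultdict, deque

class HotspotLimiter:
    def __init__(self, window_seconds, max_calls):
        self.window = window_seconds       # preset time range
        self.max_calls = max_calls         # party id -> preset maximum call count
        self.history = defaultdict(deque)  # party id -> request timestamps

    def allow(self, party_id, now=None):
        now = time.monotonic() if now is None else now
        timestamps = self.history[party_id]
        while timestamps and now - timestamps[0] > self.window:
            timestamps.popleft()           # drop requests outside the window
        if len(timestamps) < self.max_calls[party_id]:
            timestamps.append(now)
            return True                    # proceed to step 206 for this request
        return False                       # hotspot limit reached, reject

limiter = HotspotLimiter(window_seconds=86400, max_calls={"party_A": 3})
results = [limiter.allow("party_A", now=t) for t in (0, 1, 2, 3)]
# With a limit of 3 calls per window, the fourth request is rejected
```

Once the counted requests fall out of the preset time range, the party is admitted again, which matches the idea that the limit applies per window rather than permanently.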
In this open platform control method, before controlling resource invocation requests by limiting the number of task threads corresponding to each service party identifier, the number of resource invocation requests corresponding to the service party identifier received within the preset time range is counted and compared with the preset maximum call count corresponding to that identifier, which controls hotspot resource invocation requests. Combined with the method of limiting the number of task threads corresponding to each service party identifier, this further controls resource invocation requests and thus further improves the stability of the open platform.
In one embodiment, as shown in FIG. 5, step 212 includes:

Step 502: The execution server uses the task thread to forward the resource invocation request to the service party corresponding to the service party identifier.

After determining the service party identifier corresponding to a received resource invocation request and creating the task thread for that request, the execution server uses the task thread to forward the received request, which carries the service party resource invocation interface name, to the service party corresponding to the identifier.

Step 504: The execution server obtains the result of the service party executing the interface function corresponding to the service party resource invocation interface name.

Upon receiving the resource invocation request carrying the service party resource invocation interface name from the open platform, the service party executes the interface function corresponding to that interface name and returns the execution result to the open platform.
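Steps 502 and 504 (a task thread forwards the request to the service party, and the execution server collects the interface function's result) might be sketched as follows. The thread pool stands in for the task threads, and the in-process `SERVICE_PARTIES` table is a mock for what would in practice be an HTTP or RPC call to the service party; all names here are assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

# Mock service party registry: maps a service party identifier to a handler
# that plays the role of the interface function named in the request.
SERVICE_PARTIES = {
    "svc-001": lambda interface, payload: {"interface": interface, "status": "ok"},
}

def forward_request(service_party_id, interface_name, payload):
    # Step 502: the task thread forwards the request to the service party,
    # which executes the interface function and returns its result.
    handler = SERVICE_PARTIES[service_party_id]
    return handler(interface_name, payload)

executor = ThreadPoolExecutor()  # pool of task threads

def handle_call(service_party_id, interface_name, payload):
    # Step 504: the execution server collects the execution result, which
    # it can then return to the caller.
    future = executor.submit(forward_request, service_party_id, interface_name, payload)
    return future.result()
```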
It should be understood that although the steps in the flowcharts of FIGS. 2-5 are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, there is no strict ordering constraint on these steps, and they may be performed in other orders. Moreover, at least some of the steps in FIGS. 2-5 may include multiple sub-steps or stages, which need not be completed at the same time but may be performed at different times; nor must these sub-steps or stages be performed sequentially, as they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in FIG. 6, an open platform control system is provided. The system includes a load balancing server 602 and an execution server 604, wherein:

the load balancing server 602 is configured to receive a resource invocation request initiated by a caller, the request carrying a service party resource invocation interface name, and to allocate the request to an execution server according to a preset load balancing algorithm; and

the execution server 604 is configured to receive the resource invocation request allocated by the load balancing server, look up the service party identifier corresponding to the service party resource invocation interface name, and count the number of concurrent task threads corresponding to the service party identifier; if that number is less than the first preset maximum number of concurrent task threads for the identifier, the execution server creates a task thread corresponding to the identifier, uses the task thread to execute the caller's resource invocation request, and returns the execution result to the caller.

In one embodiment, the load balancing server 602 is further configured to count the number of current concurrent task threads on each execution server; if an execution server's current count is less than its second preset maximum number of concurrent task threads, to compute that server's task thread availability rate; to determine, from the availability rates of all execution servers, the execution server with the highest rate; and to allocate the resource invocation request to that server.
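The load-balancing step could look like the sketch below. The specification does not give a formula for the task thread availability rate, so the fraction-of-free-threads calculation here is an assumption; servers already at their second preset maximum are excluded as candidates, and the field names are illustrative:

```python
def pick_execution_server(servers):
    """servers: list of dicts with 'name', 'current_threads', 'max_threads'.

    Returns the server with the highest task thread availability rate,
    or None if every server is saturated.
    """
    best, best_rate = None, -1.0
    for s in servers:
        if s["current_threads"] >= s["max_threads"]:
            continue  # at the second preset maximum; not a candidate
        # Assumed availability rate: fraction of the thread budget still free.
        rate = (s["max_threads"] - s["current_threads"]) / s["max_threads"]
        if rate > best_rate:
            best_rate, best = rate, s
    return best
```

A `None` result would correspond to the case where no execution server can accept the request, at which point the load balancing server would have to queue or reject it.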
In one embodiment, the execution server 604 is further configured to count, among the resource invocation requests received within a preset time range, the number corresponding to the service party identifier; and, if that number is less than the preset maximum number of invocations for the identifier, to look up the service party identifier corresponding to the service party resource invocation interface name.

In one embodiment, the service party identifier is a domain name or an IP address.

In one embodiment, the execution server 604 is further configured to use the task thread to forward the resource invocation request to the service party corresponding to the service party identifier, and to obtain the result of the service party executing the interface function corresponding to the service party resource invocation interface name.
For specific limitations of the open platform control system, reference may be made to the limitations of the open platform control method described above; details are not repeated here.
In one embodiment, a computer device is provided. The computer device may be a server whose internal structure may be as shown in FIG. 7. The computer device includes a processor, a memory, and a network interface connected through a system bus. The processor provides the computing and control capabilities that support the operation of the entire server. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium may be a non-transitory computer-readable storage medium; it stores an operating system and computer-readable instructions which, when executed by the processor, implement an open platform control method. The internal memory provides an environment for running the operating system and the computer-readable instructions stored in the non-volatile storage medium. The network interface communicates with external servers or terminals over a network connection, for example receiving a resource invocation request initiated by a caller, forwarding the request to the service party, receiving the execution result of the service party's resource invocation, and returning that result to the initiator. The server may be implemented as a stand-alone server or as a server cluster composed of multiple servers. Those skilled in the art will understand that the structure shown in FIG. 7 is only a block diagram of part of the structure related to the solution of the present application and does not limit the servers to which the solution is applied; a specific server may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
A computer device includes a memory and one or more processors. The memory stores computer-readable instructions which, when executed by the one or more processors, cause the one or more processors to perform the following steps:

a load balancing server receives a resource invocation request initiated by a caller, the request carrying a service party resource invocation interface name;

the load balancing server allocates the resource invocation request to an execution server according to a preset load balancing algorithm;

the execution server looks up the service party identifier corresponding to the service party resource invocation interface name;

the execution server counts the number of concurrent task threads corresponding to the service party identifier;

if the number of concurrent task threads corresponding to the service party identifier is less than the first preset maximum number of concurrent task threads for the identifier, the execution server creates a task thread corresponding to the resource invocation request;

the execution server uses the task thread to execute the caller's resource invocation request; and

the execution server returns the execution result of the resource invocation request to the caller.
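The per-service-party concurrency check in the steps above (count the concurrent task threads for an identifier and create a new one only while the count is below the first preset maximum) can be sketched with a locked counter. The class and method names are illustrative assumptions; the specification only defines the comparison, not the bookkeeping:

```python
import threading

class TaskThreadLimiter:
    """Hypothetical counter enforcing the first preset maximum per identifier."""

    def __init__(self, max_concurrent):
        self.max_concurrent = max_concurrent
        self.counts = {}                 # service party id -> live thread count
        self.lock = threading.Lock()

    def try_acquire(self, service_party_id):
        """Allow a new task thread only below the preset maximum."""
        with self.lock:
            n = self.counts.get(service_party_id, 0)
            if n >= self.max_concurrent:
                return False             # caller would get a rejection message
            self.counts[service_party_id] = n + 1
            return True

    def release(self, service_party_id):
        """Called when a task thread finishes executing the request."""
        with self.lock:
            self.counts[service_party_id] -= 1
```

When `try_acquire` returns `False`, the execution server would send the rejection message described in the text so the caller can resend the request later.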
In one embodiment, when the processor performs the step of the load balancing server allocating the resource invocation request to an execution server according to a preset load balancing algorithm, the step includes:

the load balancing server counts the number of current concurrent task threads on each execution server;

if an execution server's current number of concurrent task threads is less than its second preset maximum number of concurrent task threads, the load balancing server computes that server's task thread availability rate;

the load balancing server determines, from the availability rates of all execution servers, the execution server with the highest rate; and

the load balancing server allocates the resource invocation request to that execution server.
In one embodiment, the processor, when executing the computer-readable instructions, further performs the following steps:

the execution server counts the number of concurrent task threads corresponding to the service party identifier; and

if the number of concurrent task threads corresponding to the service party identifier equals the first preset maximum number of concurrent task threads for the identifier, the execution server sends a resource invocation request rejection message to the caller so that the caller resends the request.
In one embodiment, the processor, when executing the computer-readable instructions, further performs the following steps:

the execution server counts, among the resource invocation requests received within a preset time range, the number corresponding to the service party identifier; and

if that number is less than the preset maximum number of invocations for the service party identifier, the execution server looks up the service party identifier corresponding to the service party resource invocation interface name.
In one embodiment, the processor, when executing the computer-readable instructions, further performs the following steps:

the execution server uses the task thread to forward the resource invocation request to the service party corresponding to the service party identifier; and

the execution server obtains the result of the service party executing the interface function corresponding to the service party resource invocation interface name.
One or more non-volatile storage media store computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:

a load balancing server receives a resource invocation request initiated by a caller, the request carrying a service party resource invocation interface name;

the load balancing server allocates the resource invocation request to an execution server according to a preset load balancing algorithm;

the execution server looks up the service party identifier corresponding to the service party resource invocation interface name;

the execution server counts the number of concurrent task threads corresponding to the service party identifier;

if the number of concurrent task threads corresponding to the service party identifier is less than the first preset maximum number of concurrent task threads for the identifier, the execution server creates a task thread corresponding to the resource invocation request;

the execution server uses the task thread to execute the caller's resource invocation request; and

the execution server returns the execution result of the resource invocation request to the caller.
In one embodiment, the computer-readable instructions, when executed by the processor, further implement the following steps: the load balancing server counts the number of current concurrent task threads on each execution server;

if an execution server's current number of concurrent task threads is less than its second preset maximum number of concurrent task threads, the load balancing server computes that server's task thread availability rate;

the load balancing server determines, from the availability rates of all execution servers, the execution server with the highest rate; and

the load balancing server allocates the resource invocation request to that execution server.
In one embodiment, the computer-readable instructions, when executed by the processor, further implement the following steps:

the execution server counts the number of concurrent task threads corresponding to the service party identifier; and

if the number of concurrent task threads corresponding to the service party identifier equals the first preset maximum number of concurrent task threads for the identifier, the execution server sends a resource invocation request rejection message to the caller so that the caller resends the request.
In one embodiment, the computer-readable instructions, when executed by the processor, further implement the following steps:

the execution server counts, among the resource invocation requests received within a preset time range, the number corresponding to the service party identifier; and

if that number is less than the preset maximum number of invocations for the service party identifier, the execution server looks up the service party identifier corresponding to the service party resource invocation interface name.
In one embodiment, the computer-readable instructions, when executed by the processor, further implement the following steps:

the execution server uses the task thread to forward the resource invocation request to the service party corresponding to the service party identifier; and

the execution server obtains the result of the service party executing the interface function corresponding to the service party resource invocation interface name.
A person of ordinary skill in the art will understand that all or part of the processes in the above method embodiments can be implemented by computer-readable instructions instructing the relevant hardware. The computer-readable instructions may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or another medium used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.

The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the patent. It should be noted that a person of ordinary skill in the art may make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (20)

  1. An open platform control method, comprising:
    a load balancing server receiving a resource invocation request initiated by a caller, the resource invocation request carrying a service party resource invocation interface name;
    the load balancing server allocating the resource invocation request to an execution server according to a preset load balancing algorithm;
    the execution server looking up a service party identifier corresponding to the service party resource invocation interface name;
    the execution server counting the number of concurrent task threads corresponding to the service party identifier;
    if the number of concurrent task threads corresponding to the service party identifier is less than a first preset maximum number of concurrent task threads corresponding to the service party identifier, the execution server creating a task thread corresponding to the resource invocation request;
    the execution server using the task thread to execute the resource invocation request initiated by the caller;
    and the execution server returning an execution result of the resource invocation request to the caller.
  2. The method according to claim 1, wherein the load balancing server allocating the resource invocation request to an execution server according to a preset load balancing algorithm comprises:
    the load balancing server counting the number of current concurrent task threads on each execution server;
    if the number of current concurrent task threads on an execution server is less than a second preset maximum number of concurrent task threads corresponding to that execution server, computing the task thread availability rate of that execution server;
    determining, according to the task thread availability rate of each execution server, the execution server corresponding to the highest task thread availability rate; and
    allocating the resource invocation request to the execution server corresponding to the highest task thread availability rate.
  3. The method according to claim 1, further comprising:
    the execution server counting the number of concurrent task threads corresponding to the service party identifier; and
    if the number of concurrent task threads corresponding to the service party identifier equals the first preset maximum number of concurrent task threads corresponding to the service party identifier, sending a resource invocation request rejection message to the caller so that the caller resends the resource invocation request.
  4. The method according to claim 1, wherein before the execution server looks up the service party identifier corresponding to the service party resource invocation interface name, the method further comprises:
    the execution server counting, among resource invocation requests received within a preset time range, the number of resource invocation requests corresponding to the service party identifier; and
    if the number of resource invocation requests corresponding to the service party identifier is less than a preset maximum number of invocations corresponding to the service party identifier, the execution server looking up the service party identifier corresponding to the service party resource invocation interface name.
  5. The method according to claim 1, wherein the execution server using the task thread to execute the resource invocation request initiated by the caller comprises:
    the execution server using the task thread to forward the resource invocation request to a service party corresponding to the service party identifier; and
    the execution server obtaining an execution result of the service party executing an interface function corresponding to the service party resource invocation interface name.
  6. An open platform control system, comprising:
    a load balancing server configured to receive a resource invocation request initiated by a caller, the resource invocation request carrying a service party resource invocation interface name, and to allocate the resource invocation request to an execution server according to a preset load balancing algorithm; and
    an execution server configured to receive the resource invocation request allocated by the load balancing server, look up a service party identifier corresponding to the service party resource invocation interface name, and count the number of concurrent task threads corresponding to the service party identifier; and, if the number of concurrent task threads corresponding to the service party identifier is less than a first preset maximum number of concurrent task threads corresponding to the service party identifier, to create a task thread corresponding to the service party identifier, use the task thread to execute the resource invocation request initiated by the caller, and return an execution result of the resource invocation request to the caller.
  7. The system according to claim 6, wherein the load balancing server is further configured to count the number of current concurrent task threads on each execution server; if the number of current concurrent task threads on an execution server is less than a second preset maximum number of concurrent task threads corresponding to that execution server, to compute the task thread availability rate of that execution server; to determine, according to the task thread availability rate of each execution server, the execution server corresponding to the highest task thread availability rate; and to allocate the resource invocation request to that execution server.
  8. The system according to claim 6, wherein the execution server is further configured to count the number of concurrent task threads corresponding to the service party identifier; and, if the number of concurrent task threads corresponding to the service party identifier equals the first preset maximum number of concurrent task threads corresponding to the service party identifier, to send a resource invocation request rejection message to the caller so that the caller resends the resource invocation request.
  9. The system according to claim 6, wherein the execution server is further configured to count, among resource invocation requests received within a preset time range, the number of resource invocation requests corresponding to the service party identifier; and, if that number is less than a preset maximum number of invocations corresponding to the service party identifier, to look up the service party identifier corresponding to the service party resource invocation interface name.
  10. The system according to claim 6, wherein the execution server is further configured to use the task thread to forward the resource invocation request to a service party corresponding to the service party identifier, and to obtain an execution result of the service party executing an interface function corresponding to the service party resource invocation interface name.
  11. A computer device, comprising a memory and one or more processors, the memory storing computer-readable instructions which, when executed by the one or more processors, cause the one or more processors to perform the following steps:
    a load balancing server receiving a resource invocation request initiated by a caller, the resource invocation request carrying a service party resource invocation interface name;
    the load balancing server allocating the resource invocation request to an execution server according to a preset load balancing algorithm;
    the execution server looking up a service party identifier corresponding to the service party resource invocation interface name;
    the execution server counting the number of concurrent task threads corresponding to the service party identifier;
    if the number of concurrent task threads corresponding to the service party identifier is less than a first preset maximum number of concurrent task threads corresponding to the service party identifier, the execution server creating a task thread corresponding to the resource invocation request;
    the execution server using the task thread to execute the resource invocation request initiated by the caller;
    and the execution server returning an execution result of the resource invocation request to the caller.
  12. The computer device according to claim 11, wherein the step, performed by the processor, of the load balancing server allocating the resource call request to the execution server according to the preset load balancing algorithm comprises:
    the execution server counts the number of concurrent task threads corresponding to the service party identifier;
    if the current number of concurrent task threads of the execution server is less than a second preset maximum number of concurrent task threads corresponding to the execution server, calculating a task thread availability rate of the execution server;
    determining, according to the task thread availability rate of each execution server, the execution server corresponding to the maximum task thread availability rate; and
    allocating the resource call request to the execution server corresponding to the maximum task thread availability rate.
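The availability-rate selection in claim 12 might be sketched as follows. The application does not define the rate formula, so the fraction of the second preset maximum still unused is assumed here, and all names are illustrative.

```python
def pick_execution_server(servers):
    """Select the execution server with the highest task thread availability rate.

    `servers` is a list of (name, current_threads, max_threads) tuples; the
    availability rate is assumed to be the fraction of the second preset
    maximum that is still free. Servers already at their maximum are skipped.
    """
    best_name, best_rate = None, -1.0
    for name, current, maximum in servers:
        if current >= maximum:
            continue  # at the second preset maximum: not eligible
        rate = (maximum - current) / maximum
        if rate > best_rate:
            best_name, best_rate = name, rate
    return best_name
```

Returning `None` when every server is saturated leaves room for the load balancer to queue or reject the request.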
  13. The computer device according to claim 11, wherein the processor, when executing the computer readable instructions, further performs the following steps:
    the execution server counts the number of concurrent task threads corresponding to the service party identifier; and
    if the number of concurrent task threads corresponding to the service party identifier is equal to the first preset maximum number of concurrent task threads corresponding to the service party identifier, sending a resource call request rejection message to the caller, so that the caller resends the resource call request.
  14. The computer device according to claim 11, wherein the processor, when executing the computer readable instructions, further performs the following steps:
    the execution server counts, among the resource call requests received within a preset time range, the number of resource call requests corresponding to the service party identifier; and
    if the number of resource call requests corresponding to the service party identifier is less than a preset maximum number of resource call requests corresponding to the service party identifier, the execution server looks up the service party identifier corresponding to the service party resource call interface name.
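The per-service call counting within a preset time range described in claim 14 resembles a sliding-window rate limiter. A minimal sketch, with assumed names and window semantics:

```python
import collections
import time

class ServiceRateLimiter:
    """Counts resource call requests per service party identifier inside a
    preset time range; the sliding-window behaviour and all names here are
    illustrative assumptions, not taken from the application."""

    def __init__(self, window_seconds, max_calls_per_service):
        self.window = window_seconds
        self.max_calls = max_calls_per_service   # preset maximum per identifier
        self.timestamps = collections.defaultdict(collections.deque)

    def allow(self, service_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.timestamps[service_id]
        # Drop requests that have fallen outside the preset time range.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_calls[service_id]:
            return False  # the identifier lookup would not proceed
        q.append(now)
        return True
```

Only when `allow` returns `True` would the execution server continue to the interface-name lookup step.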
  15. The computer device according to claim 11, wherein the step, performed by the processor, of the execution server using the task thread to execute the resource call request initiated by the caller comprises:
    the execution server uses the task thread to forward the resource call request to a service party corresponding to the service party identifier; and
    the execution server acquires an execution result of the service party executing an interface function corresponding to the service party resource call interface name.
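The forwarding step of claim 15, executing the request on the task thread and collecting the interface function's result, can be sketched in-process as follows; the `service_registry` dict is an assumed stand-in for real network I/O to the service party.

```python
import queue
import threading

def forward_and_collect(service_registry, service_id, interface_name, payload):
    """Forward a resource call request to the service party on a task thread
    and collect the interface function's execution result. Names are
    illustrative; a real system would forward over the network."""
    result_q = queue.Queue()

    def task():
        # Resolve the interface function registered for this service party
        # and interface name, then run it with the request payload.
        interface_fn = service_registry[service_id][interface_name]
        result_q.put(interface_fn(payload))

    worker = threading.Thread(target=task)
    worker.start()
    worker.join()
    return result_q.get()  # execution result handed back to the caller
```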
  16. One or more non-volatile computer readable storage media storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:
    the load balancing server receives a resource call request initiated by a caller, the resource call request carrying a service party resource call interface name;
    the load balancing server allocates the resource call request to an execution server according to a preset load balancing algorithm;
    the execution server looks up a service party identifier corresponding to the service party resource call interface name;
    the execution server counts the number of concurrent task threads corresponding to the service party identifier;
    if the number of concurrent task threads corresponding to the service party identifier is less than a first preset maximum number of concurrent task threads corresponding to the service party identifier, the execution server creates a task thread corresponding to the resource call request;
    the execution server uses the task thread to execute the resource call request initiated by the caller; and
    the execution server returns an execution result of the resource call request to the caller.
  17. The storage medium according to claim 16, wherein the computer readable instructions, when executed by the processor, further perform the following steps:
    the execution server counts the number of concurrent task threads corresponding to the service party identifier;
    if the current number of concurrent task threads of the execution server is less than a second preset maximum number of concurrent task threads corresponding to the execution server, calculating a task thread availability rate of the execution server;
    determining, according to the task thread availability rate of each execution server, the execution server corresponding to the maximum task thread availability rate; and
    allocating the resource call request to the execution server corresponding to the maximum task thread availability rate.
  18. The storage medium according to claim 16, wherein the computer readable instructions, when executed by the processor, further perform the following steps:
    the execution server counts the number of concurrent task threads corresponding to the service party identifier; and
    if the number of concurrent task threads corresponding to the service party identifier is equal to the first preset maximum number of concurrent task threads corresponding to the service party identifier, sending a resource call request rejection message to the caller, so that the caller resends the resource call request.
  19. The storage medium according to claim 16, wherein the computer readable instructions, when executed by the processor, further perform the following steps:
    the execution server counts, among the resource call requests received within a preset time range, the number of resource call requests corresponding to the service party identifier; and
    if the number of resource call requests corresponding to the service party identifier is less than a preset maximum number of resource call requests corresponding to the service party identifier, the execution server looks up the service party identifier corresponding to the service party resource call interface name.
  20. The storage medium according to claim 16, wherein the computer readable instructions, when executed by the processor, further perform the following steps:
    the execution server uses the task thread to forward the resource call request to a service party corresponding to the service party identifier; and
    the execution server acquires an execution result of the service party executing an interface function corresponding to the service party resource call interface name.
PCT/CN2018/088835 2017-09-13 2018-05-29 Open platform control method and system, computer device, and storage medium WO2019052225A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710823368.1 2017-09-13
CN201710823368.1A CN107800768B (en) 2017-09-13 2017-09-13 Open platform control method and system

Publications (1)

Publication Number Publication Date
WO2019052225A1 true WO2019052225A1 (en) 2019-03-21

Family

ID=61532388

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/088835 WO2019052225A1 (en) 2017-09-13 2018-05-29 Open platform control method and system, computer device, and storage medium

Country Status (2)

Country Link
CN (1) CN107800768B (en)
WO (1) WO2019052225A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107800768B (en) * 2017-09-13 2020-01-10 平安科技(深圳)有限公司 Open platform control method and system
CN108512666A (en) * 2018-04-08 2018-09-07 苏州犀牛网络科技有限公司 Encryption method, data interactive method and the system of API request
CN108984321B (en) * 2018-06-29 2021-03-19 Oppo(重庆)智能科技有限公司 Mobile terminal, limiting method for interprocess communication of mobile terminal and storage medium
CN109002364B (en) * 2018-06-29 2021-03-30 Oppo(重庆)智能科技有限公司 Method for optimizing inter-process communication, electronic device and readable storage medium
CN109032813B (en) * 2018-06-29 2021-01-26 Oppo(重庆)智能科技有限公司 Mobile terminal, limiting method for interprocess communication of mobile terminal and storage medium
CN109165165A (en) * 2018-09-04 2019-01-08 中国平安人寿保险股份有限公司 Interface test method, device, computer equipment and storage medium
CN111209060A (en) * 2018-11-21 2020-05-29 中国移动通信集团广东有限公司 Capability development platform processing method and device
CN109840142B (en) * 2018-12-15 2024-03-15 平安科技(深圳)有限公司 Thread control method and device based on cloud monitoring, electronic equipment and storage medium
CN109710402A (en) * 2018-12-17 2019-05-03 平安普惠企业管理有限公司 Method, apparatus, computer equipment and the storage medium of process resource acquisition request
CN109981731B (en) * 2019-02-15 2021-06-15 联想(北京)有限公司 Data processing method and equipment
CN110716796B (en) * 2019-09-02 2024-05-28 中国平安财产保险股份有限公司 Intelligent task scheduling method and device, storage medium and electronic equipment
CN110958217B (en) * 2019-10-12 2022-02-08 平安科技(深圳)有限公司 Method and device for remotely controlling server, computer equipment and storage medium

Citations (7)

Publication number Priority date Publication date Assignee Title
US20040139440A1 (en) * 2003-01-09 2004-07-15 International Business Machines Corporation Method and apparatus for thread-safe handlers for checkpoints and restarts
CN102325148A (en) * 2011-05-25 2012-01-18 重庆新媒农信科技有限公司 WebService service calling method
CN102681889A (en) * 2012-04-27 2012-09-19 电子科技大学 Scheduling method of cloud computing open platform
CN103516536A (en) * 2012-06-26 2014-01-15 重庆新媒农信科技有限公司 Server service request parallel processing method based on thread number limit and system thereof
CN104281489A (en) * 2013-07-12 2015-01-14 携程计算机技术(上海)有限公司 Multithreading request method and system under SOA (service oriented architecture)
US20150326499A1 (en) * 2014-05-06 2015-11-12 International Business Machines Corporation Clustering requests and prioritizing workmanager threads based on resource performance and/or availability
CN107800768A (en) * 2017-09-13 2018-03-13 平安科技(深圳)有限公司 Open platform control method and system

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN101753461B (en) * 2010-01-14 2012-07-25 中国建设银行股份有限公司 Method for realizing load balance, load balanced server and group system
CN101882161B (en) * 2010-06-23 2012-07-04 中国工商银行股份有限公司 Application level asynchronous task scheduling system and method
CN103379040B (en) * 2012-04-24 2016-08-31 阿里巴巴集团控股有限公司 A kind of high concurrent system controls the apparatus and method of number of concurrent
JP6269257B2 (en) * 2014-03-31 2018-01-31 富士通株式会社 Information processing apparatus, information processing system, information processing apparatus control program, and information processing apparatus control method
US9473365B2 (en) * 2014-05-08 2016-10-18 Cisco Technology, Inc. Collaborative inter-service scheduling of logical resources in cloud platforms

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN113742084A (en) * 2021-09-13 2021-12-03 城云科技(中国)有限公司 Method and apparatus for allocating computing resources based on interface characteristics
CN114124797A (en) * 2021-11-19 2022-03-01 中国电信集团系统集成有限责任公司 Server routing method and device, electronic equipment and storage medium
CN114124797B (en) * 2021-11-19 2023-08-04 中电信数智科技有限公司 Server routing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN107800768B (en) 2020-01-10
CN107800768A (en) 2018-03-13

Similar Documents

Publication Publication Date Title
WO2019052225A1 (en) Open platform control method and system, computer device, and storage medium
US9984013B2 (en) Method, controller, and system for service flow control in object-based storage system
US11431794B2 (en) Service deployment method and function management platform under serverless architecture
WO2018059222A1 (en) File slice uploading method and apparatus, and cloud storage system
JP6881575B2 (en) Resource allocation systems, management equipment, methods and programs
WO2014194869A1 (en) Request processing method, device and system
US11768706B2 (en) Method, storage medium storing instructions, and apparatus for implementing hardware resource allocation according to user-requested resource quantity
CN108933829A (en) A kind of load-balancing method and device
WO2019170011A1 (en) Task allocation method and device, and distributed storage system
US11316916B2 (en) Packet processing method, related device, and computer storage medium
WO2022111313A1 (en) Request processing method and micro-service system
KR101402367B1 (en) Efficient and cost-effective distributed call admission control
WO2016173452A1 (en) Method and apparatus for processing resolution task, and server
CN108388409B (en) Print request processing method, apparatus, computer device and storage medium
CN108124021B (en) Method, device and system for obtaining Internet Protocol (IP) address and accessing website
CN113760549A (en) Pod deployment method and device
WO2019034091A1 (en) Distribution method for distributed data computing, device, server and storage medium
CN112148426A (en) Bandwidth allocation method and device
US10154414B2 (en) Resource allocation
WO2020024207A1 (en) Service request processing method, device and storage system
US9537742B2 (en) Automatic adjustment of application launch endpoints
US10992517B1 (en) Dynamic distributed execution budget management system
CN114489463A (en) Method and device for dynamically adjusting QOS (quality of service) of storage volume and computing equipment
CN111782364A (en) Service calling method and device, electronic equipment and storage medium
CN110955522A (en) Resource management method and system for coordination performance isolation and data recovery optimization

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18855432

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28/09/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18855432

Country of ref document: EP

Kind code of ref document: A1