CN115640146A - Back-end service calling method and device, electronic equipment and program product - Google Patents

Back-end service calling method and device, electronic equipment and program product Download PDF

Info

Publication number
CN115640146A
CN115640146A (Application CN202211217820.7A)
Authority
CN
China
Prior art keywords
event
executed
thread
central server
api gateway
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211217820.7A
Other languages
Chinese (zh)
Inventor
黄丽芳
雷嘉健
周贤舜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lakala Payment Co ltd
Original Assignee
Lakala Payment Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lakala Payment Co ltd filed Critical Lakala Payment Co ltd
Priority to CN202211217820.7A priority Critical patent/CN115640146A/en
Publication of CN115640146A publication Critical patent/CN115640146A/en
Pending legal-status Critical Current

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Embodiments of the present disclosure disclose a back-end service calling method and apparatus, an electronic device, and a program product. The method comprises: receiving a call request from a client for a target service; registering the call request as an IO event and placing the IO event into a synchronous thread queue; polling the IO events in the synchronous thread queue with a synchronous thread started on a CPU core, and determining whether the currently polled IO event is a synchronously executed IO event or an asynchronously executed IO event; when the currently polled IO event is a synchronously executed IO event, processing it in the synchronous thread; and when the currently polled IO event is an asynchronously executed IO event, having the synchronous thread send it to the corresponding asynchronous thread for processing, and putting the IO event back into the synchronous thread queue once the asynchronous thread has finished executing it.

Description

Back-end service calling method and device, electronic equipment and program product
Technical Field
Embodiments of the present disclosure relate to the field of communication technologies, and in particular, to a back-end service calling method and apparatus, an electronic device, and a program product.
Background
An API (Application Programming Interface) gateway can take over all ingress traffic of a system or server and forward every user request to the corresponding back-end service. For example, an e-commerce system involves many back-end microservices, such as member, goods, and recommendation services, and users need some way to access these back-end services through clients. A simple approach is to allocate an independent domain name to each back-end service, but this forces every back-end service to re-implement the same logic, such as authentication, rate limiting, and permission verification, which complicates the back-end services; bringing each service online then requires operations involvement, applying for a domain name, configuring Nginx, and so on, making the process cumbersome and labor-intensive. Introducing an API gateway means the client only needs to interact with the gateway rather than communicate with each back-end service interface separately; however, introducing one more component also introduces one more potential failure point. How to implement a high-performance and stable API gateway is therefore one of the technical problems that currently needs to be solved.
Disclosure of Invention
The embodiment of the disclosure provides a method and a device for calling a back-end service, electronic equipment and a program product.
In a first aspect, a method for calling a backend service is provided in an embodiment of the present disclosure, where the method is executed on a current API gateway; the method comprises the following steps:
receiving a calling request of a client to a target service;
registering the call request of the target service as an IO event and placing the IO event into a synchronous thread queue corresponding to a CPU core, where the CPU core is located on the node where the current API gateway resides;
polling IO events in the synchronous thread queue by using a synchronous thread started on the CPU kernel, and judging whether the current polled IO events are synchronously executed IO events or asynchronously executed IO events;
and when the currently polled IO event is a synchronously executed IO event, processing it in the synchronous thread; when the currently polled IO event is an asynchronously executed IO event, having the synchronous thread send it to the corresponding asynchronous thread for processing, and putting the IO event back into the synchronous thread queue once the asynchronous thread has finished executing it.
In a second aspect, an embodiment of the present disclosure provides a multi-node deployment method for an API gateway, where the method is executed on a management and control server, and includes:
receiving registration requests of one or more API gateways based on a synchronous locking mode;
determining the API gateway that requests registration first as the current central server;
determining the API gateway that requests registration second as the next central server, where the next central server is switched to become the current central server after the current central server fails;
after the registration is successful, the API gateway executes the following steps:
receiving a calling request of a client to a target service;
registering the call request of the target service as an IO event and then putting the IO event into a synchronous thread queue corresponding to a CPU kernel;
polling IO events in the synchronous thread queue by using a synchronous thread, and judging whether the current polled IO event is a synchronously executed IO event or an asynchronously executed IO event;
and when the currently polled IO event is a synchronously executed IO event, executing it in the synchronous thread; when the currently polled IO event is an asynchronously executed IO event, sending it to the corresponding asynchronous thread for processing, and putting the IO event back into the synchronous thread queue once the asynchronous thread has finished executing it.
In a third aspect, the present disclosure provides a back-end service calling method, where the method is executed on an API gateway cluster system, where the API gateway cluster system includes multiple API gateways and a management and control server; the method comprises the following steps:
the management and control server receives registration requests of one or more API gateways based on a synchronous locking mode;
the management and control server determines the API gateway that requests registration first as the current central server;
the management and control server determines the API gateway that requests registration second as the next central server, where the next central server is switched to become the current central server after the current central server fails;
after successfully registering, the API gateway receives a call request from a client for a target service, registers the call request as an IO event, and places the IO event into a synchronous thread queue corresponding to a CPU core; it polls the IO events in the synchronous thread queue with a synchronous thread and determines whether the currently polled IO event is a synchronously executed IO event or an asynchronously executed IO event; when the currently polled IO event is a synchronously executed IO event, it is executed in the synchronous thread; when it is an asynchronously executed IO event, it is sent to the corresponding asynchronous thread for processing, and the IO event is put back into the synchronous thread queue once the asynchronous thread has finished executing it.
In a fourth aspect, an embodiment of the present disclosure provides a back-end service calling apparatus, where the apparatus runs on the current API gateway and comprises:
the first receiving module is configured to receive a calling request of a client to a target service;
a first registration module configured to register the call request of the target service as an IO event and place the IO event into a synchronous thread queue corresponding to a CPU core, where the CPU core is located on the node where the current API gateway resides;
the first polling module is configured to poll the IO events in the synchronous thread queue by using a synchronous thread started on the CPU kernel, and judge whether the currently polled IO events are synchronously executed IO events or asynchronously executed IO events;
and a first execution module configured to: when the currently polled IO event is a synchronously executed IO event, process it in the synchronous thread; when the currently polled IO event is an asynchronously executed IO event, have the synchronous thread send it to the corresponding asynchronous thread for processing, and return the IO event to the synchronous thread queue once the asynchronous thread has finished executing it.
In a fifth aspect, an embodiment of the present disclosure provides a multi-node deployment apparatus for an API gateway, where the apparatus is executed on a management and control server, and includes:
a second receiving module configured to receive registration requests of one or more API gateways based on a synchronous locking manner;
a first determining module configured to determine the API gateway that requests registration first as the current central server;
a second determining module configured to determine the API gateway that requests registration second as the next central server, where the next central server is switched to become the current central server after the current central server fails;
after the registration is successful, the API gateway is implemented as the following modules:
the third receiving module is configured to receive a calling request of a client to the target service;
the second registration module is configured to register the call request of the target service as an IO event and then place the IO event into a synchronous thread queue corresponding to a CPU kernel;
the second polling module is configured to poll the IO events in the synchronous thread queue by using a synchronous thread and judge whether the currently polled IO events are synchronously executed IO events or asynchronously executed IO events;
and a second execution module configured to: when the currently polled IO event is a synchronously executed IO event, execute it in the synchronous thread; when the currently polled IO event is an asynchronously executed IO event, send it to the corresponding asynchronous thread for processing, and return the IO event to the synchronous thread queue once the asynchronous thread has finished executing it.
In a sixth aspect, an embodiment of the present disclosure provides an API gateway cluster system, including multiple API gateways and a management and control server;
the management and control server receives registration requests of one or more API gateways based on a synchronous locking mode;
the management and control server determines the API gateway that requests registration first as the current central server;
the management and control server determines the API gateway that requests registration second as the next central server, where the next central server is switched to become the current central server after the current central server fails;
the API gateway receives a call request of a client to a target service after the call request of the target service is successfully registered, the call request of the target service is registered as an IO event and then is placed into a synchronous thread queue corresponding to a CPU kernel, the synchronous thread is used for polling the IO event in the synchronous thread queue, whether the current polled IO event is a synchronously executed IO event or an asynchronously executed IO event is judged, the synchronously executed IO event is executed in the synchronous thread after the current polled IO event is the synchronously executed IO event, the asynchronously executed IO event is sent to a corresponding asynchronous thread for processing after the current polled IO event is the asynchronously executed IO event, and the asynchronously executed IO event which is executed by the asynchronous thread is placed back to the synchronous thread queue.
These functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
In one possible design, the apparatus includes a memory for storing one or more computer instructions that enable the apparatus to perform the corresponding method described above, and a processor configured to execute the computer instructions stored in the memory. The apparatus may further include a communication interface for communicating with other devices or a communication network.
In a seventh aspect, an embodiment of the present disclosure provides an electronic device, including a memory and a processor, where the memory is used to store one or more computer instructions that support any of the above apparatuses in performing the corresponding methods described above, and the processor is configured to execute the computer instructions stored in the memory. The electronic device may further include a communication interface for communicating with other devices or a communication network.
In an eighth aspect, the present disclosure provides a computer-readable storage medium for storing computer instructions for use by any one of the apparatuses above, which includes computer instructions for performing any one of the methods described above.
In a ninth aspect, the disclosed embodiments provide a computer program product comprising computer instructions for implementing the steps of the method of any one of the above aspects when executed by a processor.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
in the embodiments of the present disclosure, when the API gateway is implemented, a synchronous thread polls the IO events registered in a synchronous thread queue and determines, based on a preset script configuration, whether the currently polled IO event is a synchronously executed IO event or an asynchronously executed IO event. A synchronously executed IO event is processed by the synchronous thread itself, while an asynchronously executed IO event is handed over to an asynchronous thread for processing. This avoids problems such as slow system response caused by long-running services blocking the synchronous thread. At the same time, because the synchronous thread dispatches work by polling and the asynchronous threads return their processing results to the queue, the synchronous and asynchronous threads cooperate, realizing an efficient API gateway.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of embodiments of the disclosure.
Drawings
Other features, objects, and advantages of embodiments of the disclosure will become more apparent from the following detailed description of non-limiting embodiments when taken in conjunction with the accompanying drawings. In the drawings:
FIG. 1 illustrates a flow diagram of a method of back-end service invocation in accordance with an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart of a method for multi-node deployment of an API gateway, according to an embodiment of the present disclosure;
FIG. 3 illustrates a flow diagram of a method of back-end service invocation according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a design architecture of an API gateway, according to an embodiment of the present disclosure;
fig. 5 is a block diagram illustrating a structure of a backend service invoking apparatus according to an embodiment of the present disclosure;
FIG. 6 illustrates a block diagram of a multi-node deployment apparatus of an API gateway, according to an embodiment of the present disclosure;
FIG. 7 shows a block diagram of an API gateway cluster system, according to an embodiment of the present disclosure;
FIG. 8 is a schematic block diagram of a computer system suitable for implementing the back-end service calling method and/or the multi-node deployment method of an API gateway according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily implement them. Also, for the sake of clarity, parts not relevant to the description of the exemplary embodiments are omitted in the drawings.
In the disclosed embodiments, it is to be understood that terms such as "including" or "having," etc., are intended to indicate the presence of the disclosed features, numbers, steps, behaviors, components, parts, or combinations thereof, and are not intended to preclude the possibility that one or more other features, numbers, steps, behaviors, components, parts, or combinations thereof may be present or added.
It should also be noted that the embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict. Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 shows a flowchart of a back-end service invoking method according to an embodiment of the present disclosure, as shown in fig. 1, the back-end service invoking method includes the following steps:
in step S101, a request for calling a target service from a client is received;
in step S102, the call request of the target service is registered as an IO event and then placed into a synchronous thread queue corresponding to a CPU core, where the CPU core is located on the node where the current API gateway resides;
in step S103, polling IO events in the synchronous thread queue by using a synchronous thread started on the CPU core, and determining whether the currently polled IO event is a synchronously executed IO event or an asynchronously executed IO event;
in step S104, when the currently polled IO event is a synchronously executed IO event, the synchronous thread processes it; when the currently polled IO event is an asynchronously executed IO event, the synchronous thread sends it to a corresponding asynchronous thread for processing, and the IO event is put back into the synchronous thread queue once the asynchronous thread has finished executing it.
As mentioned above, an API (Application Programming Interface) gateway can take over all ingress traffic of a system or server and forward every user request to the corresponding back-end service. Allocating an independent domain name to each back-end service forces every service to re-implement the same logic, such as authentication, rate limiting, and permission verification, and makes bringing services online cumbersome and labor-intensive. Introducing an API gateway means the client only needs to interact with the gateway rather than with each back-end service interface separately; however, one more component also means one more potential failure point, so implementing a high-performance and stable API gateway is one of the technical problems that currently needs to be solved.
In a microservice architecture, a large service is typically split into individual microservices, each of which typically provides its service externally in the form of a RESTful API. On the human-computer interaction side, however, data from different microservices needs to be displayed on one page, so a unified entry is needed to call the APIs of the corresponding microservices. The API gateway serves as this unified entry for multiple services, encapsulates the complex internal structure of the system, and may also provide other general API management and calling functions, such as authentication, rate limiting, and flow control.
From the perspective of deployment structure, in a microservice deployment that does not use an API gateway, the client interacts directly with a load balancer to call a service. However, this mode does not support dynamic scaling: every time a service comes online, the load balancer must be deployed or modified, services cannot be switched dynamically, and if a service goes offline, operations personnel must remove its address from the load balancer. Moreover, controls such as rate limiting and security for interface calls must be implemented by each microservice individually, which increases the complexity of the microservices and violates the single-responsibility principle of microservice design.
With an API gateway as the unified entry of the system, all microservices are integrated behind it, the client experience is friendly, and the complexity and differences of the system are shielded. The API gateway enables transparent dynamic scaling of microservices and can automatically circuit-break services that become unreachable, without manual intervention. As the unified entry of the system, it can also absorb the common functions of all microservices so that the responsibilities of each service are reduced as far as possible.
Prior-art API gateways typically work in a synchronous blocking mode. In such a gateway, each request is allocated a dedicated thread that is responsible for processing it, and the thread is not released back to the container thread pool until the response is returned to the client. If a back-end service call is time-consuming, the thread is blocked; during the blocking period the thread resource is occupied and can do nothing else, which easily makes service calls slow to respond or unresponsive.
The prior art also proposes an asynchronous non-blocking mode, but current API gateways of this kind remain largely theoretical and are relatively complex and difficult to implement.
The embodiments of the present disclosure implement a simple, easily realized API gateway in an asynchronous mode, and the back-end service calling method provided by the embodiments can be executed on the API gateway. The API gateway can start one synchronous thread per CPU core; the synchronous thread polls the IO events in the synchronous thread queue and determines whether each polled IO event is a synchronously executed IO event or an asynchronously executed IO event. A synchronously executed IO event is processed by the synchronous thread itself, while an asynchronously executed IO event is handed over to the corresponding asynchronous thread for processing.
In some embodiments, the current API gateway also starts a receiving thread that polls for call requests from clients to the target service and delegates each polled call request to a worker thread; the worker thread registers an IO event based on the call request and places the IO event into the synchronous thread queue for the synchronous thread to poll and process.
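The receiving-thread/worker-thread hand-off described above can be sketched as follows. This is a minimal Python illustration, not the patent's actual implementation; the queue layout and the event field names are assumptions for the sake of the example.

```python
import queue
from concurrent.futures import ThreadPoolExecutor

request_queue = queue.Queue()  # call requests from clients, polled by the receiving thread
sync_queue = queue.Queue()     # the synchronous thread queue of registered IO events
workers = ThreadPoolExecutor(max_workers=2)  # stands in for the worker threads

def worker_register(request):
    """A worker thread registers the call request as an IO event and places it
    into the synchronous thread queue (field names are illustrative)."""
    sync_queue.put({"event": "io", "service": request["service"]})

def receiving_thread(n_requests):
    """The receiving thread polls incoming call requests and delegates each to a worker."""
    for _ in range(n_requests):
        request = request_queue.get()
        # .result() is used here only to keep the demo deterministic
        workers.submit(worker_register, request).result()

request_queue.put({"service": "member-service"})
receiving_thread(1)
workers.shutdown()
registered = sync_queue.get()
print(registered)  # {'event': 'io', 'service': 'member-service'}
```

In a real gateway the receiving thread would loop indefinitely and would not wait on each worker; the explicit `.result()` call is only there so the single-request demo finishes in order.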
In some embodiments, whether the currently polled IO event is a synchronously executed IO event or an asynchronously executed IO event depends mainly on the service corresponding to the event, i.e., the service the client requests to call. In some embodiments, the determination may be based on a configuration in the service script corresponding to that service.
When the polled event is a synchronously executed IO event, the synchronous thread processes it; when it is an asynchronously executed IO event, the synchronous thread hands it over to the corresponding asynchronous thread for processing. It should be noted that different services may correspond to different asynchronous threads. If the asynchronous thread corresponding to the service that an asynchronously executed IO event needs to access has not yet been started, the synchronous thread may also start that asynchronous thread so that the event can be processed by it.
After the asynchronous thread finishes processing an asynchronously executed IO event, the completed task is placed back into the synchronous thread queue, so that the synchronous thread polls the completed task from the queue and performs the corresponding handling, for example returning the processing result to the client or proceeding to the next processing step.
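Taken together, the poll-classify-dispatch-requeue cycle described in the preceding paragraphs can be sketched as follows. This is a minimal Python illustration under assumed conventions; in particular, the event fields `is_async` and `done` and the use of a thread pool in place of per-service asynchronous threads are assumptions, not the patent's actual implementation.

```python
import queue
from concurrent.futures import ThreadPoolExecutor

event_queue = queue.Queue()                      # the "synchronous thread queue"
async_pool = ThreadPoolExecutor(max_workers=4)   # stands in for the asynchronous threads
results = []

def sync_loop(stop_after):
    """The synchronous thread: polls IO events and dispatches them."""
    handled = 0
    while handled < stop_after:
        event = event_queue.get()          # poll the next registered IO event
        if event.get("done"):              # a completed async event put back on the queue
            results.append(("completed", event["name"]))
            handled += 1
        elif event["is_async"]:            # asynchronously executed IO event:
            def run(ev=event):             # hand it to an asynchronous thread, which
                ev["done"] = True          # marks it done and puts it back in the queue
                event_queue.put(ev)
            async_pool.submit(run)
        else:                              # synchronously executed IO event:
            results.append(("sync", event["name"]))  # processed by the sync thread itself
            handled += 1

# Register two call requests as IO events and run the loop.
event_queue.put({"name": "fast-auth", "is_async": False})
event_queue.put({"name": "slow-payment", "is_async": True})
sync_loop(stop_after=2)
async_pool.shutdown()
print(results)  # [('sync', 'fast-auth'), ('completed', 'slow-payment')]
```

The key property the sketch demonstrates is that the synchronous loop never blocks on the slow event: it dispatches the asynchronous work and only touches it again when the finished event reappears in the queue.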
In the embodiments of the present disclosure, when the API gateway is implemented, a synchronous thread polls the IO events registered in a synchronous thread queue and determines, based on a preset script configuration, whether the currently polled IO event is a synchronously executed IO event or an asynchronously executed IO event. A synchronously executed IO event is processed by the synchronous thread itself, while an asynchronously executed IO event is handed over to an asynchronous thread for processing. This avoids problems such as slow system response caused by long-running services blocking the synchronous thread. At the same time, because the synchronous thread dispatches work by polling and the asynchronous threads return their processing results to the queue, the synchronous and asynchronous threads cooperate, realizing an efficient API gateway.
In an embodiment of the present disclosure, the method further includes:
receiving a registration request for one or more target services, where the registration request includes a service script corresponding to the target service, and the service script is configured with the service identifier and service address of the target service, the calling mode of the target service, and an identifier of whether the target service corresponds to a synchronously or asynchronously executed IO event;
and storing the service script of the target service under a specified directory so that the current API gateway can support client calls to the newly registered target service.
In this optional implementation, the API gateway can support calls to multiple target services. Different target services may first register with the API gateway; the registration request may carry the service script of the target service, and the service script records the relevant configuration information of the target service, such as the service identifier, the service address, the calling mode, and the identifier of whether the target service corresponds to a synchronously or asynchronously executed IO event. The service script can be stored in a specified directory and take effect immediately; that is, the client can then request the target service through the API gateway, and the API gateway supports the call by using the configuration information in the service script. For example, a client may send a call request for a target service to the API gateway through an access-layer HTTP socket. After the call request enters the polling queue of the receiving thread, the receiving thread delegates the polled call request to a worker thread; the worker thread registers an IO event based on the call request and places it into the synchronous thread queue. It should be noted that the IO event placed in the queue includes the service script corresponding to the target service; that is, the worker thread takes the service script of the target service out of the specified directory, registers it as part of the IO event, and places it into the synchronous thread queue for the synchronous thread to poll and take out.
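A service script of the kind described above might look like the following. This is a hedged sketch: the field names `service_id`, `service_address`, `call_mode`, and `execution`, and the JSON-file storage format, are hypothetical, since the patent only specifies which pieces of information the script carries, not their concrete form.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical service script carrying the configuration the patent describes:
# service identifier, service address, calling mode, and sync/async flag.
service_script = {
    "service_id": "member-service",
    "service_address": "http://10.0.0.12:8080/member",
    "call_mode": "http",
    "execution": "async",  # whether this service's IO events run synchronously or asynchronously
}

def register_service(script: dict, script_dir: Path) -> Path:
    """Store the service script under the specified directory so the gateway
    can support calls to the newly registered service immediately."""
    path = script_dir / f"{script['service_id']}.json"
    path.write_text(json.dumps(script))
    return path

def is_async(service_id: str, script_dir: Path) -> bool:
    """Worker and synchronous threads consult the stored script to classify an IO event."""
    script = json.loads((script_dir / f"{service_id}.json").read_text())
    return script["execution"] == "async"

script_dir = Path(tempfile.mkdtemp())
register_service(service_script, script_dir)
flag = is_async("member-service", script_dir)
print(flag)  # True
```

Storing one file per service under a watched directory matches the "takes effect immediately" behavior: a new registration is visible to the classification lookup without restarting the gateway.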
In an embodiment of the present disclosure, the synchronous thread determines, based on a service script of a target service corresponding to the IO event, whether the IO event is a synchronously executed IO event or an asynchronously executed IO event.
In an embodiment of the present disclosure, the method further includes:
after the current API gateway is started, submitting a registration request to a management and control server; the registration request comprises address information of the current API gateway;
and receiving registration confirmation information returned by the management and control server, wherein the registration confirmation information comprises information of whether the node where the current API gateway is located is registered as the current central server.
In this optional implementation manner, the API gateway may be deployed on cluster nodes; that is, an API gateway may be started on each of a plurality of nodes, the client may send a call request for the target service to any API gateway node in the cluster through the cluster management node, and the API gateway on that node processes the call request.
One central server may be selected from a cluster in which a plurality of API gateways are located, and the central server may collectively manage and allocate resources, such as traffic restrictions.
In the embodiment of the present disclosure, a self-election policy is adopted for setting the central server: the node whose API gateway registers with the management and control server first is determined as the current central server, the node whose API gateway registers second is determined as the next central server, and the other API gateways are ordinary API gateways. The management and control server determines the current central server and the next central server based on the registration order of the API gateways, and then sends each API gateway information on whether its node is the current central server; if a node is not the current central server, the management and control server may also return the relevant information of the current central server to that API gateway, so that the API gateway that is not the central server can request allocation of lock resources from the current central server based on that information. It can be understood that a lock resource is a resource that must be uniformly allocated among the plurality of API gateways and therefore allocated in a locking manner.
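The self-election policy described above can be sketched as follows; the class and field names are illustrative assumptions, not from the disclosure:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ControlServer:
    """Minimal sketch of the management and control server's self-election:
    first registrant -> current central server, second -> next central server,
    all later registrants -> ordinary API gateways."""
    current_central: Optional[str] = None
    next_central: Optional[str] = None
    gateways: list = field(default_factory=list)

    def register(self, gateway_addr: str) -> str:
        """Record a registering gateway and return the role assigned to it."""
        self.gateways.append(gateway_addr)
        if self.current_central is None:
            self.current_central = gateway_addr
            return "current_central"
        if self.next_central is None:
            self.next_central = gateway_addr
            return "next_central"
        return "ordinary"

srv = ControlServer()
roles = [srv.register(a) for a in ("10.0.0.1", "10.0.0.2", "10.0.0.3")]
print(roles)  # ['current_central', 'next_central', 'ordinary']
```

In the actual disclosure the registration confirmation returned to each gateway would also carry the current central server's contact information when the registrant is not the central server, so that it can later request lock resources.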
The current central server and the next central server can be understood as primary and standby servers: the current central server is the primary and the next central server is the backup. A long-lived connection may be established between them, and the current central server synchronizes the relevant information of the resource allocation it performs to the next central server, so that when the current central server cannot provide services normally, the next central server is switched to become the current central server. It should be noted that the current central server, the next central server, and the other API gateways each also provide normal API gateway services; when a resource allocation needs to be locked, a lock is requested from the current central server, which allocates the corresponding lock resource.
In an embodiment of the present disclosure, when the current API gateway is the first API gateway registered with the management and control server, it is determined as the current central server; when it is the second API gateway registered with the management and control server, it is determined as the next central server. The next central server is switched to become the current central server after the current central server becomes abnormal.
In an embodiment of the present disclosure, when the node where the current API gateway is located is a current central server, the method further includes:
receiving distributed lock resource requests sent by other nodes; other distributed API gateways run on the other nodes;
allocating the requested lock resources based on the distributed lock resource request and synchronizing the allocated lock resource information to a next central server; and the next central server is switched to the current central server after the current central server is abnormal.
In this alternative implementation, as described above, the current central server receives requests from the other API gateways for distributed lock resources and uniformly allocates the distributed lock resources to each API gateway. Meanwhile, the current central server synchronizes the currently allocated lock resource information to the next central server, so that the lock-allocation information on the next central server stays in step with the current central server; even if the current central server becomes abnormal, the next central server can be switched to become the current central server and continue to perform the corresponding duties.
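A minimal sketch of this lock allocation with mirroring to the next central server; the in-memory `backup` dict stands in for synchronization over the long-lived link, and all names are assumptions:

```python
class CentralServer:
    """Sketch: the current central server grants lock resources to requesting
    gateways and mirrors each grant/release so a failover loses no state."""

    def __init__(self):
        self.granted = {}  # lock name -> holder gateway
        self.backup = {}   # state mirrored to the next central server

    def request_lock(self, lock_name: str, gateway: str) -> bool:
        """Grant the lock if free; the caller must retry/wait otherwise."""
        if lock_name in self.granted:
            return False
        self.granted[lock_name] = gateway
        self.backup[lock_name] = gateway  # stands in for syncing to the standby
        return True

    def release_lock(self, lock_name: str) -> None:
        self.granted.pop(lock_name, None)
        self.backup.pop(lock_name, None)

central = CentralServer()
print(central.request_lock("rate-limit-table", "gw-1"))  # True: granted
print(central.request_lock("rate-limit-table", "gw-2"))  # False: gw-1 holds it
```

Because every grant is mirrored before the caller proceeds, the next central server's copy of `backup` is sufficient to take over allocation after a failover.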
In an embodiment of the present disclosure, when the node where the current API gateway is located is not the current central server, the method further includes:
and responding to a demand event of the distributed lock resource, and sending a distributed lock resource request to the current central server so that the current central server allocates the requested lock resource for the current API gateway.
In this optional implementation, as described above, when an API gateway that is not the current central server generates a demand for a distributed lock resource, it may request the current central server to allocate the distributed lock resource and receive the allocation from the current central server. The API gateways that are not the current central server include the next central server and the other nodes.
In an embodiment of the present disclosure, the number of the synchronization threads started on the current API gateway is related to the number of CPU cores on the node where the current API gateway is located.
In this optional implementation, the number of synchronous threads may be related to the number of CPU cores of the node where the current API gateway is located: if there is only one CPU core, only one synchronous thread is started; if there are multiple CPU cores, multiple synchronous threads are started. The number of asynchronous threads may be based on the number of backend services; for example, one backend service may correspond to one asynchronous thread.
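Under these assumptions, the thread counts might be derived as follows; this is a sketch in which `os.cpu_count()` stands in for detecting the node's CPU cores:

```python
import os

def sync_thread_count() -> int:
    """One synchronous thread per CPU core, as described above
    (falls back to 1 when the core count cannot be determined)."""
    return os.cpu_count() or 1

def async_thread_count(backend_services) -> int:
    """One asynchronous thread per backend service -- a simplifying
    assumption; the disclosure only says the count 'may be based on'
    the number of backend services."""
    return len(backend_services)

print(sync_thread_count() >= 1)                                    # True
print(async_thread_count(["member", "goods", "recommendation"]))   # 3
```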
Fig. 2 is a flowchart illustrating a multi-node deployment method of an API gateway according to an embodiment of the present disclosure, where as shown in fig. 2, the multi-node deployment method of the API gateway includes the following steps:
in step S201, receiving registration requests of one or more API gateways based on a synchronous locking manner;
in step S202, determining the first API gateway requesting registration as the current central server;
in step S203, determining the second API gateway requesting registration as the next central server; the next central server is switched to become the current central server after the current central server becomes abnormal;
after the registration is successful, the API gateway executes the following steps:
receiving a calling request of a client to a target service;
registering the call request of the target service as an IO event and then putting the IO event into a synchronous thread queue corresponding to a CPU core;
polling IO events in the synchronous thread queue by using a synchronous thread, and judging whether the currently polled IO events are synchronously executed IO events or asynchronously executed IO events;
and when the currently polled IO event is a synchronously executed IO event, executing it in the synchronous thread; when the currently polled IO event is an asynchronously executed IO event, sending it to the corresponding asynchronous thread for processing, wherein the asynchronously executed IO event completed by the asynchronous thread is put back into the synchronous thread queue.
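The polling-and-dispatch cycle in the steps above can be sketched with ordinary queues and threads; the event fields (`mode`, `service`, `done`) and helper names are illustrative assumptions, not part of the disclosure:

```python
import queue
import threading

sync_queue = queue.Queue()                 # the synchronous thread queue
async_queues = {"order": queue.Queue()}    # one queue per backend-service async thread
results = []

def async_worker(service: str) -> None:
    """Async thread: process one delegated IO event, then put it back."""
    ev = async_queues[service].get()
    ev["done"] = True                      # simulate the backend call completing
    sync_queue.put(ev)                     # completed event returns to the sync queue

def sync_thread_once() -> None:
    """One iteration of the synchronous thread's polling loop."""
    ev = sync_queue.get()
    if ev.get("done"):
        results.append(("completed", ev["id"]))       # e.g. respond to the client
    elif ev["mode"] == "sync":
        results.append(("sync-handled", ev["id"]))    # handled in the sync thread
    else:
        async_queues[ev["service"]].put(ev)           # hand off to the async thread

t = threading.Thread(target=async_worker, args=("order",))
t.start()
sync_queue.put({"id": 1, "mode": "sync", "service": "order"})
sync_queue.put({"id": 2, "mode": "async", "service": "order"})
sync_thread_once()   # handles the synchronous event in place
sync_thread_once()   # dispatches the asynchronous event
t.join()
sync_thread_once()   # picks up the completed asynchronous event
print(results)       # [('sync-handled', 1), ('completed', 2)]
```

The key property the sketch demonstrates is that the synchronous thread never blocks on the asynchronous work: it only polls the queue, and the completed asynchronous event re-enters the same queue as a new item.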
As mentioned above, an API (Application Program Interface) gateway can take over all the ingress traffic of a system or a server and forward every user request to the corresponding backend service. For example, an e-commerce system involves many backend microservices such as member, goods, and recommendation services, and users access these backend services through clients. If the services are simple, an independent domain name can be allocated to each backend service, but this approach means each backend service must repeatedly implement the same logic, such as authentication, rate limiting, and authority verification, which complicates the backend services; bringing each backend service online then requires operation and maintenance participation, applying for a domain name, configuring Nginx, and so on, so the process is cumbersome and the labor cost is high. By introducing an API gateway, the client only needs to interact with the API gateway rather than with each backend service interface; however, introducing one more component introduces one more potential failure point, so how to implement a high-performance, stable API gateway is one of the technical problems that currently needs to be solved.
In the microservice architecture, large services are typically split into individual microservices, each of which usually provides services externally in the form of a RESTful API. In terms of human-computer interaction, however, data from different microservices need to be displayed on one page, so a unified entry is needed to call the APIs of the microservices. In this scenario the API gateway serves as the unified entry for the plurality of services, encapsulates the internal complexity of the system, and may also carry other general API management/calling functions such as authentication, rate limiting, and flow control.
In terms of deployment structure, in a microservice deployment mode without an API gateway, the client interacts directly with a load balancer to complete a service call. However, this mode does not support dynamic expansion: a load balancer must be deployed or modified every time a service goes online, and services cannot be switched dynamically. Moreover, controls such as rate limiting and security for interface calls must be implemented by each microservice, which increases the complexity of the microservices and violates the single-responsibility principle of microservice design.
As the unified entry of the system, the API gateway integrates the microservices, is client-friendly, and shields the complexity and differences of the system. The API gateway can realize imperceptible dynamic capacity expansion of microservices and can automatically circuit-break services that cannot be accessed, without manual participation; as the unified entry, common functions of all the microservices can be moved into the API gateway, so that the responsibility of each service is reduced as far as possible.
Prior-art API gateways are typically in a synchronous blocking mode: every time a request arrives, a thread is allocated to handle that request exclusively, and the thread is not released back to the container thread pool until the response returns to the client. If a backend service call is time-consuming, the thread is blocked; during the blocking period the thread resource is occupied and can do nothing else, which easily causes service calls to respond slowly or not at all.
The prior art also proposes an asynchronous mode, but current asynchronous API gateways remain largely theoretical and are relatively complex and difficult to implement.
In an embodiment of the present disclosure, the multi-node deployment method of the API gateway is executed on a management and control server.
The embodiment of the disclosure realizes a simple, easy-to-implement API gateway in an asynchronous mode, and the backend service calling method provided by the embodiment of the disclosure can be executed on the API gateway. The API gateway may start one synchronous thread for each CPU core; the synchronous thread polls the IO events in the synchronous thread queue and determines whether the polled IO event is a synchronously executed IO event or an asynchronously executed IO event. When it is a synchronously executed IO event, the synchronous thread processes it; when it is an asynchronously executed IO event, the synchronous thread hands it over to the corresponding asynchronous thread for processing.
In some embodiments, the current API gateway further initiates a receiving thread, configured to poll a received call request of the client to the target service, delegate the polled call request to the worker thread, register, by the worker thread, an IO event based on the call request, and place the IO event in a synchronous thread queue for call processing by the synchronous thread.
In some embodiments, whether the currently polled IO event is a synchronously executed IO event or an asynchronously executed IO event depends mainly on a service corresponding to the event, i.e., a service requested to be invoked by the client. In some embodiments, the determination may be based on a configuration in a business script corresponding to the service.
When the currently polled IO event is a synchronously executed IO event, the synchronous thread processes it; when the currently polled IO event is an asynchronously executed IO event, it is handed over to the corresponding asynchronous thread for processing. It should be noted that different services may correspond to different asynchronous threads. When the asynchronous thread corresponding to the service that an asynchronously executed IO event needs to access has not been started, the synchronous thread may also start that asynchronous thread, so that the asynchronously executed IO event is processed by it.
After the asynchronous thread processes the asynchronously executed IO event, the execution completion task of the asynchronously executed IO event is placed in the synchronous thread queue, so that the synchronous thread polls the execution completion task from the synchronous thread queue, and performs corresponding processing, for example, returning the processing result to the client, or performing the next processing, and the like.
In the embodiment of the disclosure, when the API gateway is implemented, a synchronous thread polls the IO events registered in the synchronous thread queue and, according to pre-written service scripts configured for service invocation, determines whether the currently polled IO event is a synchronously executed IO event or an asynchronously executed IO event. When it is a synchronously executed IO event, the synchronous thread processes it; when it is an asynchronously executed IO event, it is handed over to an asynchronous thread for processing. This avoids problems such as slow system response caused by long-running services blocking the synchronous thread. Meanwhile, cooperative work between the synchronous and asynchronous threads is achieved as follows: the synchronous thread polls the queue, the asynchronous threads wait for the synchronous thread's dispatch, and the asynchronous threads return their processing results to the queue, thereby realizing an efficient API gateway.
The API gateway can be deployed on cluster nodes; that is, an API gateway can be started on each of a plurality of nodes, the client can send a call request for the target service to any API gateway node in the cluster through the cluster management node, and the API gateway on that node processes the call request.
One central server may be selected from a cluster in which a plurality of API gateways are located, and the central server may collectively manage and allocate resources, for example, limit traffic.
The current central server and the next central server can be understood as primary and standby servers: the current central server is the primary and the next central server is the backup. A long-lived connection may be established between them, and the current central server synchronizes the relevant information of the resource allocation it performs to the next central server, so that when the current central server cannot provide services normally, the next central server is switched to become the current central server. It should be noted that the current central server, the next central server, and the other API gateways each also provide normal API gateway services; when a resource allocation needs to be locked, a lock is requested from the current central server, which allocates the corresponding lock resource.
In the embodiment of the present disclosure, a self-election policy is adopted for the central server: the node whose API gateway registers with the management and control server first is determined as the current central server, the node whose API gateway registers second is determined as the next central server, and the other API gateways are ordinary API gateways. The management and control server determines the current central server and the next central server based on the registration order of the API gateways, and then sends each API gateway information on whether its node is the current central server; if a node is not the current central server, the management and control server may also return the relevant information of the current central server to that API gateway, so that the API gateway that is not the central server can request allocation of lock resources from the current central server based on that information. It can be understood that a lock resource is a resource that must be uniformly allocated among the plurality of API gateways and therefore allocated in a locking manner.
In addition, the embodiment of the disclosure realizes the clustering of the API gateway, realizes a simple and efficient mode of central unified management resources, improves the exception handling capability, and improves the service efficiency of the API gateway of the cluster.
In an embodiment of the present disclosure, the method further comprises:
responding to an abnormal restart completion event, and clearing the registration information of the local existing API gateway; the API gateway registration information comprises information of a current central server and a next central server;
and updating the local API gateway registration information based on the actual current central server information and the information of the next central server.
In this alternative implementation, as described above, the management and control server determines the first registered API gateway as the current central server, determines the second registered API gateway as the next central server, and switches the next central server to the current central server after the current central server is abnormal.
An abnormality of the management and control server does not affect the current central server. Because every registration record carries fields for the current central server and the next central server, the management and control server can, after restarting, clear its API gateway registry and automatically refill the actual IP addresses of the current central server and the next central server, thereby ensuring that the central-server roles are independent of abnormalities of the management and control server.
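A minimal sketch of this restart recovery, assuming each re-registering gateway reports its role in hypothetical `addr`/`role` fields:

```python
def rebuild_registry_after_restart(registrations):
    """Sketch of the recovery described above: after a restart the management
    and control server clears its local API-gateway registry, then refills the
    central-server fields from what each re-registering gateway reports.
    The field names are assumptions, not from the disclosure."""
    registry = {"current_central": None, "next_central": None, "gateways": []}
    for reg in registrations:
        registry["gateways"].append(reg["addr"])
        if reg.get("role") == "current_central":
            registry["current_central"] = reg["addr"]
        elif reg.get("role") == "next_central":
            registry["next_central"] = reg["addr"]
    return registry

regs = [
    {"addr": "10.0.0.2", "role": "current_central"},
    {"addr": "10.0.0.3", "role": "next_central"},
    {"addr": "10.0.0.4"},  # ordinary gateway
]
print(rebuild_registry_after_restart(regs)["current_central"])  # 10.0.0.2
```

Because the roles are recovered from the gateways themselves rather than from the control server's pre-crash state, the recovery works regardless of what the control server lost.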
In an embodiment of the present disclosure, the method further includes:
receiving a registration request for at least one target service; the registration request includes a service script corresponding to the target service; the service script is configured with a service identifier and a service address of the target service, a calling mode of the target service, and an identifier indicating whether the IO event corresponding to the target service is executed synchronously or asynchronously;
and storing the service script of the target service under a designated directory, so that the current API gateway can support calls by the client to the newly registered target service.
In this optional implementation manner, the API gateway may support calls to multiple target services. A target service first registers with the API gateway, and the registration request may carry a service script of the target service. The service script records the relevant configuration information of the target service, such as the service identifier, the service address, the call mode of the target service, and an identifier indicating whether the IO event corresponding to the target service is executed synchronously or asynchronously. The service script may be stored under a designated directory and takes effect immediately; that is, the client may then request to call the target service through the API gateway, and the API gateway supports the call based on the configuration information in the service script. For example, the client may send a call request for the target service to the API gateway through an access-layer HTTP socket. After the call request enters the polling queue of a receiving thread, the receiving thread delegates the polled call request to a worker thread, and the worker thread registers an IO event based on the call request and puts the IO event into the synchronous thread queue. It should be noted that the IO event put into the synchronous thread queue includes the service script corresponding to the target service; that is, the worker thread takes the service script corresponding to the target service out of the designated directory, registers it as part of the IO event, and puts it into the synchronous thread queue for the synchronous thread to poll and take out.
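The store-under-a-directory flow just described might be sketched as follows; the JSON file layout, directory handling, and function name are assumptions, since the disclosure only says the script is stored under a designated directory and takes effect immediately:

```python
import json
import tempfile
from pathlib import Path

def register_service(script_dir: Path, script: dict) -> Path:
    """Persist a target service's script under the designated directory.
    Because the gateway reads scripts from this directory per request,
    the newly registered service takes effect immediately."""
    path = script_dir / f"{script['service_id']}.json"
    path.write_text(json.dumps(script))
    return path

with tempfile.TemporaryDirectory() as d:
    p = register_service(
        Path(d),
        {"service_id": "member",
         "service_address": "http://10.0.0.9/member",
         "io_mode": "sync"},
    )
    loaded = json.loads(p.read_text())  # what the worker thread would take out
    print(loaded["service_id"])         # member
```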
In an embodiment of the present disclosure, the synchronous thread determines, based on a service script of a target service corresponding to the IO event, whether the IO event is a synchronously executed IO event or an asynchronously executed IO event.
In an embodiment of the present disclosure, when the node where the current API gateway is located is the current central server, the method further includes:
receiving distributed lock resource requests sent by other nodes; other distributed API gateways run on the other nodes;
allocating the requested lock resources based on the distributed lock resource request and synchronizing the allocated lock resource information to a next central server; and the next central server is switched to the current central server after the current central server is abnormal.
In this alternative implementation, as described above, the current central server receives requests from the other API gateways for distributed lock resources and uniformly allocates the distributed lock resources to each API gateway. Meanwhile, the current central server synchronizes the currently allocated lock resource information to the next central server, so that the lock-allocation information on the next central server stays in step with the current central server; even if the current central server becomes abnormal, the next central server can be switched to become the current central server and continue to perform the corresponding duties.
In an embodiment of the present disclosure, when the node where the current API gateway is located is not the current central server, the method further includes:
and responding to a demand event of the distributed lock resource, and sending a distributed lock resource request to the current central server so that the current central server allocates the requested lock resource for the current API gateway.
In this optional implementation, as described above, when an API gateway that is not the current central server generates a demand for a distributed lock resource, it may request the current central server to allocate the distributed lock resource and receive the allocation from the current central server. The API gateways that are not the current central server include the next central server and the other nodes.
In an embodiment of the present disclosure, the number of the synchronization threads started on the current API gateway is related to the number of CPU cores on the node where the current API gateway is located.
In this optional implementation, the number of synchronous threads may be related to the number of CPU cores of the node where the current API gateway is located: if there is only one CPU core, only one synchronous thread is started; if there are multiple CPU cores, multiple synchronous threads are started. The number of asynchronous threads may be based on the number of backend services; for example, one backend service may correspond to one asynchronous thread.
Fig. 3 shows a flowchart of a back-end service invoking method according to an embodiment of the present disclosure, as shown in fig. 3, the back-end service invoking method includes the following steps:
in step S301, the management and control server receives registration requests of one or more API gateways based on a synchronous locking manner;
in step S302, the management and control server determines the first API gateway requesting registration as the current central server;
in step S303, the management and control server determines the second API gateway requesting registration as the next central server; the next central server is switched to become the current central server after the current central server becomes abnormal;
in step S304, after successful registration, the API gateway receives a call request of a client for a target service, registers the call request as an IO event, and puts the IO event into the synchronous thread queue corresponding to a CPU core; it polls the IO events in the synchronous thread queue with a synchronous thread and determines whether the currently polled IO event is a synchronously executed IO event or an asynchronously executed IO event; when the currently polled IO event is a synchronously executed IO event, it is executed in the synchronous thread, and when it is an asynchronously executed IO event, it is sent to the corresponding asynchronous thread for processing, with the asynchronously executed IO event completed by the asynchronous thread being put back into the synchronous thread queue.
As mentioned above, an API (Application Program Interface) gateway can take over all the ingress traffic of a system or a server and forward every user request to the corresponding backend service. For example, an e-commerce system involves many backend microservices such as member, goods, and recommendation services, and users access these backend services through clients. If the services are simple, an independent domain name can be allocated to each backend service, but this approach means each backend service must repeatedly implement the same logic, such as authentication, rate limiting, and authority verification, which complicates the backend services; bringing each backend service online then requires operation and maintenance participation, applying for a domain name, configuring Nginx, and so on, so the process is cumbersome and the labor cost is high. By introducing an API gateway, the client only needs to interact with the API gateway rather than with each backend service interface; however, introducing one more component introduces one more potential failure point, so how to implement a high-performance, stable API gateway is one of the technical problems that currently needs to be solved.
In the microservice architecture, large services are usually split into separate microservices, each of which usually provides services externally in the form of a RESTful API. In terms of human-computer interaction, however, data from different microservices need to be displayed on one page, so a unified entry is needed to call the APIs of the microservices. In this scenario the API gateway serves as the unified entry for the plurality of services, encapsulates the internal complexity of the system, and may also carry other general API management/calling functions such as authentication, rate limiting, and flow control.
In terms of deployment structure, in a microservice deployment mode without an API gateway, the client interacts directly with a load balancer to complete a service call. However, this mode does not support dynamic expansion: a load balancer must be deployed or modified every time a service goes online, and services cannot be switched dynamically. Moreover, controls such as rate limiting and security for interface calls must be implemented by each microservice, which increases the complexity of the microservices and violates the single-responsibility principle of microservice design.
As the unified entry of the system, the API gateway integrates the microservices, is client-friendly, and shields the complexity and differences of the system. The API gateway can realize imperceptible dynamic capacity expansion of microservices and can automatically circuit-break services that cannot be accessed, without manual participation; as the unified entry, common functions of all the microservices can be moved into the API gateway, so that the responsibility of each service is reduced as far as possible.
Prior-art API gateways are typically in a synchronous blocking mode: every time a request arrives, a thread is allocated to handle that request exclusively, and the thread is not released back to the container thread pool until the response returns to the client. If a backend service call is time-consuming, the thread is blocked; during the blocking period the thread resource is occupied and can do nothing else, which easily causes service calls to respond slowly or not at all.
The prior art also proposes an asynchronous mode, but current asynchronous API gateways remain largely theoretical and are relatively complex and difficult to implement.
In an embodiment of the present disclosure, the multi-node deployment method of the API gateway is executed on a management and control server.
The embodiment of the disclosure realizes a simple, easy-to-implement API gateway in an asynchronous mode, and the backend service calling method provided by the embodiment of the disclosure can be executed on the API gateway. The API gateway may start one synchronous thread for each CPU core; the synchronous thread polls the IO events in the synchronous thread queue and determines whether the polled IO event is a synchronously executed IO event or an asynchronously executed IO event. When it is a synchronously executed IO event, the synchronous thread processes it; when it is an asynchronously executed IO event, the synchronous thread hands it over to the corresponding asynchronous thread for processing.
In some embodiments, the API gateway also starts a receiving thread, which polls the call requests received from clients for target services and delegates each polled call request to a worker thread; the worker thread registers an IO event based on the call request and places it into the synchronous thread queue for processing by the synchronous thread.
In some embodiments, whether the currently polled IO event is a synchronously or an asynchronously executed IO event depends mainly on the service corresponding to the event, i.e., the service the client requested to invoke. In some embodiments, the determination may be made based on a configuration in the business script corresponding to that service.
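A minimal sketch of such a per-service configuration lookup follows. The registry layout, service names, addresses, and field names are assumptions for illustration, not the patent's actual business-script format:

```python
# Hypothetical business-script registry: each registered target service is
# assumed to carry a flag saying whether its IO events are executed
# synchronously or asynchronously.
BUSINESS_SCRIPTS = {
    "quote":   {"address": "http://10.0.0.5:8080/quote", "call_mode": "http", "async": False},
    "payment": {"address": "http://10.0.0.6:8080/pay",   "call_mode": "http", "async": True},
}

def is_async_event(service_id: str) -> bool:
    """Return the sync/async flag configured in the service's business script."""
    return BUSINESS_SCRIPTS[service_id]["async"]
```

With such a registry, the synchronous thread only needs one dictionary lookup per polled event to decide whether to process it inline or delegate it.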
When the currently polled event is a synchronously executed IO event, the synchronous thread processes it itself; when it is an asynchronously executed IO event, the synchronous thread hands it over to the corresponding asynchronous thread. Note that different services may correspond to different asynchronous threads. If the asynchronous thread corresponding to the service that an asynchronously executed IO event needs to access has not yet been started, the synchronous thread may also start that asynchronous thread so the event can be processed by it.
After the asynchronous thread finishes processing an asynchronously executed IO event, it places a completion task for that event into the synchronous thread queue, so that the synchronous thread polls the completion task from the queue and performs the corresponding follow-up, for example returning the processing result to the client or carrying out the next processing step.
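The cooperation described above can be sketched as two loops sharing a queue. This is an illustrative model only, assuming hypothetical names (`IOEvent`, `sync_thread_loop`, `async_thread_loop`) and a simplified event record, not the patent's implementation:

```python
import queue
import threading

class IOEvent:
    """Minimal IO-event record; fields are illustrative, not the patent's format."""
    def __init__(self, service, payload, is_async):
        self.service = service      # target service the client asked to invoke
        self.payload = payload
        self.is_async = is_async    # flag taken from the service's business script
        self.result = None
        self.done = False           # set once an asynchronous thread has finished it

def sync_thread_loop(sync_queue, async_queues, results, handle):
    """One such loop runs per CPU core: poll the queue, handle sync events
    inline, delegate async events, and pick up completed async events."""
    while True:
        event = sync_queue.get()
        if event is None:                 # shutdown sentinel
            break
        if event.done:                    # completion task put back by a worker
            results.append(event.result)  # e.g. return the result to the client
        elif not event.is_async:          # synchronously executed IO event
            event.result = handle(event)
            results.append(event.result)
        else:                             # asynchronously executed IO event
            async_queues[event.service].put(event)

def async_thread_loop(service_queue, sync_queue, handle):
    """Per-service asynchronous worker: process the event, then place its
    completion task back into the synchronous thread queue."""
    while True:
        event = service_queue.get()
        if event is None:
            break
        event.result = handle(event)
        event.done = True
        sync_queue.put(event)             # completion returns to the sync queue
```

The key property the sketch shows is that the synchronous thread never blocks on a slow service: it only ever dequeues, dispatches, and forwards, while long-running work happens in the per-service worker.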
In the embodiment of the disclosure, the API gateway uses a synchronous thread to poll the IO events registered in the synchronous thread queue, and a pre-written service-call script is configured to determine whether the currently polled IO event is a synchronously or an asynchronously executed IO event. A synchronously executed IO event is processed by the synchronous thread; an asynchronously executed IO event is handed to an asynchronous thread for processing. This avoids the slow system response caused by long-running services blocking the synchronous thread. At the same time, because the synchronous thread distributes work by polling and the asynchronous threads return their results through the same queue, the synchronous and asynchronous threads cooperate smoothly, yielding an efficient API gateway.
The API gateway can be deployed on cluster nodes; that is, an API gateway can be started on each of multiple nodes, and a client can send a call request for a target service, via the cluster management node, to any API gateway node in the cluster, where the API gateway on that node processes the request.
One central server may be elected from the cluster in which the API gateways are located, and this central server uniformly manages and allocates shared resources, such as traffic limits.
The current central server and the next central server can be understood as a master server and a standby server: the current central server is the master and the next central server is the backup. A long-lived connection may be established between them, over which the current central server synchronizes information about the resource allocations it performs to the next central server, so that when the current central server can no longer provide service normally, the next central server is switched in as the current central server. Note that the current central server and the next central server, like the other API gateways, also provide normal API gateway service; when a resource allocation needs to be locked, a gateway requests a lock from the central server, and the central server allocates the corresponding lock resource.
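The master/standby behavior might be sketched as follows: the current central server allocates a shared, lock-protected resource (a traffic quota is used here as an example) and mirrors every allocation to the standby so a switchover loses no state. Class and field names are assumptions for illustration, not the patent's implementation:

```python
import threading

class CentralServer:
    """Sketch: allocate lock-protected quota and mirror state to the standby."""
    def __init__(self, capacity, standby=None):
        self._lock = threading.Lock()
        self.capacity = capacity
        self.allocations = {}      # gateway_id -> quota granted so far
        self.standby = standby     # the 'next central server', kept in sync

    def allocate(self, gateway_id, requested):
        with self._lock:           # lock-resource allocation is serialized
            used = sum(self.allocations.values())
            granted = min(requested, self.capacity - used)
            if granted <= 0:
                return 0
            self.allocations[gateway_id] = self.allocations.get(gateway_id, 0) + granted
            if self.standby is not None:   # synchronize over the long-lived link
                self.standby.allocations = dict(self.allocations)
            return granted
```

Because every successful allocation is mirrored before the call returns, the standby can take over with an up-to-date allocation table.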
In the embodiment of the present disclosure, a self-election policy is adopted for the central server: the node whose API gateway registers with the management and control server first is determined to be the current central server, the node whose API gateway registers second is determined to be the next central server, and the remaining API gateways are ordinary gateways. Based on the registration order, the management and control server determines which nodes host the current and next central servers, and then informs each API gateway whether its node is the current central server; if it is not, the management and control server also returns information about the current central server, so that the non-central gateway can request allocation of lock resources from it. It can be understood that a lock resource is a resource that must be allocated uniformly across the API gateways and therefore allocated under a lock.
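The self-election policy can be sketched as a registration counter on the management and control server. The class name, reply format, and failover helper are illustrative assumptions, not the patent's protocol:

```python
import threading

class ControlServer:
    """Sketch of self-election: first registrant becomes the current central
    server, the second becomes the next (standby) central server, and later
    registrants are ordinary gateways told where the current central is."""
    def __init__(self):
        self._lock = threading.Lock()   # registrations use a synchronous lock
        self.order = []                 # gateways in registration order

    def register(self, gateway_id):
        with self._lock:
            self.order.append(gateway_id)
            if len(self.order) == 1:
                return {"role": "current_central"}
            if len(self.order) == 2:
                return {"role": "next_central", "current": self.order[0]}
            # Ordinary gateways request lock resources from the current central.
            return {"role": "ordinary", "current": self.order[0]}

    def failover(self):
        """When the current central server fails, the next one takes over."""
        with self._lock:
            self.order.pop(0)
            return self.order[0] if self.order else None
```

Serializing registrations under a lock is what makes "first" and "second" well-defined even when several gateways start at once.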
In addition, the embodiment of the disclosure clusters the API gateway, provides a simple and efficient way of managing resources centrally and uniformly, improves exception-handling capability, and improves the service efficiency of the clustered API gateways.
The technical terms and technical features involved in fig. 3 and its related embodiments are the same as or similar to those shown in figs. 1-2 and their related embodiments; for their explanation and description, reference may be made to the explanation of figs. 1-2 and their related embodiments above, which will not be repeated here.
Fig. 4 shows a schematic design architecture of an API gateway according to an embodiment of the present disclosure. As shown in fig. 4, at the core of the API gateway, each CPU core starts one synchronous IO-thread. Received external requests are polled by an accept thread, which delegates each polled request to a work thread; the work thread registers an IO event based on the external request and places it into the synchronous thread queue, and the IO-thread polls that queue and processes each polled IO event accordingly. When an IO-thread has just started, the first item placed into the synchronous thread queue is the entry script, which parses the data in the external request, obtains the IP address of the sender, and so on. Authentication verifies whether the external request has the required permission. The business script performs the actual service-interface call and may be handed to an asynchronous thread for processing. Conversion and routing determine which cluster the request should be sent to for execution, including an IDC machine room, a public cloud, a container, and the like; return processing sends the request's processing result back to the client. All of the above are functions implemented by the API gateway. Each API gateway must register with the management and control server; relevant personnel can configure the gateway through the configuration center, and the configuration information can be persisted in a database.
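The stage sequence fig. 4 describes can be sketched as a simple pipeline. The stage names come from the figure; the function bodies are placeholder assumptions, not the patent's implementation:

```python
# Pipeline sketch: entry script -> authentication -> business script ->
# conversion/routing -> return processing.
def entry_script(request):
    request.setdefault("client_ip", "0.0.0.0")   # parse request data, sender IP
    return request

def authenticate(request):
    if not request.get("token"):                 # verify the request's permission
        raise PermissionError("request lacks permission")
    return request

def business_script(request):
    request["result"] = f"called {request['service']}"  # would invoke the service
    return request

def convert_and_route(request):
    # Decide which cluster executes the call (IDC room, public cloud, container...).
    return {"cluster": request.get("cluster", "default"), "body": request["result"]}

def handle_request(request):
    """Run the request through the gateway stages and return the response."""
    for stage in (entry_script, authenticate, business_script):
        request = stage(request)
    return convert_and_route(request)
```

Keeping each stage a plain function mirrors the figure's design: only the business-script stage needs to be offloaded to an asynchronous thread, while the others stay on the synchronous IO-thread.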
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods.
Fig. 5 shows a block diagram of a back-end service invocation apparatus according to an embodiment of the present disclosure, which may be implemented as part or all of an electronic device through software, hardware, or a combination of the two. As shown in fig. 5, the backend service invoking apparatus includes:
a first receiving module 501 configured to receive a call request of a client to a target service;
a registering module 502 configured to register the call request of the target service as an IO event and place it into a synchronous thread queue corresponding to a CPU core; the CPU core is located at the node where the current API gateway resides;
a polling module 503, configured to poll the IO event in the synchronous thread queue by using the synchronous thread started on the CPU core, and determine whether the IO event polled currently is a synchronously executed IO event or an asynchronously executed IO event;
an execution module 504 configured to, when the currently polled IO event is a synchronously executed IO event, process the synchronously executed IO event with the synchronous thread, and when the currently polled IO event is an asynchronously executed IO event, have the synchronous thread send the asynchronously executed IO event to a corresponding asynchronous thread for processing, the asynchronously executed IO event completed by the asynchronous thread being placed back into the synchronous thread queue.
Fig. 6 shows a block diagram of a multi-node deployment apparatus of an API gateway according to an embodiment of the present disclosure, which may be implemented as part or all of an electronic device by software, hardware, or a combination of both. As shown in fig. 6, the multi-node deployment apparatus of the API gateway includes:
a second receiving module 601 configured to receive registration requests of one or more API gateways based on a synchronous locking manner;
a first determining module 602 configured to determine the API gateway that first requests registration as the current central server;
a second determining module 603 configured to determine the API gateway that second requests registration as the next central server; the next central server is switched in as the current central server after the current central server fails;
wherein, after the registration is successful, the API gateway is implemented as:
receiving a calling request of a client to a target service;
registering the call request of the target service as an IO event and then putting the IO event into a synchronous thread queue corresponding to a CPU kernel;
polling IO events in the synchronous thread queue by using a synchronous thread, and judging whether the current polled IO event is a synchronously executed IO event or an asynchronously executed IO event;
and when the currently polled IO event is a synchronously executed IO event, executing the synchronously executed IO event in the synchronous thread, and when the currently polled IO event is an asynchronously executed IO event, sending the asynchronously executed IO event to the corresponding asynchronous thread for processing, and placing the asynchronously executed IO event completed by the asynchronous thread back into the synchronous thread queue.
The technical features involved in the above apparatus embodiments and their corresponding explanations and descriptions are the same as, correspond to, or are similar to those of the above method embodiments; for their explanation, reference may be made to the method embodiments, and details are not repeated here.
Fig. 7 shows a block diagram of an API gateway cluster system according to an embodiment of the present disclosure, where the apparatus may be implemented as part or all of an electronic device through software, hardware, or a combination of both. As shown in fig. 7, the API gateway cluster system includes: a plurality of API gateways 701 and a management and control server 702;
the management and control server 702 receives registration requests of one or more API gateways 701 based on a synchronous locking mode;
the management and control server 702 determines the API gateway 701 which first requests registration as a current central server;
the management and control server 702 determines the API gateway 701 that second requests registration as a next central server; the next central server is switched in as the current central server after the current central server fails;
after successfully registering, the API gateway 701 receives a call request of a client for a target service, registers the call request as an IO event, and places it into a synchronous thread queue corresponding to a CPU core; it polls the IO events in the synchronous thread queue with a synchronous thread and determines whether the currently polled IO event is a synchronously executed or an asynchronously executed IO event; a synchronously executed IO event is executed in the synchronous thread, while an asynchronously executed IO event is sent to the corresponding asynchronous thread for processing, and the asynchronously executed IO event completed by the asynchronous thread is placed back into the synchronous thread queue.
The embodiment of the disclosure also discloses an electronic device, which comprises a memory and a processor; wherein,
the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are executable by the processor to perform any of the method steps described above.
FIG. 8 is a schematic block diagram of a computer system suitable for implementing a back-end service calling method and/or a multi-node deployment method of an API gateway according to an embodiment of the present disclosure.
As shown in fig. 8, a computer system 800 includes a processing unit 801, which can execute the various processes in the above-described embodiments according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage section 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data necessary for the operation of the computer system 800. The processing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
The following components are connected to the I/O interface 805: an input portion 806 including a keyboard, a mouse, and the like; an output section 807 including components such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card, a modem, or the like. The communication section 809 performs communication processing via a network such as the internet. A drive 810 is also connected to the I/O interface 805 as needed. A removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 810 as necessary, so that a computer program read out therefrom is mounted on the storage section 808 as necessary. The processing unit 801 may be implemented as a CPU, a GPU, a TPU, an FPGA, an NPU, or other processing units.
In particular, the methods described above may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the data transmission method. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 809 and/or installed from the removable medium 811.
Embodiments of the present disclosure also disclose a computer program product comprising a computer program/instructions which, when executed by a processor, implement any of the above method steps.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present disclosure may be implemented by software or hardware. The units or modules described may also be provided in a processor, and the names of the units or modules do not in some cases constitute a limitation of the units or modules themselves.
As another aspect, the disclosed embodiment also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the apparatus in the foregoing embodiment; or it may be a separate computer readable storage medium not incorporated into the device. The computer readable storage medium stores one or more programs for use by one or more processors in performing the methods described in the embodiments of the present disclosure.
The foregoing description is only an account of the preferred embodiments of the present disclosure and of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above features, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the inventive concept, for example technical solutions formed by replacing the above features with (but not limited to) features with similar functions disclosed in the embodiments of the present disclosure.

Claims (10)

1. A back-end service invocation method, wherein the method is executed on a current API gateway; the method comprises the following steps:
receiving a calling request of a client to a target service;
registering the call request of the target service as an IO event and placing the IO event into a synchronous thread queue corresponding to a CPU core; the CPU core is located at the node where the current API gateway resides;
polling IO events in the synchronous thread queue by using a synchronous thread started on the CPU kernel, and judging whether the current polled IO events are synchronously executed IO events or asynchronously executed IO events;
and when the currently polled IO event is a synchronously executed IO event, processing the synchronously executed IO event by the synchronous thread, and when the currently polled IO event is an asynchronously executed IO event, sending, by the synchronous thread, the asynchronously executed IO event to a corresponding asynchronous thread for processing, and placing the asynchronously executed IO event completed by the asynchronous thread back into the synchronous thread queue.
2. The method of claim 1, wherein the method further comprises:
receiving a registration request for one or more target services; the registration request comprises a business script corresponding to the target service; the business script is configured with a service identifier and a service address of the target service, a calling mode of the target service, and an identifier of whether the target service corresponds to a synchronously executed or an asynchronously executed IO event;
and storing the service script of the target service under a specified directory so that the current API gateway can support the calling of the newly registered target service by the client.
3. A multi-node deployment method of an API gateway, the method being executed on a policing server, comprising:
receiving registration requests of one or more API gateways based on a synchronous locking mode;
determining the API gateway that first requests registration as the current central server;
determining the API gateway that second requests registration as the next central server; the next central server is switched in as the current central server after the current central server fails;
after the registration is successful, the API gateway executes the following steps:
receiving a calling request of a client to a target service;
registering the call request of the target service as an IO event and then putting the IO event into a synchronous thread queue corresponding to a CPU kernel;
polling IO events in the synchronous thread queue by using a synchronous thread, and judging whether the current polled IO event is a synchronously executed IO event or an asynchronously executed IO event;
and when the currently polled IO event is a synchronously executed IO event, executing the synchronously executed IO event in the synchronous thread, and when the currently polled IO event is an asynchronously executed IO event, sending the asynchronously executed IO event to the corresponding asynchronous thread for processing, and placing the asynchronously executed IO event completed by the asynchronous thread back into the synchronous thread queue.
4. A back-end service calling method is executed on an API gateway cluster system, and the API gateway cluster system comprises a plurality of API gateways and a management and control server; the method comprises the following steps:
the management and control server receives registration requests of one or more API gateways based on a synchronous locking mode;
the management and control server determines the API gateway that first requests registration as a current central server;
the management and control server determines the API gateway which requests registration for the second time as a next central server; the next central server is switched to the current central server after the current central server is abnormal;
after successfully registering, the API gateway receives a call request of a client for a target service, registers the call request as an IO event, and places it into a synchronous thread queue corresponding to a CPU core; the API gateway polls the IO events in the synchronous thread queue with a synchronous thread and determines whether the currently polled IO event is a synchronously executed or an asynchronously executed IO event; a synchronously executed IO event is executed in the synchronous thread, while an asynchronously executed IO event is sent to the corresponding asynchronous thread for processing, and the asynchronously executed IO event completed by the asynchronous thread is placed back into the synchronous thread queue.
5. A back-end service calling apparatus, wherein the apparatus executes on a current API gateway, the apparatus comprising:
the first receiving module is configured to receive a calling request of a client to a target service;
the first registration module is configured to register the call request of the target service as an IO event and place it into a synchronous thread queue corresponding to a CPU core; the CPU core is located at the node where the current API gateway resides;
the first polling module is configured to poll the IO events in the synchronous thread queue by using a synchronous thread started on the CPU kernel, and judge whether the current polled IO event is a synchronously executed IO event or an asynchronously executed IO event;
and the first execution module is configured to, when the currently polled IO event is a synchronously executed IO event, process it with the synchronous thread, and when the currently polled IO event is an asynchronously executed IO event, have the synchronous thread send it to the corresponding asynchronous thread for processing, and place the asynchronously executed IO event completed by the asynchronous thread back into the synchronous thread queue.
6. A multi-node deployment apparatus of an API gateway, the apparatus executing on a policing server, comprising:
a second receiving module configured to receive registration requests of one or more API gateways based on a synchronous locking manner;
a first determining module configured to determine the API gateway that first requests registration as the current central server;
a second determining module configured to determine the API gateway that second requests registration as the next central server; the next central server is switched in as the current central server after the current central server fails;
after the registration is successful, the API gateway is implemented as the following modules:
the third receiving module is configured to receive a calling request of a client to the target service;
the second registration module is configured to register the call request of the target service as an IO event and then place the IO event into a synchronous thread queue corresponding to a CPU kernel;
the second polling module is configured to poll the IO events in the synchronous thread queue by using a synchronous thread and judge whether the current polled IO event is a synchronously executed IO event or an asynchronously executed IO event;
and the second execution module is configured to execute the synchronously executed IO event in the synchronous thread when the currently polled IO event is a synchronously executed IO event, send the asynchronously executed IO event to the corresponding asynchronous thread for processing when the currently polled IO event is an asynchronously executed IO event, and place the asynchronously executed IO event completed by the asynchronous thread back into the synchronous thread queue.
7. An API gateway cluster system comprises a plurality of API gateways and a control server;
the management and control server receives registration requests of one or more API gateways based on a synchronous locking mode;
the management and control server determines the API gateway that first requests registration as a current central server;
the management and control server determines the API gateway which requests registration for the second time as a next central server; the next central server is switched to the current central server after the current central server is abnormal;
after successfully registering, the API gateway receives a call request of a client for a target service, registers the call request as an IO event, and places it into a synchronous thread queue corresponding to a CPU core; the API gateway polls the IO events in the synchronous thread queue with a synchronous thread and determines whether the currently polled IO event is a synchronously executed or an asynchronously executed IO event; a synchronously executed IO event is executed in the synchronous thread, while an asynchronously executed IO event is sent to the corresponding asynchronous thread for processing, and the asynchronously executed IO event completed by the asynchronous thread is placed back into the synchronous thread queue.
8. An electronic device comprising a memory and a processor; wherein,
the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the steps of the method of any one of claims 1-4.
9. A computer readable storage medium having computer instructions stored thereon, wherein the computer instructions, when executed by a processor, implement the steps of the method of any one of claims 1-4.
10. A computer program product comprising computer programs/instructions which, when executed by a processor, carry out the steps of the method of any one of claims 1 to 4.
CN202211217820.7A 2022-09-30 2022-09-30 Back-end service calling method and device, electronic equipment and program product Pending CN115640146A (en)

Publications (1)

Publication Number Publication Date
CN115640146A true CN115640146A (en) 2023-01-24



Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117270831A (en) * 2023-11-17 2023-12-22 天津华来科技股份有限公司 Protocol class synchronization and cooperative program call compatible implementation method
CN117270831B (en) * 2023-11-17 2024-02-23 天津华来科技股份有限公司 Protocol class synchronization and cooperative program call compatible implementation method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination