Disclosure of Invention
In order to solve the problem that existing API gateways are incomplete in function (in particular, in the prior art, when an external API gateway product is used, call-chain data must be stored in the database of the corresponding product; in addition, user-defined aggregation of multiple interface results is not supported, access to intranet interfaces is not supported, and protocol conversion is not supported, making use very inconvenient), a distributed high-availability gateway system is provided, which supports protocol conversion, is convenient and flexible to deploy, and provides efficient and flexible access.
In order to achieve the technical effect, the invention adopts the following technical scheme.
A distributed high-availability gateway system comprises a background management subsystem and a core subsystem; the core subsystem receives a service request from a service request end and routes the service request to a back-end service interface; the core subsystem comprises a security control component, a request checking component, a data acquisition component and a result aggregation component;
the security control component is used for performing security verification, rate limiting and anti-abuse (anti-scraping) protection on the service request;
the request checking component is used for acquiring service request parameters;
the data acquisition component is used for routing the service request to a back-end service interface;
the result aggregation component is used for aggregating results from the service end and returning them to the service request end;
the background management subsystem and the core subsystem are both implemented as distributed systems.
In addition, the request checking component processes and checks the service request in a pipeline-chain manner: each of the several processing logics applied to the service request is treated as a pipe, the pipes are executed one after another in a preset order, the result of each processing or checking step is handed to the next pipe, and the order of the pipe chain follows the flow order of the service request.
In addition, the request checking component obtains the user information carried by the service request in one or more of the following three ways: A. acquiring the cookie value of the application; B. acquiring the cookie value designated by single sign-on; C. acquiring the value at a preset position in the request header. After the request checking component acquires the user information, it performs the authentication and authorization related to the service request.
In addition, the security control component comprises a primary rate limiting unit and a secondary rate limiting unit, wherein
the primary rate limiting unit is arranged in an Nginx server; the Nginx server receives and forwards service requests, the primary rate limiting unit limits the number of accesses per unit time from a single service request address, and when the number of accesses per unit time from a single service request address exceeds a preset threshold, the primary rate limiting unit stops forwarding that service request;
the secondary rate limiting unit records the detail data of each forwarded service request with an interceptor and caches it in Redis; it then obtains, from the configuration system, the number of accesses allowed per address per unit time and the total number of accesses allowed per unit time, and computes the actual number of accesses from each single address per unit time and the actual total number of requests per unit time. When the number of accesses from a single address per unit time exceeds the allowed per-address limit, or the total number of requests per unit time exceeds the allowed total limit, the secondary rate limiting unit blocks the service requests from that address, or blocks all service requests, until the recomputed per-address count or total count, respectively, falls back below the corresponding limit.
In addition, the core subsystem processes service requests asynchronously: when a service request reaches the core subsystem, the thread corresponding to the Nginx server of the core subsystem returns immediately, a thread corresponding to the other components inside the core subsystem then performs the operations related to the service request, and when the service request produces a result, the result is returned to the service request end.
In addition, the background management subsystem comprises an interface management unit and an interface publishing unit, wherein the interface management unit is used for querying, adding, modifying and taking offline the relevant configuration of a back-end service interface; the interface publishing unit is used for receiving an administrator's audit instruction for adding or modifying a back-end service interface and transmitting the audit instruction to the interface management unit; the interface management unit stores the back-end service interface information in a local database and publishes a back-end service interface update flag to the core subsystem through the distributed service system, and after the core subsystem, which subscribes to the published update flag, is notified, it pulls the updated back-end service interface information from the background management subsystem.
In addition, the data acquisition component of the core subsystem comprises a protocol conversion unit, and the protocol conversion unit is used for reconciling the protocol followed by the service request with the protocol followed by the service of the business system; the protocol conversion unit implements conversion between the Dubbo protocol and the Hypertext Transfer Protocol (HTTP) by means of generic invocation, using HTTP as the common language into which and from which conversion is performed; and the conversion from HTTP to the Dubbo protocol is implemented using an Apache extensible interaction system.
In addition, the core subsystem executes several back-end service interface requests in parallel by means of a thread pool. The back-end service interface requests are organized in the thread pool; when a service request is routed to one or more back-end service interfaces, the core subsystem executes those back-end service interface requests in parallel through the thread pool, so as to reduce the routing time from the service request to the back-end service interfaces when several such requests exist. When a back-end service interface request completes, its thread is released back to the thread pool so that it can be reused by the next request, and the routing itself is performed with the lightweight OkHttp3 client framework.
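As an illustration of the parallel thread-pool routing described above, the following minimal sketch (Python here purely for illustration; the patent's implementation uses a Java thread pool with OkHttp3) fans a request out to several hypothetical back-end interfaces in parallel; `fetch` is a stand-in for a real HTTP call.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(interface):
    # placeholder for an HTTP call to one back-end service interface
    return {"interface": interface, "status": 200}

def route_parallel(interfaces, max_workers=4):
    # threads are returned to the pool after each request completes,
    # so they can be reused by the next service request
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch, interfaces))

responses = route_parallel(["order", "user", "stock"])
```

Because `pool.map` preserves input order, the caller can match each response to the back-end interface that produced it.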
In addition, the core subsystem also monitors the request state and request duration of each back-end service interface, generates early-warning information about back-end service interface requests according to the monitoring results, and sends this early-warning information to the background management subsystem through RabbitMQ.
In addition, when a service request end needs to call data from several service ends together, the service request end sends a single service request to the core subsystem; the core subsystem splits the service request and routes it to several different back-end service interfaces, then aggregates the results from those back-end service interfaces and returns the aggregated result to the service request end, the aggregation being performed by means of JavaScript and Groovy script engines.
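To illustrate the script-driven aggregation described above: the patent evaluates aggregation rules with JavaScript and Groovy engines; in this sketch Python's `eval` over a restricted namespace stands in for such an engine, and the result keys (`uid`, `n_orders`) are hypothetical.

```python
def aggregate(results, script):
    # the aggregation rule is itself a small script over the back-end
    # results, mirroring the script-engine idea; eval() is only a
    # stand-in for the JavaScript/Groovy engines named in the patent
    return eval(script, {"__builtins__": {}, "len": len}, {"r": results})

response = aggregate(
    {"user": {"id": 1}, "orders": [1, 2]},
    "{'uid': r['user']['id'], 'n_orders': len(r['orders'])}",
)
```

Keeping the aggregation rule in a script means it can be changed per interface in configuration without redeploying the gateway.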
In particular, the background management subsystem further comprises:
a group management unit: used for managing back-end service interfaces in groups, the management including group query, addition and modification;
a recycle bin unit: used for storing back-end service interfaces taken offline by the interface management unit, and for restoring or permanently deleting them at the direction of the interface management unit;
an application authorization unit: used for configuring which applications may access each back-end service interface;
a document management unit: used for generating and displaying the documents related to the back-end service interfaces;
a software development kit unit: used for generating a unified tool kit for other systems to import;
an interface test unit: used for supporting online testing of configured back-end service interfaces;
a log statistics unit: used for counting the call frequency, call sources and call durations of the back-end service interfaces and outputting reports on them;
a monitoring and early-warning unit: used for providing early-warning notifications through multiple channels for back-end service interfaces whose calls fail or whose performance is poor.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings.
Detailed exemplary embodiments are disclosed below; the specific structural and functional details disclosed herein are merely for the purpose of describing the example embodiments.
It should be understood that the intention is not to limit the invention to the particular exemplary embodiments disclosed, but to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure. Like reference numerals refer to like elements throughout the description of the figures.
Referring to the drawings, the structures, ratios, sizes and the like shown therein are used only to match the content disclosed in the specification, so that it can be understood and read by those skilled in the art; they are not intended to limit the conditions under which the present disclosure can be implemented and carry no technical significance in themselves. Any structural modification, change of ratio or adjustment of size that does not affect the efficacy or achievable purpose of the present disclosure still falls within its scope. Likewise, the positional terms used in this specification are for clarity of description only and are not intended to limit the scope of the invention; changes or modifications of the relative relationships they describe, without substantial change to the technical content, are also to be regarded as within the scope of the invention.
It will also be understood that the term "and/or" as used herein includes any and all combinations of one or more of the associated listed items. It will be further understood that when an element or unit is referred to as being "connected" or "coupled" to another element or unit, it can be directly connected or coupled to the other element or unit or intervening elements or units may also be present. Moreover, other words used to describe the relationship between components or elements should be understood in the same manner (e.g., "between" versus "directly between," "adjacent" versus "directly adjacent," etc.).
Fig. 1 is a schematic structural diagram of a distributed high-availability gateway system according to an embodiment of the present invention. As shown in the figure, the invention discloses a distributed high-availability gateway system, which comprises a background management subsystem and a core subsystem; the core subsystem receives a service request from a service request end and routes the service request to a back-end service interface; the core subsystem comprises a security control component, a request checking component, a data acquisition component and a result aggregation component;
the security control component is used for performing rate limiting and anti-abuse (anti-scraping) protection on the service request;
the request checking component is used for acquiring service request parameters;
the data acquisition component is used for routing the service request to a back-end service interface;
the result aggregation component is used for aggregating results from the service end and returning them to the service request end;
the background management subsystem and the core subsystem are both implemented as distributed systems.
In the invention, the gateway is divided into a background management subsystem and a core subsystem that are functionally separated, and both are implemented as distributed systems, which yields the following technical effects. First, because the core subsystem does not access the server directly, requests from the intranet, the extranet and other external platforms can be distinguished, and intranet and extranet request addresses can be separated, which facilitates later expansion; for example, Nginx can be configured to prohibit the extranet from accessing intranet interface addresses, while back-end verification on the server side further ensures the data security of the server. Second, the distributed high-availability gateway system facilitates functional upgrades: it adopts a modular structure with standard software interfaces between modules, so the functional components of the gateway can be upgraded conveniently without rebuilding the whole gateway.
In particular, the background management subsystem overlaps in function with modules of the prior art, while on the core subsystem side the invention also improves functions such as flow control, pipeline-chain processing of requests, asynchronous request handling and result aggregation; owing to the modular design, the system can therefore exchange components with, or even replace, some existing gateway products.
In the invention, the background management subsystem and the core subsystem are implemented with a distributed architecture, so the processing capability and caching function of the distributed system can be used effectively, further improving the response speed and processing capability of the distributed high-availability gateway system.
In particular, in a specific embodiment of the invention, the front end of the background management subsystem uses React to render views, the back end uses Spring MVC to provide a REST data interface for the front end, RabbitMQ is used to receive the early-warning information of the back-end service interfaces, the service discovery tool ZooKeeper is used to send cache update events, and the MySQL database is used to store back-end service interface data. The back end of the core subsystem uses Spring MVC to provide a REST data interface for the front end, uses the high-performance Caffeine cache and the Redis distributed cache as caches and as sources of interface data, uses ZooKeeper to update the back-end service interface data in the local cache in real time, uses RabbitMQ to receive interface early-warning information, and uses OkHttp3 for route forwarding; the specific implementations are described below.
Fig. 2 is a schematic structural diagram of the core subsystem of a distributed high-availability gateway system according to an embodiment of the present invention. As shown in fig. 2, in a specific embodiment of the present invention, the request checking component processes and checks the service request in a pipeline-chain manner: each of the several processing logics applied to the service request is treated as a pipe, the pipes are executed one after another in a preset order, the result of each processing or checking step is handed to the next pipe, and the order of the pipe chain follows the flow order of the service request.
In a specific embodiment of the invention, the request checking component processes and checks the service request in a pipeline-chain manner: the processing and checking of the request parameters (parameter checking, authentication and authorization) are composed of a chain of Pipes, each Pipe being responsible for one function. In this technical scheme, the Pipes are executed in sequence using the chain of responsibility pattern, and the implementation of each Pipe is executed using the strategy pattern. The Chain of Responsibility (CoR) pattern is one of the behavioral patterns: it constructs a series of class objects, each bearing a different responsibility, which jointly complete a task; because these objects are linked closely together like a chain, it is called the chain of responsibility pattern.
The advantage of the chain of responsibility pattern is the division of responsibility: each class handles only its own part of the work (and passes the rest to the next object), the responsibility of each object is well defined, the principle of minimal encapsulation is satisfied, and each object does its own job; workflows can also be freely combined as required. If the workflow changes, reallocating the object chain adapts the system to the new workflow. Under the chain of responsibility pattern, each Pipe is executed in a preset order, and adding new processing logic later becomes simple, for example: parameter checking -> identity authentication -> ...
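A minimal Pipe chain in the chain-of-responsibility style described above can be sketched as follows (Python is used here purely for illustration; the class and field names are hypothetical and do not come from the patent):

```python
class Pipe:
    """One processing responsibility in the chain."""
    def handle(self, request):
        raise NotImplementedError

class ParamCheckPipe(Pipe):
    def handle(self, request):
        # parameter checking: reject requests missing required fields
        if "user" not in request:
            raise ValueError("missing user")
        request["param_checked"] = True
        return request

class AuthPipe(Pipe):
    def handle(self, request):
        # identity authentication (trivial stand-in check)
        request["authenticated"] = request["user"] == "alice"
        return request

class PipeChain:
    def __init__(self, pipes):
        self.pipes = pipes  # executed in a preset order
    def run(self, request):
        for pipe in self.pipes:  # each pipe hands its result to the next
            request = pipe.handle(request)
        return request

chain = PipeChain([ParamCheckPipe(), AuthPipe()])
result = chain.run({"user": "alice"})
```

Adding a new processing step later only means inserting another `Pipe` into the list, which is the extensibility benefit the text describes.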
In particular, in embodiments of the present invention, the order of the pipe chain is adjusted to follow the flow order of the service request, which facilitates adjusting and improving the gateway system and implementing customized service requests and customized services.
Further, each Pipe uses the strategy pattern for its specific processing and checking. The Strategy pattern, one of the behavioral patterns, encapsulates a family of algorithms: it defines an abstract algorithm interface for all the algorithms, implements each concrete algorithm by inheriting that interface, leaves the selection of the concrete algorithm (the strategy) to the client, and provides a way to manage a related family of algorithms. The hierarchy of strategy classes defines a family of algorithms or behaviors, and proper use of inheritance allows common code to be moved into the parent class, avoiding duplicated code.
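The strategy pattern inside a single Pipe can be sketched like this (illustrative Python; the strategy names and the `X-Token` header are hypothetical examples, not taken from the patent):

```python
from abc import ABC, abstractmethod

class TokenStrategy(ABC):
    """Abstract algorithm interface shared by all extraction strategies."""
    @abstractmethod
    def extract(self, request): ...

class CookieStrategy(TokenStrategy):
    def extract(self, request):
        # e.g. the cookie value of the application
        return request.get("cookies", {}).get("token")

class HeaderStrategy(TokenStrategy):
    def extract(self, request):
        # e.g. a value at a preset position in the request header
        return request.get("headers", {}).get("X-Token")

def get_token(request, strategy: TokenStrategy):
    # the concrete algorithm is chosen by the caller (the "client")
    return strategy.extract(request)

token = get_token({"headers": {"X-Token": "t1"}}, HeaderStrategy())
```

Swapping `HeaderStrategy` for `CookieStrategy` changes the extraction algorithm without touching the Pipe that uses it, which is the point of the pattern.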
In addition, in a specific embodiment of the present invention, the request checking component obtains the user information carried by the service request in one or more of the following three ways: A. acquiring the cookie value of the application; B. acquiring the cookie value designated by single sign-on; C. acquiring the value at a preset position in the service request header. After the request checking component acquires the user information, it performs the authentication and authorization related to the service request.
In addition, the security control component comprises a primary rate limiting unit and a secondary rate limiting unit, wherein the primary rate limiting unit is arranged in an Nginx server; the Nginx server receives and forwards service requests, the primary rate limiting unit limits the number of accesses per unit time from a single service request address, and the primary rate limiting unit stops forwarding a service request when the number of accesses per unit time from its address exceeds a preset threshold;
the secondary rate limiting unit records the detail data of each forwarded service request with an interceptor and caches it in Redis; it then obtains, from the configuration system, the number of accesses allowed per address per unit time and the total number of accesses allowed per unit time, and computes the actual number of accesses from each single address per unit time and the actual total number of requests per unit time. When the number of accesses from a single address per unit time exceeds the allowed per-address limit, or the total number of requests per unit time exceeds the allowed total limit, the secondary rate limiting unit blocks the service requests from that address, or blocks all service requests, until the recomputed per-address count or total count, respectively, falls back below the corresponding limit.
The invention adopts a two-level rate limiting scheme. The primary rate limiting unit limits the number of accesses per unit time from a single service request address according to the hardware load or the configuration of the Nginx server; when that number exceeds a preset threshold, the primary rate limiting unit stops forwarding the service request. This rate limiting and anti-abuse protection acts directly on the service requests of the service request end (also called the client), so it responds quickly, and it can be configured directly in the Nginx system, making it simple to implement.
However, primary rate limiting cannot limit traffic at a global level (because the service request information from all clients is not counted and cached). For example, in some cases, although the service request volume from each single IP address is not excessive, the number of clients or the aggregate request volume may be so large that the clients of individual IP addresses would have to be "sacrificed" for the gateway device to keep working normally. Therefore, the invention further uses a secondary rate limiting unit: it records the detail data of each forwarded service request with an interceptor, caches it in Redis, and then obtains, from the configuration system, the number of accesses allowed per address per unit time and the total number of accesses allowed per unit time. When the number of accesses from a single address per unit time exceeds the allowed per-address limit, or the total number of requests per unit time exceeds the allowed total limit, the secondary rate limiting unit intercepts the service requests from that address, or intercepts all service requests, until the recomputed counts fall back below the corresponding limits. In this way, the secondary rate limiting unit can judge comprehensively whether the gateway system needs to limit a single IP or to rate-limit and protect against abuse across the service requests of all IPs.
As a further improvement of the present invention, the four parameters (the number of accesses from a single address per unit time, the total number of requests per unit time, the number of accesses allowed per address per unit time, and the total number of accesses allowed per unit time) can be considered together, and a rate limiting scheme can be selected according to the configuration of the core subsystem: for example, limiting the service requests of a single IP without limiting the total volume, or limiting the total volume without limiting single IPs (for instance, raising the request priority of a target IP as needed), which makes the application of the invention more flexible.
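The secondary rate limiting decision described above can be sketched as follows (illustrative Python; an in-memory dict stands in for the Redis cache, and the limits, window and class name are hypothetical):

```python
import time
from collections import defaultdict

class SecondaryLimiter:
    def __init__(self, per_addr_limit, total_limit, window=1.0):
        self.per_addr_limit = per_addr_limit  # accesses allowed per address per window
        self.total_limit = total_limit        # accesses allowed in total per window
        self.window = window                  # the "unit time", in seconds
        self.hits = defaultdict(list)         # address -> timestamps (Redis stand-in)

    def allow(self, addr, now=None):
        now = time.time() if now is None else now
        cutoff = now - self.window
        # drop records older than the window, so counts recover over time
        for a in list(self.hits):
            self.hits[a] = [t for t in self.hits[a] if t > cutoff]
        total = sum(len(v) for v in self.hits.values())
        if len(self.hits[addr]) >= self.per_addr_limit or total >= self.total_limit:
            return False  # block this address, or all requests, respectively
        self.hits[addr].append(now)
        return True

limiter = SecondaryLimiter(per_addr_limit=2, total_limit=3)
```

Because both the per-address and the total counters are checked in one place, either limit (or both) can be enforced, matching the configurable schemes described above.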
Storing the current service request traffic in distributed storage can generally be implemented with Redis; this usually incurs some performance loss, but greatly improves the reliability and capacity of the storage.
Specifically, Redis is a key-value distributed storage system. Compared with Memcached, it supports more value types, including string, list, set, zset (sorted set) and hash. These data types support push/pop, add/remove, intersection, union and difference, and richer operations, all of which are atomic. On this basis, Redis supports several different ways of sorting. As with Memcached, data is cached in memory for efficiency. The difference is that Redis periodically writes updated data to disk, or appends modification operations to a log file, and implements master-slave synchronization on that basis.
In addition, as shown in fig. 2, in the embodiment of the present invention, the core subsystem processes service requests asynchronously: when a service request reaches the core subsystem, the thread corresponding to the Nginx server of the core subsystem returns immediately, a thread corresponding to another component inside the core subsystem then performs the operations related to the service request, and when a result of the service request is returned, the request result is returned to the service request end.
The core subsystem of the invention uses WebAsyncTask at the Controller layer to process requests asynchronously. Because the number of server threads is fixed, connections are refused once the number of service requests reaches the total number of server threads. With WebAsyncTask, when a request reaches the Controller, the server thread returns immediately, a thread inside the system then executes the business logic, and when the business logic returns, the result is returned to the client; this asynchronous mode can greatly improve server throughput.
For example, in the default flow, when a user opens a URL, the web server starts a thread to handle the request, and that thread is not released until the page data has been returned. Callbacks can be set for an asynchronous task, for example on timeout or when an exception is thrown. Asynchronous tasks are often very practical; for instance, after an order is paid, an asynchronous task can be started to query the payment result of the order. This greatly improves the efficiency with which the gateway system processes service requests.
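The asynchronous handling described above can be sketched in miniature (illustrative Python; a thread pool and future stand in for WebAsyncTask and the internal business threads, and the order payload is a made-up example):

```python
from concurrent.futures import ThreadPoolExecutor

workers = ThreadPoolExecutor(max_workers=8)  # stand-in for the internal thread pool

def handle_request(business_logic):
    # the accepting thread only submits the work and returns at once,
    # mirroring WebAsyncTask at the Controller layer; the heavy work
    # runs on a worker thread and completes the future later
    return workers.submit(business_logic)

future = handle_request(lambda: {"order": 42, "paid": True})
# result(timeout=...) raises if the work is not done in time,
# analogous to a timeout callback on the asynchronous task
result = future.result(timeout=1.0)
```

The accepting thread is free immediately after `submit`, which is why this style raises throughput when accepting threads are scarce.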
In addition, in a specific embodiment of the present invention, the background management subsystem comprises an interface management unit and an interface publishing unit; the interface management unit is configured to query, add, modify and take offline the relevant configuration of a back-end service interface; the interface publishing unit is used for receiving an administrator's audit instruction for adding or modifying a back-end service interface and transmitting the audit instruction to the interface management unit; the interface management unit stores the back-end service interface information in a local database and publishes a back-end service interface update flag to the core subsystem through the distributed service system, and after the core subsystem, which subscribes to the published update flag, is notified, it pulls the updated back-end service interface information from the background management subsystem.
As shown in fig. 3, the core subsystem obtains back-end service interface information through local storage and a Redis cache, which avoids the poor performance of the database under high concurrency. The cached data is updated through the watch mechanism of ZooKeeper: after the background management subsystem publishes or takes offline back-end service interface information, the cached values are updated in real time through the ZooKeeper watch mechanism, ensuring that back-end service interface information is updated in real time.
As a mature distributed coordination framework, ZooKeeper offers subscription-publication as an important core feature, commonly called the watcher model: watchers subscribe to topics of interest and are automatically notified once those topics change. The ZooKeeper subscription-publication, i.e. watch, mechanism is a lightweight design because it uses a combined push-pull model. Once the server side perceives that a topic has changed, it sends only an event type and node information to the interested clients, without the concrete changed content; this is the lightweight, so-called "push" part. A client receiving the change notification then pulls the changed data itself; this is the "pull" part.
In the invention, the interface management unit stores the service interface information in a local database and publishes a service interface update flag (the push) to the core subsystem through the distributed service system; after the core subsystem, which subscribes to the update flag, is notified, it pulls the updated service interface information from the background management subsystem.
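The push-pull pattern just described can be sketched as follows (illustrative Python; the class and method names, and the `orderApi` entry, are hypothetical stand-ins for the ZooKeeper-based mechanism):

```python
class ManagementSubsystem:
    def __init__(self):
        self.interfaces = {}
        self.subscribers = []
    def subscribe(self, callback):
        self.subscribers.append(callback)
    def publish(self, name, info):
        self.interfaces[name] = info
        for cb in self.subscribers:
            cb(name)  # push: only the changed node's name, no payload
    def pull(self, name):
        return self.interfaces[name]  # pull: subscriber fetches the full data

class CoreSubsystem:
    def __init__(self, mgmt):
        self.mgmt = mgmt
        self.cache = {}
        mgmt.subscribe(self.on_update)
    def on_update(self, name):
        # on notification, pull the updated interface info into the cache
        self.cache[name] = self.mgmt.pull(name)

mgmt = ManagementSubsystem()
core = CoreSubsystem(mgmt)
mgmt.publish("orderApi", {"url": "/order", "version": 2})
```

Keeping the notification payload to a bare name is what makes the "push" lightweight; the subscriber decides when and what to pull.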
Compared with updating cached data through the watch mechanism of ZooKeeper, message middleware can also be used to transmit service interface information between the core subsystem and the background management subsystem; message middleware is suitable for distributed environments requiring reliable data transmission. In a system adopting a message middleware mechanism, different objects activate each other's events by passing messages and complete the corresponding operations: the sender sends a message to the message server, which stores it in one of several queues and forwards it to the receiver at an appropriate time. Message middleware can communicate across different platforms, is often used to shield the characteristics of the various platforms and protocols and to enable cooperation between applications, and has the advantages of providing synchronous and asynchronous connections between client and server and of delivering, or storing and forwarding, messages at any time. Its disadvantages are the added overhead and, under high concurrency, the impact on database performance, and the message middleware scheme may suffer message loss or duplication; in terms of guaranteeing data consistency, the ZooKeeper watch mechanism is therefore comparatively better than message middleware.
As shown in fig. 3, the present invention also adopts Caffeine to implement a high-performance cache. Caffeine is a high-performance local cache framework built on Java 8; in the present invention it is used to manage the Redis cache. Caffeine is functionally similar to the Guava cache and provides three cache eviction strategies, based respectively on size, time and references. In the specific embodiment of the invention, Caffeine is adopted to improve the responsiveness of the gateway system; in particular, Caffeine is used to schedule and manage the Redis cache, which combines the advantages of the two schemes.
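The two-tier lookup implied above (a small local cache in the role Caffeine plays, fronting a shared distributed cache in the role Redis plays) can be sketched like this; in this illustrative Python version plain dicts stand in for both caches, and the keys are made up:

```python
class TwoTierCache:
    def __init__(self, remote):
        self.local = {}       # Caffeine-like local tier (fastest)
        self.remote = remote  # Redis-like shared tier

    def get(self, key):
        if key in self.local:  # local hit: no network round trip
            return self.local[key]
        if key in self.remote:  # fall back to the shared tier
            self.local[key] = self.remote[key]  # warm the local tier
            return self.local[key]
        return None

redis_stub = {"userApi": {"url": "/user"}}
cache = TwoTierCache(redis_stub)
first = cache.get("userApi")   # remote hit, warms local
second = cache.get("userApi")  # local hit
```

A real implementation would add eviction (by size, time, or reference, as Caffeine offers) so the local tier stays bounded.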
The gateway system in the embodiment of the invention caches back-end service interface information in local storage and Redis, so calling-relationship-chain data does not need to be stored in another platform's database.
In addition, as shown in fig. 2, in the embodiment of the present invention, the data acquisition component of the core subsystem includes a protocol conversion unit, which is configured to reconcile the protocol followed by the service request with the protocol followed by the back-end service. The protocol conversion unit realizes conversion from the Dubbo protocol to the hypertext transfer protocol through generic invocation, mapping both protocols to a common intermediate form so that they can be converted into each other; conversion from the hypertext transfer protocol to the Dubbo protocol is realized using Apache Axis (the Apache eXtensible Interaction System).
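The "common intermediate form" idea can be sketched as follows: an HTTP request is normalized into a generic invocation descriptor carrying the pieces a Dubbo generic-service call needs (service name, method name, arguments). All names here are illustrative assumptions, not the patent's actual classes.

```java
import java.util.Map;

public class ProtocolBridge {

    // Generic invocation descriptor shared by both protocol directions.
    public record GenericInvocation(String service, String method,
                                    Map<String, String> args) {}

    // HTTP -> generic form: the path encodes service/method, and the
    // query map carries the arguments.
    public static GenericInvocation fromHttp(String path, Map<String, String> query) {
        String[] parts = path.replaceFirst("^/", "").split("/");
        return new GenericInvocation(parts[0], parts[1], query);
    }
}
```

In a real gateway this descriptor would feed Dubbo's generic invocation API on one side and an HTTP client on the other, so neither side needs compile-time knowledge of the other's interface classes.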
Most interfaces of existing gateway systems interact using the Dubbo protocol, but some systems use WebService technology; the gateway uniformly converts these interfaces into HTTP form. Coordinating the protocol followed by the service request with the protocol followed by the back-end service through protocol conversion (Dubbo and WebService to HTTP, or vice versa) yields the following technical effects: A. the caller only needs to invoke the gateway system's back-end service interface over HTTP, without caring what protocol the back-end service uses; B. other systems remain compatible, and their system interfaces need not be modified to work with the gateway equipment of the invention; C. developers need not learn the Dubbo and WebService protocols, which reduces their learning cost and correspondingly reduces the development cost of the gateway system. Other protocols, such as FTP, RMI, and Thrift, can likewise be made compatible through protocol conversion.
In addition, as shown in fig. 2, in the specific embodiment of the present invention, the core subsystem uses a thread pool to organize and execute multiple back-end service interface requests in parallel. When a service request is routed to one or more back-end service interfaces, the core subsystem executes the corresponding interface requests in parallel through the thread pool, reducing the total routing time when several back-end interfaces are involved. When a back-end service interface request completes, its thread is released back to the thread pool so that it can be reused by the next request. Routing itself is performed with the lightweight Okhttp3 client framework.
In addition, the core subsystem simultaneously monitors the request state and request duration of each back-end service interface, generates early-warning information about back-end service interface requests from the monitoring results, and sends that early-warning information to the background management subsystem through RabbitMQ.
In the invention, routing from a service request to a back-end service interface is completed with Okhttp3; that is, routing uses the lightweight Okhttp3 client framework. Okhttp3 is an excellent HTTP framework: it supports GET and POST requests, HTTP-based file upload and download, image loading, transparent GZIP compression of downloaded files, response caching to avoid repeated network requests, and connection pooling to reduce response latency.
A thread pool is a technique for creating and managing a buffer pool of threads that stand ready for use by any request that needs them. A thread pool can greatly improve the performance of a Java application while reducing overall resource consumption. Reduced thread creation time: because pooled threads are reused in a "round-robin" fashion, the overhead of creating a new thread for each back-end service interface call is avoided. Simplified implementation: with a thread pool, each individual thread can operate as if it had created its own JDBC connection, allowing direct use of JDBC programming techniques. Controlled resource usage: without a thread pool, creating a new thread every time a request must be routed to a back-end service interface wastes the application's resources and can cause exceptions under high load.
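The parallel fan-out described above can be sketched with the JDK's `ExecutorService`: each back-end call is submitted to a pool, threads return to the pool as calls complete, and the results are collected for aggregation. The `callBackend` body is a stand-in for the real Okhttp3 invocation; in the actual system the pool would be a long-lived shared instance rather than created per call.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelRouter {

    // Placeholder for the real HTTP call made via the Okhttp3 client.
    static String callBackend(String url) {
        return "response-from-" + url;
    }

    public static List<String> route(List<String> backendUrls) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        try {
            // Submit every back-end call at once so they run in parallel.
            List<Future<String>> futures = new ArrayList<>();
            for (String url : backendUrls) {
                futures.add(pool.submit(() -> callBackend(url)));
            }
            // Collect results in submission order for aggregation.
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                results.add(f.get()); // waits for each call to finish
            }
            return results;
        } finally {
            pool.shutdown(); // release pool threads so the JVM can exit
        }
    }
}
```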
In addition, as shown in fig. 2, in the embodiment of the present invention, when a service request end needs to call data from multiple back ends at once, it sends a single service request to the core subsystem; the core subsystem splits the request, routes it to multiple different back-end service interfaces, aggregates the results from those interfaces, and returns them to the service request end. Result aggregation is performed with JavaScript and Groovy engines.
In the gateway system, the JavaScript and Groovy engines used for result aggregation support custom result fields, improving flexibility. As part of the gateway system, multiple client service requests (typically HTTP requests) for multiple internal microservices can be aggregated into a single client service request. This pattern is particularly convenient when a client page needs to invoke data from multiple microservices: the client sends one service request to the gateway system, which is responsible for issuing multiple requests to the internal microservices, aggregating the results, and sending them back to the client. The main advantage and goal of this design pattern is to reduce the chattiness between client applications and back-end APIs, which is especially important for remote applications outside the data center where the microservices are located.
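The aggregation step can be sketched as follows: the gateway collects each back-end call's result and merges the fields the caller asked for into a single response. In the described system a Groovy/JavaScript engine evaluates user-defined expressions at this point; this plain-Java field filter only illustrates the data flow.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ResultAggregator {

    // Merge the results of several back-end calls, keeping only the
    // fields the client requested (a stand-in for a scripted mapping).
    public static Map<String, Object> aggregate(List<Map<String, Object>> results,
                                                List<String> wantedFields) {
        Map<String, Object> merged = new LinkedHashMap<>();
        for (Map<String, Object> r : results) {
            for (String field : wantedFields) {
                if (r.containsKey(field)) {
                    merged.put(field, r.get(field));
                }
            }
        }
        return merged;
    }
}
```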
In addition, the gateway system of the invention supports user-defined result fields, which greatly broadens the applicability of the scheme: custom result fields can be aggregated into the data provided to the client (service request end), easing client-side improvement and customization.
In particular, as shown in fig. 1, in the embodiment of the present invention, the background management subsystem further includes the following units:
a group management unit: used for managing back-end service interfaces by group, where the management includes group query, addition, and modification;
a recycle bin unit: used for storing back-end service interfaces taken offline by the interface management unit, and for restoring or deleting them via the interface management unit;
an application authorization unit: used for configuring which applications may access each back-end service interface;
a document management unit: used for generating and displaying documentation for the back-end service interfaces;
a software development kit unit: used for generating a uniform toolkit for other systems to import;
an interface test unit: used for supporting online testing of configured back-end service interfaces;
a log statistics unit: used for counting the call frequency, call source, and call time of back-end service interfaces and outputting corresponding reports;
a monitoring and early-warning unit: used for providing early-warning notifications through multiple channels for back-end service interfaces whose calls fail or whose performance is poor.
In addition, in the embodiment of the present invention, the data acquisition component of the core subsystem further includes a parameter mapping unit. The parameter mapping unit obtains the back-end service request parameter values and converts the request mode through a parameter mapping relationship; the supported request modes include get, post, body, header, cookie, and REST. The parameter mapping unit maps the service request parameters the user passes to the gateway onto the request parameters required by the back-end service interface. For example, if the user passes the parameter userId to the gateway and the back-end service interface requires the parameter boId, the gateway system in the embodiment of the invention reads the value of userId, assigns it to boId, and thereby maps the parameter, realizing effective transfer from the service request to the service interface.
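The userId-to-boId example above can be sketched as a simple rename pass driven by a mapping table; in the real system the table would come from the interface's configuration, and the class name here is hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ParameterMapper {

    // Re-emit each caller-supplied parameter value under the name the
    // back-end interface expects, e.g. {"userId" -> "boId"}.
    public static Map<String, String> remap(Map<String, String> requestParams,
                                            Map<String, String> nameMapping) {
        Map<String, String> backendParams = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : nameMapping.entrySet()) {
            String gatewayName = e.getKey();   // name the caller used
            String backendName = e.getValue(); // name the back end expects
            if (requestParams.containsKey(gatewayName)) {
                backendParams.put(backendName, requestParams.get(gatewayName));
            }
        }
        return backendParams;
    }
}
```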
Compared with the API gateway in the prior art, the gateway system in the embodiment of the present invention has the following advantages:
A. it is developed based on Java and mainstream frameworks, making deployment and installation more convenient;
B. it uses JavaScript and Groovy engines for user-defined result aggregation;
C. it updates the local cache in real time through ZooKeeper's watch mechanism;
D. it checks and processes service request parameters using the chain-of-responsibility and strategy patterns;
E. it supports conversion among protocols such as Dubbo, WebService, and HTTP.
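The chain-of-responsibility style of request checking listed in advantage D can be sketched as a sequence of handlers, each receiving the previous handler's result and passing its own result onward, so the chain's order can follow the flow of the service request. The handlers shown are trivial placeholders for real checks.

```java
import java.util.List;
import java.util.function.UnaryOperator;

public class CheckChain {

    // Run the request through each handler ("pipeline") in order,
    // handing each handler the result produced by the previous one.
    public static String run(String request, List<UnaryOperator<String>> handlers) {
        String current = request;
        for (UnaryOperator<String> handler : handlers) {
            current = handler.apply(current);
        }
        return current;
    }
}
```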
While the foregoing description shows and describes several preferred embodiments of the invention, it is to be understood that the invention is not limited to the forms disclosed herein; it is not to be construed as excluding other embodiments, and it is capable of use in various other combinations, modifications, and environments, and of changes within the scope of the inventive concept described herein, commensurate with the above teachings or the skill and knowledge of the relevant art. Modifications and variations made by those skilled in the art without departing from the spirit and scope of the invention fall within the protection of the appended claims.