CN113641410A - Netty-based high-performance gateway system processing method and system - Google Patents

Netty-based high-performance gateway system processing method and system

Info

Publication number
CN113641410A
CN113641410A
Authority
CN
China
Prior art keywords
task
executed
state
waiting
queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110630084.7A
Other languages
Chinese (zh)
Inventor
李怀根
丘佳成
吴亮
温祖辉
连宾雄
李行龙
吴浔
黄翠仪
王旭
周宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Guangfa Bank Co Ltd
Original Assignee
China Guangfa Bank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Guangfa Bank Co Ltd filed Critical China Guangfa Bank Co Ltd
Priority to CN202110630084.7A priority Critical patent/CN113641410A/en
Publication of CN113641410A publication Critical patent/CN113641410A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/4401 Bootstrapping
    • G06F9/4411 Configuring for operating with peripheral devices; Loading of device drivers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/66 Arrangements for connecting between networks having differing types of switching systems, e.g. gateways

Abstract

The invention adopts a fully asynchronous multi-task processing model: when a task encounters a time-consuming operation such as IO during execution, it waits asynchronously, so the worker thread is not blocked while the operation completes and can execute other tasks instead. Configuration information is loaded dynamically through a three-level cache with a lazy-loading strategy: configuration can be modified at any time and takes effect immediately, and it is loaded only when needed rather than at system startup, which reduces startup risk and lets the system concentrate on executing busy tasks. A Pipeline-Filter task processing mode is adopted; its linear execution flow matches developers' habits of thought, and a developer can implement a business function simply by developing different Filters and combining them with a Pipeline, which reduces development difficulty. A three-level exception fallback mechanism provides a better experience for requesters and guarantees system stability, so that an exception occurring in an individual task cannot affect other tasks.

Description

Netty-based high-performance gateway system processing method and system
Technical Field
The invention relates to the technical field of computers and networks, in particular to a processing method and a processing system of a high-performance gateway system based on Netty.
Background
An API gateway is an API-oriented, serial, centralized, strong management-and-control service that appears at the system boundary, where the system boundary is the boundary of an enterprise's IT systems. Before the microservice concept became popular, API gateway products already existed; the main application scenario then was OpenAPI, i.e., open platforms oriented to an enterprise's external partners. Once the microservice concept became popular, the API gateway became a standard component integrated at the upper application layer.
API gateways can be used to solve the following problems:
1) The API granularity provided by microservices usually differs from what clients need: a microservice generally provides fine-grained APIs, which means a client typically has to interact with multiple services.
2) Different clients need different data, and different types of clients have different network capabilities.
3) The partitioning of services may change over time, so these details need to be hidden from clients.
The main positioning of an API gateway covers the following scenarios:
1) Facing Web Apps. These scenarios are, in physical form, similar to front-end/back-end separation, where the Web App is not a full-featured web application but one customized and tailored for specific scenarios.
2) Facing Mobile Apps. In these scenarios the Mobile App is the consumer of back-end services, and the API gateway additionally needs to take on part of the functions of Mobile Device Management (MDM).
3) Facing partner OpenAPIs. The scenario here is mainly opening up business capabilities externally and building an ecosystem with an enterprise's external partners; the API gateway then needs to add a series of security control functions such as quotas, flow control, and tokens.
4) Facing partner ExternalAPIs. As the internet gradually reshapes traditional enterprises, many systems rely on external partners' capabilities to import traffic or content, such as logging in with partner accounts or paying through third-party payment platforms; these are external capabilities from the enterprise's perspective. The API gateway then needs to schedule external APIs uniformly for internal enterprise services, providing unified authentication, authorization, and access control.
In the prior art, one API gateway system uses Zuul as its technical prototype and is implemented with a Filter mechanism and the PRPE (PRE-ROUTING-POST-ERROR) model. Architecturally, plug-in management of service functions is achieved through a responsibility-chain mechanism (FilterChain) and the Java SPI mechanism, and service/configuration information is managed by means of a registry module (Registry).
The data involved in this prior-art API gateway mainly consists of basic configuration information and service configuration information. Hengfeng Bank stores the basic configuration information in local files, while service configuration information is subscribed to and notified through a Zookeeper registry.
The key technologies of the prior-art gateway system are implemented as follows:
1) Platform independence. An extension-loading mechanism for Filters and other extension points is implemented through the SPI mechanism, and some service functions are implemented with third-party utility classes. Beyond that, the system does not depend on any other third-party platform or framework.
2) The Filter-PRPE mechanism improves on the Zuul implementation: when the gateway starts, each Filter's init method is called in sequence to fetch the data needed at runtime from the registry and cache it in memory, improving runtime efficiency. doPre and doPost methods are defined in the interface class to implement a bidirectional Filter mechanism: the FilterChain executes each Filter's doPre method in order, and then executes the Filters that override doPost in reverse order.
3) Dynamic management of service/configuration data. The gateway does not depend on persistent database data; data is managed through a decentralized registry. Initial service data is maintained through a management terminal, the gateway subscribes to it at startup, and when the registry's data is updated the gateway promptly receives a notification and updates its cache.
4) A request-filtering mechanism performs basic checks and processing on request information, providing message conversion, message parsing, black/white-list checking, and request-parameter validation. Each function supports dynamic on/off on demand as well as dynamic extension.
5) A multidimensional dynamic routing mechanism dispatches a transaction to the corresponding back-end system according to parameters in the request message, providing rule parsing/checking and outbound service invocation.
6) A service degradation/circuit-breaking mechanism. By introducing Netflix's Hystrix and its resource-isolation mechanism, the following functions are realized: preventing a single dependency from exhausting all user threads in a container; shedding system load, so requests that cannot be handled in time fail fast instead of queuing; providing failure fallback, so failures can be made transparent to users when necessary; and using isolation to reduce the impact of any dependent service on the whole system.
However, the prior art has the following technical problems:
1) The Zuul gateway framework used by Hengfeng Bank has no particular advantage in overall performance.
2) With the Filter preloading mode, as the number of Filters grows over time, the amount of content to preload at system startup grows with it, so startup becomes slower and slower.
3) Configuration information is implemented through local files plus a registry: the gateway subscribes to initial service data at startup and is notified to update when the data changes. This mode requires subscribing to a large amount of service data at startup, which slows down system startup.
4) The service degradation/circuit-breaking mechanism used is Netflix's Hystrix, whose rate-limiting strategy leaves room for optimization. Rate limiting is the most basic function of a gateway system, and optimizing the rate-limiting strategy better guarantees the gateway's stability.
Disclosure of Invention
The invention provides a processing method and system for a Netty-based high-performance gateway system, aiming to achieve platform independence more thoroughly and fully exploit the gateway's performance. Because the transaction frequency of the systems the gateway connects to is uneven, and many systems may see only one transaction over a long period, the gateway configuration adopts lazy loading: configuration is loaded on demand rather than all at once at startup, so the system can concentrate on processing busy tasks. To improve runtime efficiency, a three-level cache strategy is adopted: frequently used configuration information is kept in memory, exploiting the locality principle of programs. To improve the gateway's stability and ensure it does not fail under huge request volumes, the Alibaba Sentinel distributed flow-control component is adopted to optimize flow control, and a three-level exception fallback strategy ensures that every external system's request gets a response and that tasks do not affect one another. To let developers quickly understand and implement different business functions, the invention processes tasks in a Pipeline-Filter mode, whose linear processing flow is easier to understand.
The first aspect of the present invention provides a method for processing a high-performance gateway system based on Netty, which includes:
receiving a connection request sent by a client, establishing a data transmission channel, calling a Netty Server processor to process the connection request, obtaining a data channel and request data, packaging the data channel and the request data into a task, and putting the task into a to-be-executed queue;
and polling the task states in the to-be-executed queue and running the tasks whose state is to-be-executed.
Further, before the polling of the task states in the to-be-executed queue and running of the tasks whose state is to-be-executed, the method includes:
judging whether an idle thread exists, and if so, putting the to-be-executed task into the idle thread for execution.
Further, after the judging whether an idle thread exists, the method further includes:
polling the task states in the waiting queue; if a task in a priority-processing state exists, moving it from the waiting queue into the to-be-executed queue and executing it preferentially, wherein the priority-processing states include waiting for asynchronous IO, among others.
Further, before the judging whether an idle thread exists, the method further includes:
receiving a network request, packaging the network request into a to-be-executed task, and putting the to-be-executed task into the to-be-executed queue;
and after the polling of the task states in the to-be-executed queue and running of the tasks whose state is to-be-executed, the method further includes:
when a task-execution thread is called to run a to-be-run task and a node needs to perform an asynchronous operation, the thread first initiates the asynchronous operation, updates the task's state to waiting, and puts the task into the waiting queue; after the asynchronous operation completes, the task's state is updated to to-be-executed, the task is moved from the waiting queue to the to-be-executed queue, and it is executed once an idle thread becomes available.
Further, the calling of a task-execution thread to run the to-be-run task includes:
when a task-execution thread is called to run the to-be-run task and an exception occurs, handling the exception through a three-level exception fallback mechanism, which includes:
when a task-execution thread runs the to-be-run task and an exception occurs, handling the exception through the second-level exception pipeline within the task-execution thread;
if an exception also occurs while the second-level exception pipeline is handling the exception, marking the currently executing task as being in an error state;
and processing tasks marked as being in the error state through the default exception pipeline.
The second aspect of the present invention further provides a processing system of a Netty-based high-performance gateway system, including:
the task receiving module is used for receiving a connection request sent by a client, establishing a data transmission channel, calling a Netty Server processor to process the connection request, obtaining a data channel and request data, packaging the data channel and the request data into a task, and placing the task into a to-be-executed queue;
and the task execution module is used for polling the task states in the to-be-executed queue and running the tasks whose state is to-be-executed.
Further, the processing system of the Netty-based high-performance gateway system further includes:
the thread polling module is used for judging whether an idle thread exists and, if so, putting the to-be-executed task into the idle thread for execution.
Further, the processing system of the Netty-based high-performance gateway system further includes:
the priority processing module is used for polling the task states in the waiting queue; if a task in a priority-processing state exists, it is moved from the waiting queue into the to-be-executed queue and executed preferentially, wherein the priority-processing states include waiting for asynchronous IO, among others.
Further, the processing system of the Netty-based high-performance gateway system further includes:
the network request module is used for receiving a network request, packaging it into a to-be-executed task, and putting the task into the to-be-executed queue;
and the asynchronous operation module is used for handling the case where, when a task-execution thread is called to run the to-be-run task, a node needs to perform an asynchronous operation: the thread first initiates the asynchronous operation, updates the task's state to waiting, and puts the task into the waiting queue; after the asynchronous operation completes, the task's state is updated to to-be-executed, the task is moved from the waiting queue to the to-be-executed queue, and it is executed once an idle thread becomes available.
Further, the asynchronous operation module is further configured to:
handle exceptions through a three-level exception fallback mechanism when a task-execution thread is called to run the to-be-run task and an exception occurs, which includes:
when a task-execution thread runs the to-be-run task and an exception occurs, handling the exception through the second-level exception pipeline within the task-execution thread;
if an exception also occurs while the second-level exception pipeline is handling the exception, marking the currently executing task as being in an error state;
and processing tasks marked as being in the error state through the default exception pipeline.
Compared with the prior art, the embodiments of the invention have the following beneficial effects:
The invention provides a processing method and system for a Netty-based high-performance gateway system, the method including: receiving a connection request sent by a client, establishing a data transmission channel, calling a Netty Server processor to process the connection request, obtaining a data channel and request data, packaging them into a task, and putting the task into a to-be-executed queue; and polling the task states in the to-be-executed queue and running the tasks whose state is to-be-executed. The invention adopts a fully asynchronous multi-task processing model: when a task encounters a time-consuming operation such as IO during execution, it waits asynchronously, so the worker thread is not blocked while waiting and can execute other tasks. Configuration information is loaded dynamically through a three-level cache with a lazy-loading strategy: configuration can be modified at any time and takes effect immediately, and it is loaded only when needed rather than at system startup, which reduces startup risk and lets the system concentrate on executing busy tasks. The local cache within the three-level cache exploits the locality principle of programs and further unlocks system performance. A Pipeline-Filter task processing mode is adopted; its linear execution flow matches developers' habits of thought, and a developer can implement a business function simply by developing different Filters and combining them with a Pipeline, reducing development difficulty. The three-level exception fallback mechanism gives requesters a better experience, avoiding timeouts or empty responses caused by exceptions; it guarantees system stability, so an exception occurring in an individual task cannot affect other tasks.
Drawings
In order to illustrate the technical solution of the present invention more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a processing method of a Netty-based high-performance gateway system according to an embodiment of the present invention;
fig. 2 is a flowchart of a processing method of a Netty-based high-performance gateway system according to another embodiment of the present invention;
fig. 3 is a flowchart of a processing method of a Netty-based high-performance gateway system according to another embodiment of the present invention;
fig. 4 is a flowchart of a processing method of a Netty-based high-performance gateway system according to another embodiment of the present invention;
fig. 5 is a flowchart of a processing method of a Netty-based high-performance gateway system according to another embodiment of the present invention;
FIG. 6 is a schematic diagram of a fully asynchronous gateway mode provided by an embodiment of the present invention;
FIG. 7 is a diagram of a multitasking switch provided by one embodiment of the present invention;
fig. 8 is a schematic diagram of a gateway operation model provided by an embodiment of the present invention;
FIG. 9 is a schematic diagram of single task processing provided by an embodiment of the invention;
FIG. 10 is a diagram of a multi-level cache according to an embodiment of the invention;
FIG. 11 is a flow diagram of a multi-level cache according to another embodiment of the present invention;
FIG. 12 is a flow diagram of a multi-level cache provided by yet another embodiment of the present invention;
fig. 13 is a device diagram of a processing system of a Netty-based high performance gateway system according to an embodiment of the present invention;
fig. 14 is a device diagram of a processing system of a Netty-based high performance gateway system according to another embodiment of the present invention;
fig. 15 is a device diagram of a processing system of a Netty-based high performance gateway system according to another embodiment of the present invention;
fig. 16 is an apparatus diagram of a processing system of a Netty-based high performance gateway system according to a further embodiment of the present invention;
fig. 17 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be understood that the step numbers used herein are for convenience of description only and are not intended as limitations on the order in which the steps are performed.
It is to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The terms "comprises" and "comprising" indicate the presence of the described features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The term "and/or" refers to and includes any and all possible combinations of one or more of the associated listed items.
A first aspect.
Referring to fig. 1, an embodiment of the present invention provides a processing method for a high performance gateway system based on Netty, including:
s10, receiving a connection request sent by a client, establishing a data transmission channel, calling a Netty Server processor to process the connection request, obtaining a data channel and request data, packaging the data channel and the request data into tasks, and putting the tasks into a queue to be executed.
And S30, polling the task state in the queue to be executed, wherein the running task state is the task to be executed.
Referring to fig. 2, in an embodiment, before the step S30, the method further includes:
S20, judging whether an idle thread exists; if so, putting the to-be-executed task into the idle thread for execution.
Referring to fig. 3, in another embodiment, after the step S20, the method further includes:
s21, polling the task state in the waiting queue, if the task in the priority processing state exists, putting the task in the priority processing state into the queue to be executed from the waiting queue, and preferentially executing the task in the priority processing state; wherein the priority processing state comprises: wait for asynchronous IO, and the like.
Referring to fig. 4, in another embodiment, before the step S20, the method further includes:
s11 receives the network request, packages the network request into the task to be executed, and puts the task to be executed into the queue to be executed.
After the step S20, the method further includes:
s40, when the task to be run is called to run the task, if the situation that the node needs to execute asynchronous operation exists, the task to be run is executed by the task to be run, the state of the task to be run is updated to be waiting, and the task to be run is put into a waiting queue; and after the asynchronous operation is executed, synchronously updating the task state to be executed, moving the task from the waiting queue to the to-be-executed queue, and executing when an idle thread is waited.
In another specific embodiment, after the step S40, the method further includes:
and S50, when the task thread is called to execute the task waiting for running, if an abnormal condition exists, the abnormal condition is processed through a three-level abnormal bottom-trapping mechanism.
Specifically, the step S50 includes:
and S51, when the task waiting for running is called to run by the task execution thread, if an abnormal condition exists, the abnormal condition is processed by the secondary abnormal pipeline in the task execution thread.
And S52, if the abnormal condition exists in the process of processing the abnormal condition by the secondary abnormal pipeline, marking the current execution task as an error state.
And S53, processing the task marked as the error state through the exception pipeline.
An embodiment of the present invention provides a method for processing a high-performance gateway system based on Netty, including:
1. Implementing the fully asynchronous gateway based on Netty technology.
Referring to fig. 6, the I/O threads and the service-processing threads are separated, so time-consuming call requests do not occupy I/O threads, improving throughput. The I/O threads and the service-processing threads that call back-end service systems communicate through asynchronous queue events: requests are put directly into a queue, and when an event occurs a callback function is triggered. This serves more requests with fewer threads, reducing thread context-switching overhead, and essentially eliminates multithreaded blocking.
In the fully asynchronous gateway mode, the thread count is no longer the gateway's bottleneck, a slow API cannot destabilize the gateway system, and combined with thread-pool techniques its impact is effectively isolated.
The terminology used in FIG. 6 is explained in a table that appears as an image in the original publication.
The flow of FIG. 6 is as follows:
First, when the server-side Acceptor receives a client's connection request, a thread is taken from the Reactor thread pool, a data transmission channel is established, and the Netty Server processor is called to handle the connection request.
The Netty Server processor first processes the message sent by the Client, and then, through the Netty Client scheduler, calls the Netty Client processor to send the processed message on to other application systems.
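By way of illustration, the following is a minimal sketch of this Acceptor/handler setup against the public Netty 4.x API. The GatewayTask record and TASK_QUEUE are stand-ins assumed here for the patent's task wrapper and to-be-executed queue; the real system's types are not disclosed.

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.HttpServerCodec;

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public final class GatewayServer {
    // Stand-ins (assumptions) for the patent's task wrapper and task queue.
    record GatewayTask(Channel channel, FullHttpRequest message, String pipelineName) {}
    static final BlockingQueue<GatewayTask> TASK_QUEUE = new LinkedBlockingQueue<>();

    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup boss = new NioEventLoopGroup(1);    // Acceptor
        EventLoopGroup workers = new NioEventLoopGroup();  // channel IO only
        try {
            new ServerBootstrap()
                .group(boss, workers)
                .channel(NioServerSocketChannel.class)
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override protected void initChannel(SocketChannel ch) {
                        ch.pipeline()
                          .addLast(new HttpServerCodec())
                          .addLast(new HttpObjectAggregator(1 << 20))
                          .addLast(new SimpleChannelInboundHandler<FullHttpRequest>() {
                              @Override protected void channelRead0(
                                      ChannelHandlerContext ctx, FullHttpRequest msg) {
                                  // Package channel + message into a task and hand
                                  // it off; the IO thread returns immediately and
                                  // never runs business logic.
                                  TASK_QUEUE.offer(new GatewayTask(
                                      ctx.channel(), msg.retain(), "defaultPipeline"));
                              }
                          });
                    }
                })
                .bind(8080).sync().channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}
```

The key point of the sketch is that channelRead0 only enqueues: business processing always happens on worker threads, never on the Netty IO thread.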
2. Task switching.
Referring to fig. 7, to ensure that the gateway's fully asynchronous task processing never blocks a thread, multi-task switching is completed by the underlying poller in cooperation with a high-performance queue dispatcher. When a task blocks on a time-consuming operation such as IO, its thread is released immediately to process other tasks, and the task reoccupies a thread to continue only after the time-consuming operation completes.
The terminology used in FIG. 7 is explained in tables that appear as images in the original publication.
The flow of FIG. 7 is as follows:
1) When a network request reaches the gateway, the Netty Server processor produces a data Channel and the request data Message; the processor then creates a Task, places the Channel, the Message, and the name of the Pipeline that will handle the task into it, and puts the Task into the TaskQueue.
2) The EventLooper polls the TaskQueue for tasks in the STANDBY state and, if one exists, pushes it to a TaskWorker for execution.
3) While a TaskWorker executes a task, situations such as waiting for an IO event or being rate-limited may arise. When they do, the task's state is set to WAITING, the current execution position is recorded, the task is put back into the TaskQueue, and the worker thread is released.
4) The worker thread executes other tasks.
5) When execution reaches an EndPoint node, the gateway must act as a client and forward the network IO message to another application system. The Task is handed to an IOWorker for Netty to process, its state is set to WAITING, the current execution position is recorded, and the Task is put into the TaskQueue to await Netty's execution result; the worker thread is then released.
6) The worker thread executes other tasks.
7) Steps 2) to 6) repeat in a loop. When the condition a task was waiting on in step 3) or 5) is met, or the Netty Client obtains the back-end system's response, the task's state is set to STANDBY, and the poller of step 2) pushes it to a worker thread for execution (a sketch of this state machine follows below).
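The following sketch illustrates the STANDBY/RUNNING/WAITING state machine and the EventLooper poller of steps 1) to 7). Names follow FIG. 7 where possible; the BlockingQueue and ExecutorService choices and the Task fields are assumptions, since the patent's high-performance queue dispatcher is not specified.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;

// Task life-cycle states from FIG. 7.
enum TaskState { STANDBY, RUNNING, WAITING, ERROR }

// Assumed task wrapper: channel/message omitted, resumption state kept.
class Task {
    volatile TaskState state = TaskState.STANDBY;
    volatile int resumeAt = 0;     // recorded execution position for resumption
    volatile Object response;      // backend response, filled in asynchronously
    final String pipelineName;
    Task(String pipelineName) { this.pipelineName = pipelineName; }
}

// EventLooper: polls the TaskQueue and pushes STANDBY tasks to TaskWorkers.
final class EventLooper implements Runnable {
    private final BlockingQueue<Task> taskQueue;
    private final ExecutorService taskWorkers;

    EventLooper(BlockingQueue<Task> taskQueue, ExecutorService taskWorkers) {
        this.taskQueue = taskQueue;
        this.taskWorkers = taskWorkers;
    }

    @Override public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                Task t = taskQueue.take();        // blocks only this poller thread
                if (t.state == TaskState.STANDBY) {
                    t.state = TaskState.RUNNING;
                    taskWorkers.execute(() -> runFrom(t));
                }
                // A WAITING task taken here is simply parked: its completion
                // callback flips it back to STANDBY and re-offers it.
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    private void runFrom(Task t) {
        // Execute the task's Pipeline from t.resumeAt. On a time-consuming
        // operation, the worker sets state = WAITING, records the position,
        // re-enqueues the task, and returns, immediately freeing this thread.
    }
}
```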
3. Gateway working model.
For a better understanding of the multi-task switching process above, refer to the following gateway working-model diagram.
When the gateway receives network requests of different protocols (Socket, HTTP, etc.), each request is wrapped into a Task and put into the TaskQueue. A TaskWorker obtains tasks from the TaskQueue and calls the corresponding Pipeline to process each one.
If a node (TaskNode) in a Pipeline needs to execute an asynchronous procedure, the procedure is split into two Actions: the first Action initiates the asynchronous operation, and the second Action executes after the asynchronous operation's response is obtained. After the first Action finishes, the task's state is set to WAITING, the current execution position is recorded, the task is put into the TaskQueue, and the worker thread is released (the Async arrow in the figure). Once the asynchronous result is obtained and the poller pushes the task back to a worker thread, execution resumes from the second Action, as shown in fig. 8.
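A sketch of the two-Action split follows, reusing the Task and TaskState types from the previous sketch. CompletableFuture stands in for the Netty client future that the IOWorker would complete; this is an assumption for illustration, not the patent's actual API.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;

final class EndPointNode {
    private final BlockingQueue<Task> taskQueue;
    EndPointNode(BlockingQueue<Task> taskQueue) { this.taskQueue = taskQueue; }

    /** Action 1: start the async backend call, park the task, free the thread. */
    void forward(Task t, CompletableFuture<Object> backendCall) {
        t.resumeAt = 2;                     // resume at the second Action
        t.state = TaskState.WAITING;
        backendCall.whenComplete((resp, err) -> {
            t.response = resp;
            t.state = (err == null) ? TaskState.STANDBY : TaskState.ERROR;
            taskQueue.offer(t);             // poller re-dispatches the task
        });
        // The worker thread returns here and executes other tasks (Async arrow).
    }

    /** Action 2: runs on a worker thread once the response has arrived. */
    void writeBack(Task t) {
        // ... write t.response back through the task's Netty channel
    }
}
```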
4. Single-task processing.
As shown in fig. 9, the gateway processes a single task through a Pipeline. Single-purpose functions such as parameter verification, security authentication, black/white lists, rate-limiting control, message conversion, and message encryption/decryption are each encapsulated in a single Filter. Filters can be organized in a configured order according to different industry standards or partner access requirements, and Filter functions support hot plugging. During a transaction, the corresponding Pipeline is obtained from the pipeline pool according to the accessing partner, and the Pipeline then filters the transaction with different Filters according to the functional requirements.
The TaskWorker that executes tasks calls the Pipeline to execute them, and each single TaskNode function node within a task is executed through a Filter. For example, the first Filter performs parameter verification, the second performs security authentication, and the third performs rate-limiting control.
Most basic API gateway functions can be realized through Filters, such as rate-limiting control, admission control, encryption/decryption, logging, exception handling, traffic interception, field mapping, message parsing, and dynamic routing.
Pipelines and Filters are both reusable, so the same Pipeline or Filter can serve repeated functional requirements, avoiding development of duplicated code blocks.
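The following minimal sketch shows one plausible shape of this Pipeline-Filter composition. The Filter and Pipeline interfaces are modeled on the text, and the named Filters in the assembly comment are hypothetical examples.

```java
import java.util.List;

interface Filter {
    /** Process the task; return false to stop the pipeline (e.g. rate-limited). */
    boolean doFilter(Task task);
}

final class Pipeline {
    private final String name;
    private final List<Filter> filters;    // ordered per configuration

    Pipeline(String name, List<Filter> filters) {
        this.name = name;
        this.filters = filters;
    }

    /** One-way linear flow: each Filter runs once, in configured order. */
    void execute(Task task) {
        for (Filter f : filters) {
            if (!f.doFilter(task)) {
                return;                    // short-circuit: task rejected/limited
            }
        }
    }
}

// Hypothetical assembly: parameter check -> authentication -> rate limiting.
// Pipeline p = new Pipeline("partnerA", List.of(
//     new ParamCheckFilter(), new AuthFilter(), new RateLimitFilter()));
```

Because Pipelines are assembled from configuration, the same Filter instance can appear in many Pipelines, which is what makes the hot plugging and reuse described above possible.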
5. Pipeline exception-handling model.
The Pipeline is the processing flow a Task executes normally, but exceptional situations are inevitable, and to keep the system stable they must be handled. The invention handles exceptions with a three-level exception fallback mechanism. Each Pipeline has a corresponding second-level exception pipeline (ExceptionPipeline), and multiple Pipelines can share the same ExceptionPipeline. When an unexpected error occurs in a Filter, the worker thread sets the Task's state to ERROR and puts it into the TaskQueue; when the poller polls an exception task, it calls the ExceptionPipeline configured at Pipeline initialization to handle it. The third level is the DefaultExceptionPipeline: when an exception occurs while the ExceptionPipeline itself is executing, the task is sent into the DefaultExceptionPipeline, which performs the final exception fallback and returns an HTTP message with status code 500 to the requester. This three-level exception fallback mechanism keeps the server running stably; tasks are isolated and do not affect one another, and one task throwing an exception cannot affect the execution of other tasks.
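The cascade can be condensed into the following sketch, reusing the Task and Pipeline types from the earlier sketches. The patent routes ERROR-state tasks back through the TaskQueue to the poller; this sketch inlines the three levels as nested calls for brevity, which is a simplification.

```java
final class TaskWorkerRunner {
    void run(Task task, Pipeline pipeline,
             Pipeline exceptionPipeline, Pipeline defaultExceptionPipeline) {
        try {
            pipeline.execute(task);                      // level 1: normal flow
        } catch (Exception first) {
            task.state = TaskState.ERROR;                // mark and fall back
            try {
                exceptionPipeline.execute(task);         // level 2: shared ExceptionPipeline
            } catch (Exception second) {
                defaultExceptionPipeline.execute(task);  // level 3: final fallback,
                                                         // answers with HTTP 500
            }
        }
    }
}
```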
6. Multi-level cache
As shown in fig. 10, to improve the gateway's throughput, the financial open platform uses a three-level cache mode to avoid the performance cost of the gateway interacting with the database directly.
As shown in fig. 10, the three-level cache mode consists of: a first-level local cache, a second-level Redis cache center, and a third-level database with persistent storage. The local cache mainly stores frequently used configurations and those loaded at system initialization; the Redis cache center stores the full configuration information from the database for other application servers to use; the database stores persistent data, and only the capability center may access it directly. Other application servers cannot connect to the database; they can obtain database data only by calling a capability-center interface or from the Redis cache center.
When configuration is added, modified, or deleted, the database records are updated first, then the data in the Redis cache center, and the gateway reads the data from Redis.
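A sketch of this write ordering follows, with assumed ConfigRepository and RedisWriter interfaces standing in for the capability center's persistence and cache clients:

```java
interface ConfigRepository { void save(String key, String value); }
interface RedisWriter { void set(String key, String value); }

final class ConfigWriter {
    private final ConfigRepository db;
    private final RedisWriter redis;

    ConfigWriter(ConfigRepository db, RedisWriter redis) {
        this.db = db;
        this.redis = redis;
    }

    void upsert(String key, String value) {
        db.save(key, value);     // 1) update the database record first
        redis.set(key, value);   // 2) then refresh the Redis cache center;
                                 //    gateways read the value from Redis
    }
}
```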
When the gateway requests configuration information, it first queries the local cache, then the Redis cache center; if Redis has no data, it asynchronously calls the capability center's interface, and after the capability center queries the configuration in the database it first stores it into Redis and then returns it to the gateway, as shown in fig. 11.
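The read path can be sketched as follows; LocalCache, RedisClient, and CapabilityCenterClient are assumed interfaces, and the asynchronous level-3 call is modeled with a CompletableFuture:

```java
import java.util.concurrent.CompletableFuture;

interface LocalCache { String get(String key); void put(String key, String value); }
interface RedisClient { String get(String key); }
interface CapabilityCenterClient { CompletableFuture<String> query(String key); }

final class ConfigLookup {
    private final LocalCache local;
    private final RedisClient redis;
    private final CapabilityCenterClient center;

    ConfigLookup(LocalCache local, RedisClient redis, CapabilityCenterClient center) {
        this.local = local;
        this.redis = redis;
        this.center = center;
    }

    CompletableFuture<String> getConfig(String key) {
        String v = local.get(key);                        // level 1: local cache
        if (v != null) return CompletableFuture.completedFuture(v);

        String r = redis.get(key);                        // level 2: Redis center
        if (r != null) {
            local.put(key, r);                            // warm the local cache
            return CompletableFuture.completedFuture(r);
        }
        // Level 3: asynchronous call to the capability center, which reads the
        // database, writes the value into Redis, and then returns it.
        return center.query(key).thenApply(dbValue -> {
            local.put(key, dbValue);
            return dbValue;
        });
    }
}
```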
The local cache uses a high-performance queue dispatcher to manage local cache events, and uses two local Java data structures to manage cached data: a LinkedHashMap object cacheStandby and a HashMap object cache. cacheStandby mainly implements the LRU (least-recently-used) eviction strategy, while cache holds the current local cache data. The local cache lookup and update logic is as follows (see the sketch after this list):
1) If the local cache is disabled, query Redis directly.
2) If the entry does not exist in the cache or has expired (current time minus cache-update time exceeds the cache expiry time), fetch the data from Redis.
3) If condition 2) is not met, fetch the data from the local cache.
4) Whichever of 2) or 3) applies, the local cache must then be updated.
5) If the local cache has no record, store the record at the head of the cacheStandby linked list and also store it into cache.
6) If the local cache has a record but the new and old values differ, update cache with the new value, delete the old value from cacheStandby, insert the new value at the head of its linked list, and update the record in cache.
7) If the local cache has a record and the new and old values match, move the old node in the cacheStandby linked list to the head of the list; cache is left untouched.
8) If the data in cacheStandby exceeds the maximum limit, delete the tail node of the linked list and synchronously delete the record from cache.
9) Every 1024 milliseconds, resynchronize the data in cacheStandby to cache.
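A condensed sketch of this cacheStandby/cache pair follows. An access-ordered LinkedHashMap provides the LRU behavior of steps 5) to 8); the expiry bookkeeping and the constants are assumptions, and the periodic 1024 ms resynchronization of step 9) is omitted for brevity.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

final class LocalConfigCache {
    private static final int MAX_ENTRIES = 1024;        // assumed capacity limit
    private static final long TTL_MILLIS = 60_000;      // assumed expiry time

    private record Entry(String value, long updatedAt) {
        boolean expired() { return System.currentTimeMillis() - updatedAt > TTL_MILLIS; }
    }

    // Serving copy: holds the current local cache data (the "cache" map).
    private final Map<String, Entry> cache = new HashMap<>();

    // LRU bookkeeping ("cacheStandby"): access-ordered, evicts its tail entry
    // and the matching record in cache when the size limit is exceeded.
    private final LinkedHashMap<String, Entry> cacheStandby =
        new LinkedHashMap<String, Entry>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<String, Entry> eldest) {
                if (size() > MAX_ENTRIES) {
                    cache.remove(eldest.getKey());      // step 8): evict in both maps
                    return true;
                }
                return false;
            }
        };

    synchronized String get(String key, Function<String, String> redisLoader) {
        Entry e = cache.get(key);
        if (e == null || e.expired()) {                 // step 2): miss or expired
            String fresh = redisLoader.apply(key);      // fall back to Redis
            Entry ne = new Entry(fresh, System.currentTimeMillis());
            cacheStandby.put(key, ne);                  // steps 5)/6): head of list
            cache.put(key, ne);
            return fresh;
        }
        cacheStandby.get(key);                          // step 7): refresh LRU order
        return e.value();                               // step 3): serve locally
    }
}
```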
The beneficial effects of the invention are as follows:
1. A fully asynchronous task-processing model with multi-task switching. The prior art adopts a service degradation/circuit-breaking mechanism, introducing Netflix's Hystrix: multithreaded task processing is achieved through resource isolation, blocked tasks fail fast without queuing, and failure fallback is provided. The invention instead adopts a self-developed, fully asynchronous task-processing model based on Netty: a blocked task waits without occupying a system thread during the wait, and error information is returned after a timeout.
2. The three-level cache with lazy loading fully exploits system performance. The prior art stores configuration information in a registry plus local files and initializes it at system startup. The invention stores configuration information using a three-level cache strategy with a configuration center, fetching configuration from the Redis cache center and storing it in the local cache only when needed.
3. The local cache strategy fully exploits the locality principle of programs and improves system efficiency. The invention designs its local caching strategy with reference to the three-level CPU cache in computer architecture.
4. Pipeline-Filter task processing. The prior art adopts a Filter-PRPE mechanism that loads configuration at startup; each Filter has two methods, doPre and doPost, with doPre called in forward order first and doPost called in reverse order afterwards to realize a bidirectional Filter mechanism. The invention adopts a Pipeline-Filter mode: each task corresponds to one Pipeline, configuration information is loaded from the cache at runtime, and a one-way linear execution mode is used.
5. The three-level exception fallback mechanism. The exception fallback mechanism ensures that no exceptional situation is missed, gives requesters a better experience, avoids timeouts or empty responses caused by exceptions, guarantees system stability, and ensures that an exception in an individual task cannot affect other tasks.
A second aspect.
Referring to fig. 13-16, an embodiment of the invention provides a processing system of a Netty-based high-performance gateway system, including:
the task receiving module 10 is configured to receive a connection request sent by a client, establish a data transmission channel, call a Netty Server processor to process the connection request, obtain a data channel and request data, package the data channel and the request data into a task, and place the task into a to-be-executed queue.
The task execution module 30 is configured to poll the task states in the to-be-executed queue and run the tasks whose state is to-be-executed.
In a specific embodiment, the system further includes:
a thread polling module 20, configured to determine whether an idle thread exists and, if so, to put the to-be-executed task into the idle thread for execution.
In another embodiment, the system further includes:
the priority processing module 40 is configured to poll the task states in the waiting queue; if a task in a priority-processing state exists, it is moved from the waiting queue into the to-be-executed queue and executed preferentially, wherein the priority-processing states include waiting for asynchronous IO, among others.
In another embodiment, the system further includes:
the network request module 50 is configured to receive a network request, package it into a to-be-executed task, and put the task into the to-be-executed queue.
The asynchronous operation module 60 is configured so that, when a task-execution thread is called to run the to-be-run task and a node needs to perform an asynchronous operation, the thread first initiates the asynchronous operation, updates the task's state to waiting, and puts the task into the waiting queue; after the asynchronous operation completes, the task's state is updated to to-be-executed, the task is moved from the waiting queue to the to-be-executed queue, and it is executed once an idle thread becomes available.
In another specific embodiment, the asynchronous operation module 60 is further configured to:
when a task-execution thread is called to run the to-be-run task and an exception occurs, the exception is handled through a three-level exception fallback mechanism, including:
when a task-execution thread runs the to-be-run task and an exception occurs, handling the exception through the second-level exception pipeline within the task-execution thread;
if an exception also occurs while the second-level exception pipeline is handling the exception, marking the currently executing task as being in an error state;
and processing tasks marked as being in the error state through the default exception pipeline.
In a third aspect.
The present invention provides an electronic device, including:
a processor, a memory, and a bus;
the bus is used for connecting the processor and the memory;
the memory is used for storing operation instructions;
the processor is configured to invoke the operation instructions, which cause the processor to perform operations corresponding to the processing method of the Netty-based high-performance gateway system described in the first aspect of the present application.
In an alternative embodiment, there is provided an electronic apparatus, as shown in fig. 17, an electronic apparatus 5000 shown in fig. 17 including: a processor 5001 and a memory 5003. The processor 5001 and the memory 5003 are coupled, such as via a bus 5002. Optionally, the electronic device 5000 may also include a transceiver 5004. It should be noted that the transceiver 5004 is not limited to one in practical application, and the structure of the electronic device 5000 is not limited to the embodiment of the present application.
The processor 5001 may be a CPU, general purpose processor, DSP, ASIC, FPGA or other programmable logic device, transistor logic device, hardware component, or any combination thereof. Which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure. The processor 5001 may also be a combination of processors implementing computing functionality, e.g., a combination comprising one or more microprocessors, a combination of DSPs and microprocessors, or the like.
Bus 5002 can include a path that conveys information between the aforementioned components. The bus 5002 may be a PCI bus or EISA bus, etc. The bus 5002 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 17, but this does not mean only one bus or one type of bus.
The memory 5003 may be, but is not limited to, a ROM or other type of static storage device that can store static information and instructions, a RAM or other type of dynamic storage device that can store information and instructions, an EEPROM, a CD-ROM or other optical disk storage, optical disk storage (including compact disk, laser disk, optical disk, digital versatile disk, blu-ray disk, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The memory 5003 is used for storing application program codes for executing the present solution, and the execution is controlled by the processor 5001. The processor 5001 is configured to execute application program code stored in the memory 5003 to implement the teachings of any of the foregoing method embodiments.
Among them, electronic devices include but are not limited to: mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like.
A fourth aspect.
The present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements a method for processing a Netty-based high-performance gateway system as set forth in the first aspect of the present application.
Yet another embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, which, when run on a computer, enables the computer to perform the corresponding content in the aforementioned method embodiments.

Claims (10)

1. A processing method of a Netty-based high-performance gateway system, characterized by comprising:
receiving a connection request sent by a client, establishing a data transmission channel, calling a Netty Server processor to process the connection request, obtaining a data channel and request data, packaging the data channel and the request data into a task, and putting the task into a to-be-executed queue;
and polling the task states in the to-be-executed queue and running the tasks whose state is to-be-executed.
2. The processing method of a Netty-based high-performance gateway system as claimed in claim 1, wherein before the polling of the task states in the to-be-executed queue and running of the tasks whose state is to-be-executed, the method comprises:
judging whether an idle thread exists, and if so, putting the to-be-executed task into the idle thread for execution.
3. The processing method of a Netty-based high-performance gateway system as claimed in claim 2, wherein after the judging whether an idle thread exists, the method further comprises:
polling the task states in the waiting queue; if a task in a priority-processing state exists, moving it from the waiting queue into the to-be-executed queue and executing it preferentially, wherein the priority-processing states include waiting for asynchronous IO, among others.
4. The processing method of a Netty-based high-performance gateway system according to claim 2, wherein,
before the judging whether an idle thread exists, the method further comprises:
receiving a network request, packaging the network request into a to-be-executed task, and putting it into the to-be-executed queue;
and after the polling of the task states in the to-be-executed queue and running of the tasks whose state is to-be-executed, the method further comprises:
when a task-execution thread is called to run a to-be-run task and a node needs to perform an asynchronous operation, first performing the asynchronous operation in the thread, updating the task's state to waiting, and putting the task into the waiting queue; after the asynchronous operation completes, updating the task's state to to-be-executed, moving the task from the waiting queue to the to-be-executed queue, and executing it once an idle thread becomes available.
5. The processing method of a Netty-based high-performance gateway system as claimed in claim 4, wherein the calling of a task-execution thread to run the to-be-run task comprises:
when a task-execution thread is called to run the to-be-run task and an exception occurs, handling the exception through a three-level exception fallback mechanism, which comprises:
when a task-execution thread runs the to-be-run task and an exception occurs, handling the exception through the second-level exception pipeline within the task-execution thread;
if an exception also occurs while the second-level exception pipeline is handling the exception, marking the currently executing task as being in an error state;
and processing tasks marked as being in the error state through the default exception pipeline.
6. A processing system of a Netty-based high-performance gateway system, comprising:
a task receiving module, configured to receive a connection request sent by a client, establish a data transmission channel, call a Netty Server processor to process the connection request, obtain a data channel and request data, package the data channel and the request data into a task, and place the task into a to-be-executed queue;
and a task execution module, configured to poll the task states in the to-be-executed queue and run the tasks whose state is to-be-executed.
7. The processing system of a Netty-based high-performance gateway system of claim 6, further comprising:
a thread polling module, configured to judge whether an idle thread exists and, if so, to put the to-be-executed task into the idle thread for execution.
8. The processing system of a Netty-based high-performance gateway system of claim 6, further comprising:
a priority processing module, configured to poll the task states in the waiting queue and, if a task in a priority-processing state exists, move it from the waiting queue into the to-be-executed queue and execute it preferentially, wherein the priority-processing states include waiting for asynchronous IO, among others.
9. The processing system of a Netty-based high-performance gateway system of claim 7, further comprising:
a network request module, configured to receive a network request, package it into a to-be-executed task, and put the task into the to-be-executed queue;
and an asynchronous operation module, configured so that, when a task-execution thread is called to run the to-be-run task and a node needs to perform an asynchronous operation, the thread first performs the asynchronous operation, updates the task's state to waiting, and puts the task into the waiting queue; after the asynchronous operation completes, the task's state is updated to to-be-executed, the task is moved from the waiting queue to the to-be-executed queue, and it is executed once an idle thread becomes available.
10. The processing system of a Netty-based high-performance gateway system of claim 9, wherein the asynchronous operation module is further configured to:
handle exceptions through a three-level exception fallback mechanism when a task-execution thread is called to run the to-be-run task and an exception occurs, which comprises:
when a task-execution thread runs the to-be-run task and an exception occurs, handling the exception through the second-level exception pipeline within the task-execution thread;
if an exception also occurs while the second-level exception pipeline is handling the exception, marking the currently executing task as being in an error state;
and processing tasks marked as being in the error state through the default exception pipeline.
CN202110630084.7A 2021-06-07 2021-06-07 Netty-based high-performance gateway system processing method and system Pending CN113641410A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110630084.7A CN113641410A (en) 2021-06-07 2021-06-07 Netty-based high-performance gateway system processing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110630084.7A CN113641410A (en) 2021-06-07 2021-06-07 Netty-based high-performance gateway system processing method and system

Publications (1)

Publication Number Publication Date
CN113641410A (en) 2021-11-12

Family

ID=78416013

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110630084.7A Pending CN113641410A (en) 2021-06-07 2021-06-07 Netty-based high-performance gateway system processing method and system

Country Status (1)

Country Link
CN (1) CN113641410A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115065588A (en) * 2022-05-31 2022-09-16 浪潮云信息技术股份公司 API fusing degradation implementation method and system based on back-end error codes
CN115065588B (en) * 2022-05-31 2024-04-05 浪潮云信息技术股份公司 API fusing degradation realization method and system based on back-end error code
CN115051987A (en) * 2022-06-06 2022-09-13 瞳见科技有限公司 Mobile terminal service distribution system and method for multiple nodes
CN115051987B (en) * 2022-06-06 2024-04-16 瞳见科技有限公司 Mobile terminal service distribution system and method for multiple nodes
CN115118590A (en) * 2022-06-22 2022-09-27 平安科技(深圳)有限公司 Method, device, system, equipment and storage medium for managing configuration data

Similar Documents

Publication Publication Date Title
US10817331B2 (en) Execution of auxiliary functions in an on-demand network code execution system
US11875173B2 (en) Execution of auxiliary functions in an on-demand network code execution system
JP7197612B2 (en) Execution of auxiliary functions on on-demand network code execution systems
CN113641410A (en) Netty-based high-performance gateway system processing method and system
US20190377604A1 (en) Scalable function as a service platform
CN106161537B (en) Method, device and system for processing remote procedure call and electronic equipment
US10671458B2 (en) Epoll optimisations
US7823170B2 (en) Queued asynchronous remote function call dependency management
US8006005B2 (en) Centralized polling service
US9231995B2 (en) System and method for providing asynchrony in web services
US20060075404A1 (en) Method and system for scheduling user-level I/O threads
CN114928579B (en) Data processing method, device, computer equipment and storage medium
US20230188516A1 (en) Multi-tenant mode for serverless code execution
WO2024016624A1 (en) Multi-cluster access method and system
WO2023046141A1 (en) Acceleration framework and acceleration method for database network load performance, and device
CN111200606A (en) Deep learning model task processing method, system, server and storage medium
CN110727507B (en) Message processing method and device, computer equipment and storage medium
US20120066554A1 (en) Application query control with cost prediction
CN111586140A (en) Data interaction method and server
CN111831402A (en) Method, apparatus and computer program product for managing software functions
US10348814B1 (en) Efficient storage reclamation for system components managing storage
CN111752728B (en) Message transmission method and device
US11366648B2 (en) Compiling monoglot function compositions into a single entity
CN114327404A (en) File processing method and device, electronic equipment and computer readable medium
US11861386B1 (en) Application gateways in an on-demand network code execution system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination