CN113127204B - Method and server for processing concurrent service based on reactor network model - Google Patents

Method and server for processing concurrent service based on reactor network model

Info

Publication number: CN113127204B
Application number: CN202110472998.5A
Authority: CN (China)
Prior art keywords: module, service, request, local, thread
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Other languages: Chinese (zh)
Other versions: CN113127204A
Inventors: 张银波, 陈良
Current assignee: Sichuan Hongmei Intelligent Technology Co Ltd (listed assignees may be inaccurate)
Original assignee: Sichuan Hongmei Intelligent Technology Co Ltd
Application filed by Sichuan Hongmei Intelligent Technology Co Ltd
Priority to CN202110472998.5A
Publication of CN113127204A, followed by grant and publication of CN113127204B


Classifications

    • G06F9/5038: Allocation of resources (e.g. CPU) to service a request, the resource being a machine (e.g. CPUs, servers, terminals), considering the execution order of a plurality of tasks, e.g. priority or time-dependency constraints
    • G06F3/061: Improving I/O performance (interfaces specially adapted for storage systems)
    • G06F3/0656: Data buffering arrangements
    • G06F3/0676: Magnetic disk device (single in-line storage device)
    • G06F9/5083: Techniques for rebalancing the load in a distributed system
    • G06F9/52: Program synchronisation; mutual exclusion, e.g. by means of semaphores
    • G06F9/542: Event management; broadcasting; multicasting; notifications
    • G06F9/544: Buffers; shared memory; pipes
    • G06F2209/5018: Thread allocation (indexing scheme relating to G06F9/50)
    • G06F2209/541: Client-server (indexing scheme relating to G06F9/54)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Stored Programmes (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Hardware Redundancy (AREA)
  • Computer And Data Communications (AREA)

Abstract

The method designs the server as a first module that interfaces with external clients and a second module that interfaces with the internal server. A plurality of multiplexers are set up and initialized in each module based on the reactor network model; the server then adopts an event-driven, non-blocking multiplexing communication architecture and processes independent tasks in parallel across the modules.

Description

Method and server for processing concurrent service based on reactor network model
Technical Field
The invention relates to the technical field of network communication, in particular to a method and a server for processing concurrent services based on a reactor network model.
Background
With the rapid development of the Internet and the Internet of Things, the number of users online and of networked devices has grown quickly, placing higher demands on the performance, concurrency, and availability of Internet back-end systems.
As services grow, the number of users and the level of concurrency keep rising, and so do the performance requirements on the system. Referring to fig. 1, the prior-art flow for processing concurrent services is: create a server socket, listen on a port, accept the user connection when a request arrives, create a working thread, have that thread read the user's request data and process the business logic, and finally return the processing result. In this flow, the call to `ServerSocket.accept()` blocks until a user connection is available; after each connection is accepted the system creates a working thread to handle that user's request, and the working thread blocks again when it calls `BufferedReader.readLine()` to read data, not returning until data arrives, the buffer is full, or the wait times out. A large amount of garbage is also generated in the process.
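The blocking flow described above can be sketched in Java (a minimal illustration of the prior-art thread-per-connection model, not code from the patent; class and method names are hypothetical):

```java
import java.io.*;
import java.net.*;

// Minimal sketch of the blocking, thread-per-connection model: accept()
// blocks until a client connects, and readLine() blocks until a full
// line of request data arrives.
public class BlockingEchoServer {
    private final ServerSocket serverSocket;

    public BlockingEchoServer() throws IOException {
        serverSocket = new ServerSocket(0);               // ephemeral port
        Thread acceptor = new Thread(() -> {
            try {
                while (true) {
                    Socket client = serverSocket.accept();    // blocks here
                    new Thread(() -> handle(client)).start(); // one thread per user
                }
            } catch (IOException closed) { /* server shut down */ }
        });
        acceptor.setDaemon(true);
        acceptor.start();
    }

    private void handle(Socket client) {
        try (Socket c = client;
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(c.getInputStream()));
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            String request = in.readLine();       // blocks until data arrives
            out.println("echo:" + request);       // "business logic" + reply
        } catch (IOException ignored) { }
    }

    public int port() { return serverSocket.getLocalPort(); }

    // A client round-trip: send one line, read the reply.
    public static String call(int port, String msg) throws IOException {
        try (Socket s = new Socket("127.0.0.1", port);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream()))) {
            out.println(msg);
            return in.readLine();
        }
    }
}
```

Every connected user costs a dedicated thread here, which is exactly the resource and blocking problem the disclosure targets.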
As a result, the current system cannot meet service requirements: concurrent service processing performance is low, the system's processing capability and capacity cannot be scaled, and concurrency is poor.
Disclosure of Invention
The invention provides a method and a server for processing concurrent services based on a reactor network model, which address the low performance, poor scalability, and poor concurrency of the prior art, thereby reducing coupling between threads and improving concurrent processing performance and stability.
To solve the above technical problems, one or more embodiments of the present specification are implemented as follows:
in a first aspect, a method for processing concurrent services based on a reactor network model is provided. The method is applied to a server composed of a first module and a second module, each of which is preconfigured with a plurality of multiplexers based on the reactor network model. The method includes:
the first module sends a connection request to the second module according to at least one service request received from an external client, so that connection channels matching the number of service requests are established between the first module and the second module and registered on the locally corresponding multiplexers, and communication channels matching the number of service requests are established between the service thread and the processing thread of the second module and registered on the locally corresponding multiplexers; the established connection channels and communication channels are all set to non-blocking;
the following operations are performed for each service request of the at least one service request, respectively:
the first module sends the service request to the second module through the established connection channel;
when a multiplexer local to the second module detects that a service request exists on a communication channel, the service request is acquired and parsed by a processing thread, and the parsing result is sent to a request queue;
the second module, through an input/output thread, reads parsing results in request-queue order, performs read/write operations on the disk according to the service event in each parsing result, and sends the service response produced by the completed read/write operation to a response queue;
the second module, through the processing thread, reads service responses in response-queue order and returns them to the first module;
after a multiplexer local to the first module detects that a connection channel has a service response, the service response is returned to the corresponding external client.
Optionally, the first module sending a connection request to the second module according to at least one service request received from an external client, so as to establish connection channels matching the number of service requests between the first module and the second module and register them on the locally corresponding multiplexers, and to establish communication channels matching the number of service requests between the service thread and the processing thread of the second module and register them on the locally corresponding multiplexers, specifically includes:
the first module sends a connection request to the second module according to at least one service request received from an external client;
after the second module detects the connection request on its port, it returns a connection response to the first module, establishes local connection channels matching the number of service requests, registers them with a plurality of local multiplexers, and establishes communication channels matching the number of service requests between the service thread and the processing thread, registering them on the locally corresponding multiplexers;
based on the received connection response, the first module creates local connection channels matching those created by the second module and registers them with a plurality of local multiplexers;
after the connection channels between the first module and the second module are successfully created, the connection channels and communication channels are all set to non-blocking.
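The channel setup of this step can be illustrated with `java.nio` primitives. The sketch below is an assumption-laden stand-in, not the patent's implementation: a `Pipe` models the internal communication channel between a service thread and a processing thread, and a `Selector` plays the multiplexer on which the non-blocking channel is registered.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.nio.charset.StandardCharsets;

// Channel setup sketch: set a channel to non-blocking, register it on a
// multiplexer (Selector), then detect and read a request through the
// multiplexer rather than by blocking on the channel itself.
public class ChannelSetup {
    public static String demo() throws IOException {
        Selector multiplexer = Selector.open();        // the "multiplexer"
        Pipe channel = Pipe.open();                    // internal comm channel
        channel.source().configureBlocking(false);     // set non-blocking
        channel.source().register(multiplexer, SelectionKey.OP_READ);

        // A "service thread" writes a request into the channel.
        channel.sink().write(
                ByteBuffer.wrap("req-1".getBytes(StandardCharsets.UTF_8)));

        // The "processing thread" side: wait (bounded) until the multiplexer
        // reports a ready channel, then read the request from it.
        multiplexer.select(1000);
        StringBuilder out = new StringBuilder();
        for (SelectionKey key : multiplexer.selectedKeys()) {
            if (key.isReadable()) {
                ByteBuffer buf = ByteBuffer.allocate(64);
                ((Pipe.SourceChannel) key.channel()).read(buf);
                buf.flip();
                out.append(StandardCharsets.UTF_8.decode(buf));
            }
        }
        multiplexer.selectedKeys().clear();
        return out.toString();
    }
}
```

The key point mirrored from the claim is that nothing blocks on the channel: readiness is observed on the multiplexer, and the read happens only once the event is reported.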
Optionally, when the multiplexer local to the second module detects that a communication channel has a service request, the service request being acquired and parsed by a processing thread and the parsing result being sent to a request queue specifically includes:
the second module, through the processing thread, polls the multiplexer to determine whether a service event has occurred on any communication channel;
if a service event is detected on any communication channel, the processing thread acquires the service event from that channel and sends the parsing result of the event to the request queue to be buffered while it waits;
otherwise, polling continues.
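The poll-then-continue loop can be sketched with `Selector.selectNow()`, which returns immediately with the count of ready channels instead of blocking; class and method names below are illustrative, not from the patent.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.nio.charset.StandardCharsets;

// Polling sketch: the processing thread repeatedly calls the multiplexer;
// if no service event has occurred it simply polls again.
public class PollingProcessor {
    public static int pollUntilEvent(Selector selector, int maxPolls)
            throws IOException {
        int polls = 0;
        while (polls < maxPolls) {
            polls++;
            if (selector.selectNow() > 0) {      // a business event occurred
                selector.selectedKeys().clear(); // dispatch to parsing would go here
                return polls;                    // how many polls it took
            }
            // otherwise: continue polling
        }
        return -1;  // no event within the polling budget
    }

    public static int demo() throws IOException {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);
        pipe.source().register(selector, SelectionKey.OP_READ);
        pipe.sink().write(ByteBuffer.wrap("x".getBytes(StandardCharsets.UTF_8)));
        return pollUntilEvent(selector, 1000);
    }
}
```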
Optionally, the second module acquiring the service event from the corresponding communication channel through the processing thread and sending the parsing result of the event to the request queue to wait specifically includes:
the second module, through the processing thread, acquires service events from the corresponding communication channels using byte buffers, where the event types include read events and/or write events;
after a business event is parsed, it is converted into serialized object data carrying a globally unique event ID;
the serialized object data is packaged and sent into the request queue to be buffered while it waits.
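The conversion into serialized object data carrying a globally unique event ID might look as follows. The use of `UUID` and Java object serialization is an assumption; the patent does not fix either mechanism.

```java
import java.io.*;
import java.util.UUID;

// Parsing-step sketch: a raw business event becomes a serializable object
// carrying a globally unique event ID, ready to be packaged onto the
// request queue.
public class BusinessEvent implements Serializable {
    final String eventId = UUID.randomUUID().toString(); // globally unique ID
    final String type;     // "read" or "write"
    final byte[] payload;  // raw event bytes taken from the byte buffer

    BusinessEvent(String type, byte[] payload) {
        this.type = type;
        this.payload = payload;
    }

    // Package the object as serialized bytes for the request queue.
    static byte[] serialize(BusinessEvent e) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(e);
        }
        return bytes.toByteArray();
    }

    static BusinessEvent deserialize(byte[] data)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                     new ObjectInputStream(new ByteArrayInputStream(data))) {
            return (BusinessEvent) in.readObject();
        }
    }
}
```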
Optionally, the second module reading parsing results in request-queue order through the input/output thread, performing disk read/write operations according to the service events in the parsing results, and sending the completed service responses to the response queue specifically includes:
the second module monitors the request queue through the input/output thread and, when a service event is detected, reads the corresponding parsing results from the request queue in order;
the corresponding read/write operations are performed on the local disk in buffered mode according to the type of the service event in the parsing result;
the service response produced by the completed read/write operation is sent to the response queue, and the processing thread is triggered.
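The request-queue/response-queue decoupling between the processing thread and the input/output thread can be sketched with blocking queues; the queue types and the temp-file stand-in for the disk are illustrative choices, not details fixed by the patent.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.concurrent.*;

// Pipeline sketch: the processing thread puts a parsed request on the
// request queue; the I/O thread takes requests in order, performs the disk
// write and read, and puts the service response on the response queue.
public class QueuePipeline {
    static final BlockingQueue<String> requestQueue = new LinkedBlockingQueue<>();
    static final BlockingQueue<String> responseQueue = new LinkedBlockingQueue<>();

    public static String runOnce(String request) throws Exception {
        Path disk = Files.createTempFile("svc", ".dat");  // stands in for the disk

        Thread ioThread = new Thread(() -> {
            try {
                String req = requestQueue.take();                // in queue order
                Files.write(disk, req.getBytes(StandardCharsets.UTF_8)); // write op
                String stored = new String(Files.readAllBytes(disk),
                                           StandardCharsets.UTF_8);      // read op
                responseQueue.put("done:" + stored);             // service response
            } catch (Exception e) { throw new RuntimeException(e); }
        });
        ioThread.start();

        requestQueue.put(request);      // processing thread enqueues the request
        String response = responseQueue.poll(5, TimeUnit.SECONDS);
        ioThread.join();
        Files.deleteIfExists(disk);
        return response;
    }
}
```

Because the two threads only meet at the queues, neither blocks on the other's work, which is the coupling reduction the disclosure claims.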
In a second aspect, a server for processing concurrent services based on a reactor network model is provided. The server comprises a first module and a second module, each preconfigured with a plurality of multiplexers based on the reactor network model, wherein:
the first module sends a connection request to the second module according to at least one service request received from an external client, so that connection channels matching the number of service requests are established between the first module and the second module and registered on the locally corresponding multiplexers, and communication channels matching the number of service requests are established between the service thread and the processing thread of the second module and registered on the locally corresponding multiplexers; the established connection channels and communication channels are all set to non-blocking;
the following operations are performed for each of the at least one service request:
the first module sends the service request to the second module through the established connection channel;
when a multiplexer local to the second module detects that a service request exists on a communication channel, the service request is acquired and parsed by a processing thread, and the parsing result is sent to a request queue;
the second module, through an input/output thread, reads parsing results in request-queue order, performs read/write operations on the disk according to the service event in each parsing result, and sends the service response produced by the completed read/write operation to a response queue;
the second module, through the processing thread, reads service responses in response-queue order and returns them to the first module;
after a multiplexer local to the first module detects that a connection channel has a service response, the service response is returned to the corresponding external client.
Optionally, the first module sending a connection request to the second module according to at least one service request received from an external client, so as to establish connection channels matching the number of service requests between the first module and the second module and register them on the locally corresponding multiplexers, and to establish communication channels matching the number of service requests between the service thread and the processing thread of the second module and register them on the locally corresponding multiplexers, specifically includes:
the first module sends a connection request to the second module according to at least one service request received from an external client;
after the second module detects the connection request on its port, it returns a connection response to the first module, establishes local connection channels matching the number of service requests, registers them with a plurality of local multiplexers, and establishes communication channels matching the number of service requests between the service thread and the processing thread, registering them on the locally corresponding multiplexers;
based on the received connection response, the first module creates local connection channels matching those created by the second module and registers them with a plurality of local multiplexers;
after the connection channels between the first module and the second module are successfully created, the first module and the second module respectively set the connection channels and the communication channels to non-blocking.
Optionally, when the multiplexer local to the second module detects that a communication channel has a service request, acquiring and parsing the service request by a processing thread and sending the parsing result to a request queue specifically includes:
the second module, through the processing thread, polls the multiplexer to determine whether a service event has occurred on any communication channel; if a service event is detected on any communication channel, the processing thread acquires the service event from that channel and sends the parsing result of the event to the request queue to be buffered while it waits; otherwise, polling continues.
Optionally, when acquiring the service event from the corresponding communication channel through the processing thread and sending the parsing result to the request queue to be buffered, the second module is specifically configured to:
acquire service events from the corresponding communication channels through the processing thread using byte buffers, where the event types include read events and/or write events;
after a business event is parsed, convert it into serialized object data carrying a globally unique event ID;
package the serialized object data and send it into the request queue to be buffered while it waits.
Optionally, when reading parsing results in request-queue order through the input/output thread, performing disk read/write operations according to the service events in the parsing results, and sending the completed service responses to the response queue, the second module is specifically configured to:
monitor the request queue through the input/output thread and, when a service event is detected, read the corresponding parsing results from the request queue in order;
perform the corresponding read/write operations on the local disk in buffered mode according to the type of the service event in the parsing result;
send the service response produced by the completed read/write operation to the response queue, and trigger the processing thread.
In a third aspect, an electronic device is provided, comprising:
a processor; and
a memory arranged to store computer executable instructions which, when executed, cause the processor to perform the method of the first aspect.
In a fourth aspect, there is provided a computer readable storage medium storing one or more programs which, when executed by an electronic device comprising a plurality of application programs, cause the electronic device to perform the method of the first aspect.
As can be seen from the technical solutions provided in one or more embodiments of the present disclosure, the server is designed as a first module for interfacing with external clients and a second module for interfacing with the internal server, with a plurality of multiplexers set up and initialized in each module based on the reactor network model. An event-driven, non-blocking multiplexing communication architecture then processes independent tasks in parallel across the modules. The use of multiple channels and multiple multiplexers improves the concurrency of the system; load-balancing and queueing techniques further improve concurrency and scalability while reducing coupling; and caching and buffering techniques improve disk read/write performance.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the technical solutions in the prior art, the drawings required by the embodiments and the description of the prior art are briefly introduced below. The drawings in the following description show some embodiments of the present invention; a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of a concurrent service processing flow in the prior art.
Fig. 2 is a schematic diagram of a server structure for processing concurrent services according to the present invention.
Fig. 3a is a schematic diagram of steps of a method for processing concurrent services based on a reactor network model according to an embodiment of the present disclosure.
Fig. 3b is a schematic flow chart of processing concurrent services according to an embodiment of the present disclosure.
FIG. 4a is a flowchart of a Nio Server thread provided by an embodiment of the present disclosure.
FIG. 4b is a Processor thread flow diagram provided by an embodiment of the present description.
FIG. 4c is a flowchart of IO threads provided by an embodiment of the present disclosure.
Fig. 5 is a schematic structural view of a first module provided in the embodiment of the present disclosure.
Fig. 6 is a schematic structural view of a second module provided in the embodiment of the present disclosure.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order that those skilled in the art will better understand the technical solutions in this specification, the technical solutions in one or more embodiments of this specification are described clearly and completely below with reference to the accompanying drawings. The described embodiments are evidently only a part, not all, of the embodiments of this specification. All other embodiments obtained from one or more embodiments of the present disclosure without inventive faculty are intended to be within its scope of protection.
First, some terms involved in the present specification will be briefly described.
The reactor model is a pattern for handling concurrent network reads and writes synchronously. Its core idea is that all read/write events to be processed are registered on a multiplexer while the main thread blocks on that multiplexer; once a read/write event is triggered or ready, the multiplexer returns and dispatches the corresponding event to a processor. The reactor model is an event-driven mechanism: when one event occurs it may drive several further events, and when many events fire at once, more and more events are processed, like a nuclear reactor releasing enormous energy; in this network model they are likewise processed concurrently.
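A minimal reactor loop in Java might look as follows, with a handler attached to each selection key and dispatched by the event loop. This is a generic sketch of the pattern under illustrative names, not the patented design; a `Pipe` stands in for a network channel.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.nio.charset.StandardCharsets;
import java.util.function.Consumer;

// Reactor sketch: events are registered on a Selector with their handler
// as the key attachment; the loop blocks on the multiplexer and, once an
// event is ready, dispatches it to the attached handler.
public class MiniReactor {
    public static String demo() throws IOException {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);

        StringBuilder log = new StringBuilder();
        // The "processor" for read events on this channel.
        Consumer<SelectionKey> handler = key -> {
            try {
                ByteBuffer buf = ByteBuffer.allocate(32);
                ((ReadableByteChannel) key.channel()).read(buf);
                buf.flip();
                log.append("handled:").append(StandardCharsets.UTF_8.decode(buf));
            } catch (IOException e) { throw new UncheckedIOException(e); }
        };
        pipe.source().register(selector, SelectionKey.OP_READ, handler);

        pipe.sink().write(ByteBuffer.wrap("ev".getBytes(StandardCharsets.UTF_8)));

        // One turn of the event loop: block on the multiplexer, then dispatch.
        selector.select(1000);
        for (SelectionKey key : selector.selectedKeys()) {
            @SuppressWarnings("unchecked")
            Consumer<SelectionKey> h = (Consumer<SelectionKey>) key.attachment();
            h.accept(key);
        }
        selector.selectedKeys().clear();
        return log.toString();
    }
}
```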
Still referring to fig. 1, throughout the processing flow of this architecture, the system's main thread and working threads handle every task in a synchronous, blocking manner, so a user always blocks waiting for the system's processing result; efficiency is very low, and so is system performance. When a large number of users issue concurrent requests, the system creates a large number of threads to handle them; each thread consumes memory and CPU time, and a thread blocks whenever external data is not ready. Blocking causes heavy thread context switching, continuously consuming system resources; resources are not released in time, fewer remain available, responses slow down, and the system becomes overwhelmed or even crashes. The defects of this approach are as follows. Low performance: after receiving a user request, a single thread synchronously reads the request data, parses it, processes the business logic, reads and writes the disk, and returns the result, so system efficiency is low. Not scalable: as the service develops, the system's traffic and data volume grow rapidly, requiring corresponding growth in processing capability, speed, memory, and disk capacity; a single server cannot bear the load, yet this network architecture provides no expansion mechanism to keep the growing service running normally. Poor concurrency: the current system is based on a traditional blocking network-programming architecture, and after a user request, a system thread is usually blocked while reading or writing data, occupying system resources the whole time.
Under high concurrency from a large number of users, the system may launch a large number of threads to handle the many requests. Because system resources are limited, once the number of threads reaches a certain level, memory overflows and the system finally crashes, so the service cannot continue and the user experience is extremely poor.
To solve the above problems in the prior art, this specification uses a reactor network model to design a high-performance, high-concurrency, scalable system architecture. Referring to fig. 2, the architecture consists of a first module 202 that provides services to clients and a second module 204 that provides services to servers; a multiplexer, Channels, and processing threads (Processors) are designed based on the reactor model. A Channel establishes the connection with a client and reads and writes data, while the multiplexer listens on multiple Channels, detects events on them, and dispatches each event to the corresponding processing thread. The logic of the second module is divided into three parts: network connection, event processing, and disk read/write, each handled independently by one of three core threads. An event-driven model, multiplexing, non-blocking I/O, queues, a load-balancing strategy, caching, and design patterns are then used so that each module handles its own dedicated work, reducing coupling, greatly improving performance and stability, and greatly increasing the number of concurrent connections supported.
The following describes the embodiments of the present specification in detail by way of specific examples.
Referring to fig. 3a, a schematic diagram of the steps of a method for processing concurrent services based on a reactor network model according to an embodiment of the present disclosure, the method may be applied to a server composed of a first module and a second module. In combination with the architecture shown in fig. 3b, a plurality of multiplexers are preconfigured in each of the first module and the second module based on the reactor network model, and the method may include the following steps:
step 302: the method comprises the steps that a first module sends a connection request to a second module according to at least one service request sent by a received external client, so that a connection channel matched with the number of the service requests is established between the first module and the second module and registered on a local corresponding multiplexer, and a communication channel matched with the number of the service requests is established between a service thread and a process thread of the second module and registered on the local corresponding multiplexer; wherein the established connection channel and the communication channel are both set to be non-blocking.
It should be understood that in the embodiment of the present specification, two modules need to be designed on the server: a first module that interfaces with the external client, and a second module that interfaces with the local server side. In addition, to realize highly concurrent processing, a plurality of multiplexers can be arranged in advance in each of the first module and the second module, where the number of multiplexers can be adjusted reasonably according to the server's local CPU and memory resources. Preferably, the number of multiplexers in the first module may be set to be the same as that in the second module to implement coordinated processing of services. The multiplexers need to be initialized before the server starts working; meanwhile, a request queue and a response queue are designed in advance in the second module, and their queue parameters are initialized respectively.
Alternatively, this step 302 may be performed as: the first module sends a connection request to the second module according to at least one service request sent by the received external client; after the second module monitors the connection request through the port, returning a connection response to the first module, establishing a local connection channel matched with the number of the service requests, registering the local connection channel to a plurality of local multiplexers, and establishing a communication channel matched with the number of the service requests between the service thread and the process thread and registering the communication channel on the local corresponding multiplexer; the first module creates a local connection channel matched with the local connection channel created by the second module based on the received connection response, and registers the local connection channel to a plurality of local multiplexers; after the connection channel between the first module and the second module is successfully created, the connection channel and the communication channel are both set to be non-blocking.
The first module establishes a connection channel to the second module through the port opened by the second module by sending a connection request; a communication channel is established inside the second module, and the channels are registered to their respective local multiplexers. Both the established connection channel and the communication channel are non-blocking, which ensures asynchronous, non-blocking processing in every thread.
The following operations are performed for each service request of the at least one service request, respectively:
step 304: and the first module sends the service request to the second module through the established connection channel.
After establishing the connection channel and the communication channel, the service request may be transmitted to the second module through the connection channel and the communication channel. Wherein the service request may be a request carrying a specific service event.
Step 306: and when the multiplexer local to the second module monitors that the communication channel has a service request, acquiring and analyzing the service request through a processing thread, and sending an analysis result to a request queue.
In a specific implementation, step 306 may be performed as: the second module judges whether a service event occurs in the communication channel by polling the multiplexer from the processing thread; if any communication channel is monitored to have a service event, the processing thread acquires the service event from the corresponding communication channel and sends the analysis result of analyzing the service event to the request queue for caching and waiting; otherwise, polling continues. A load balancing strategy can be used in the processing thread to allocate processing resources reasonably.
Optionally, when step 306 obtains the service event from the corresponding communication channel through the processing thread and sends the analysis result of analyzing the service event to the request queue for caching and waiting, the method may specifically include: the second module obtains service events from the corresponding communication channels through the processing thread using a byte buffer, wherein the types of the service events comprise read events and/or write events; after the service event is analyzed, converting it into serialized object data carrying a globally unique event ID; and packaging the serialized object data and sending it into the request queue for caching and waiting.
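The parse-and-enqueue step might be sketched as follows; the `ServiceEvent` shape, its field names, and the use of `UUID` for the globally unique event ID are assumptions for illustration, not the patent's actual data format:

```java
import java.io.Serializable;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Queue;
import java.util.UUID;

/** Sketch of the parse step: bytes read from a communication channel are
 *  decoded, wrapped into a serializable event object that carries a globally
 *  unique event ID, and offered to the request queue for the IO thread. */
public class EventParser {
    public enum Type { READ, WRITE }        // service event types named in the text

    public static class ServiceEvent implements Serializable {
        public final String eventId = UUID.randomUUID().toString(); // globally unique ID
        public final Type type;
        public final String payload;
        public ServiceEvent(Type type, String payload) {
            this.type = type;
            this.payload = payload;
        }
    }

    /** Decode the byte buffer, package the event, and enqueue it. */
    public static ServiceEvent parse(ByteBuffer buf, Type type, Queue<ServiceEvent> requestQueue) {
        buf.flip();                                         // switch the buffer to read mode
        String payload = StandardCharsets.UTF_8.decode(buf).toString();
        ServiceEvent event = new ServiceEvent(type, payload);
        requestQueue.offer(event);                          // cache-and-wait in the request queue
        return event;
    }
}
```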
Step 308: and the second module reads the analysis result according to the sequence of the request queue through the input/output thread, performs read-write operation on the disk according to the service event in the analysis result, and sends the service response after the read-write operation is completed to the response queue.
Step 308 may specifically include: the second module monitors the request queue through the input/output thread, and sequentially reads corresponding analysis results from the request queue when the service event is monitored; performing corresponding read-write operation on the local disk in a buffer mode according to the type of the service event in the analysis result; and sending the service response after the read-write operation is completed to a response queue, and triggering a processing thread.
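A sketch of the input/output thread's handling of one queued job follows; the `Job`/`Result` types, the temporary-file handling, and the `demo` driver are illustrative inventions, not the patent's actual structures:

```java
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/** Sketch of the input/output thread: drain the request queue in order,
 *  read or write the disk according to the event type, and post the result
 *  to the response queue to trigger the processing thread. */
public class IoWorker {
    public static final class Job {
        final String eventId; final boolean isWrite; final Path file; final byte[] data;
        public Job(String id, boolean write, Path file, byte[] data) {
            this.eventId = id; this.isWrite = write; this.file = file; this.data = data;
        }
    }
    public static final class Result {
        public final String eventId; public final byte[] data;
        Result(String id, byte[] data) { this.eventId = id; this.data = data; }
    }

    /** Handle exactly one job from the head of the request queue (FIFO order). */
    public static void handleOne(BlockingQueue<Job> requests, BlockingQueue<Result> responses) {
        try {
            Job job = requests.take();
            byte[] out;
            if (job.isWrite) {           // write event: buffered write, acknowledge with the bytes
                try (OutputStream os = new BufferedOutputStream(Files.newOutputStream(job.file))) {
                    os.write(job.data);
                }
                out = job.data;
            } else {                     // read event: return the file contents
                out = Files.readAllBytes(job.file);
            }
            responses.put(new Result(job.eventId, out));
        } catch (IOException | InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    /** Self-contained demo: a write event followed by a read event on a temp file. */
    public static String demo(String text) {
        try {
            Path file = Files.createTempFile("io-worker", ".bin");
            BlockingQueue<Job> requests = new LinkedBlockingQueue<>();
            BlockingQueue<Result> responses = new LinkedBlockingQueue<>();
            requests.put(new Job("e1", true, file, text.getBytes()));
            handleOne(requests, responses);       // performs the write
            requests.put(new Job("e2", false, file, null));
            handleOne(requests, responses);       // performs the read
            responses.take();                     // discard the write acknowledgement
            return new String(responses.take().data);
        } catch (IOException | InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
}
```

In the real architecture `handleOne` would run in a loop on a dedicated IO thread while the Processor threads produce jobs concurrently; the blocking queues preserve request-queue order as step 308 requires.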
Step 310: and the second module reads the service response according to the sequence of the response queue through the processing thread and returns the service response to the first module.
In step 308 and step 310, the service data and the service response are cached in a queue manner, so as to reduce the coupling and improve the concurrent processing performance.
Step 312: and after the multiplexer local to the first module monitors that the connection channel has service response, the service response is returned to the corresponding external client.
The construction and use flow of the server shown in fig. 2 will be described.
-first module side: the system comprises an external request entry layer, a logic processing layer and a data layer, wherein the external request entry layer provides an interface outwards and is used for receiving service requests; the logic processing layer encapsulates the request data and then sends the request data to the second module; the data layer encapsulates the request data and returns the result data.
1.1, a plurality of multiplexers are initialized at the first module side.
And 1.2, a service request entry creating method is used for receiving the service request sent by the external client.
And 1.3, after receiving the service request, opening a connection channel and setting the connection channel to be in a non-blocking mode, and simultaneously setting the parameters of client connection and asynchronously connecting the server.
1.4, judging whether the connection is successful, if so, directly registering the read status bit to the multiplexer, otherwise, registering the connection status bit to the multiplexer, and monitoring the response of the server.
1.5, polling the multiplexer, judging whether the connection is finished according to the event type, and registering the read status bit to the multiplexer if the connection is finished.
1.6, after registering the read status bit to the multiplexer, the user request data is transcoded into buffer data and then sent to the server side.
1.7, after the data is sent, polling the multiplexer monitors for a read event, reads the data processed by the second module, packages the data and returns it to the external client.
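Steps 1.3 through 1.5 above can be sketched with Java NIO's asynchronous connect; the class and method names here are illustrative, and the loopback `demo` listener exists only so the handshake can complete:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

/** Steps 1.3–1.5 in miniature: open a non-blocking channel, connect to the
 *  server asynchronously, and register either the read or the connect
 *  status bit depending on whether the handshake finished immediately. */
public class ClientConnect {
    public static SocketChannel connect(Selector selector, int port) {
        try {
            SocketChannel ch = SocketChannel.open();
            ch.configureBlocking(false);                          // 1.3: non-blocking mode
            boolean done = ch.connect(new InetSocketAddress("127.0.0.1", port));
            if (done) {
                ch.register(selector, SelectionKey.OP_READ);      // 1.4: connected at once
            } else {
                ch.register(selector, SelectionKey.OP_CONNECT);   // 1.4: await the server
                while (!ch.isConnected()) {                       // 1.5: poll the multiplexer
                    selector.select();
                    for (SelectionKey key : selector.selectedKeys()) {
                        if (key.isConnectable() && ch.finishConnect()) {
                            key.interestOps(SelectionKey.OP_READ); // switch to the read bit
                        }
                    }
                    selector.selectedKeys().clear();
                }
            }
            return ch;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    /** Loopback demo: a throwaway listener lets the handshake complete. */
    public static boolean demo() {
        try (ServerSocketChannel srv = ServerSocketChannel.open();
             Selector sel = Selector.open()) {
            srv.bind(new InetSocketAddress("127.0.0.1", 0));
            SocketChannel ch = connect(sel, srv.socket().getLocalPort());
            boolean ok = ch.isConnected() && !ch.isBlocking();
            ch.close();
            return ok;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```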
-second module side: the method comprises the operations of monitoring the connection of the first module, processing read-write events, generating a communication channel, registering a multiplexer, reading analysis data, logically processing, queuing, reading and writing magnetic disks and the like.
2.1, initializing and setting a plurality of multiplexers on the second module side.
And 2.2, creating a connecting channel between the second module and the first module, and monitoring the connection of the client by the binding port.
2.3 setting the blocking mode of the connection channel to be a non-blocking mode, registering the connection channel to the multiplexer.
And 2.4, polling to call a multiplexer method, and establishing a communication channel when a connection request or a read-write event occurs to the first module.
And 2.5, setting the communication channel blocking mode to be a non-blocking mode, registering the communication channel to the multiplexer, and waiting for the first module to read and write data.
And 2.6, polling to call a multiplexer method, when a read-write event triggers, reading channel data, performing data analysis and logic processing after the data is read, and putting the processed data into a request queue.
And 2.7, reading the data of the request queue, reading the data of the disk or writing the data into the disk according to the service type, and putting the processing result into the response queue after the processing is completed.
And 2.8, reading the response queue data, and returning to the first module after packaging the processing result.
In the above scheme, the server is divided into a first module and a second module. The first module mainly functions to: provide a service interface to the outside as the user request entry; send the user request to the second module; and receive the processing result of the second module, package it, and return it to the external client. The second module is designed as a three-layer architecture to process the client request: the first layer is composed of NioServer threads and is responsible for handling client connections; the second layer is composed of Processor threads and is responsible for the logical processing of received data; the third layer is composed of IO threads and is responsible for reading and writing data to disk.
Referring to fig. 4a, the NioServer thread, i.e., the non-blocking I/O server thread, processes connection requests from the first module. The process is as follows: create a channel ServerSocketChannel for connecting the first module and the second module, set the channel to non-blocking, register the connection channel to a multiplexer Selector, and finally listen on a port of the second module to wait for a connection request from the first module. The multiplexer is polled to judge whether an event has triggered; when the first module issues a connection request, a communication channel SocketChannel is generated between the second module and the first module. The communication channel is set to non-blocking mode and registered with the multiplexer; after the registration is completed, the first module can read and write data with the second module.
Referring to FIG. 4b, the Processor thread is a logical processing thread that processes data sent by the first module. First, the multiplexer is polled to judge whether a read-write event has occurred on the communication channel SocketChannel between the first module and the second module. When a read-write event triggers on the channel, a byte buffer is used to read the data; the data is analyzed and processed and then put into the request queue. The thread then receives a response queue message to obtain the IO thread's processing result, and finally packages the result data and returns it to the first module.
Referring to fig. 4c, the IO thread is the thread that reads and writes disk data. When the request queue has a message, it reads the request queue data and processes it accordingly; then, according to the event type, it reads the second module's disk data or writes the request data to the second module's disk; finally it returns the processing result to the response queue to notify the Processor thread to acquire the processing result data.
According to the technical scheme, based on an event-driven mode and a non-blocking multiplexing communication architecture, a first module interfacing with the external client and a second module interfacing with the local server side are designed in layers so that multiple modules process independent tasks in parallel. Multiple channels and multiple multiplexers improve the concurrency of the system; a load balancing technique and queues improve concurrency and scalability while reducing coupling; and caching and buffering techniques improve the read-write performance of the disk.
The embodiment of the invention provides a server for processing concurrent services based on a reactor network model, which is shown by referring to fig. 2, and comprises: a first module 202 and a second module 204, wherein each of said first module 202 and said second module 204 is preconfigured with a plurality of multiplexers based on a reactor network model,
The first module 202 sends a connection request to the second module 204 according to at least one service request sent by the received external client, so as to establish a connection channel matched with the number of the service requests between the first module 202 and the second module 204 and register the connection channel on a local corresponding multiplexer, and establish a communication channel matched with the number of the service requests between a service thread and a process thread of the second module 204 and register the communication channel on the local corresponding multiplexer; wherein, the established connection channel and the communication channel are both set as non-blocking;
the following operations are performed for each service request of the at least one service request, respectively:
the first module 202 sends the service request to the second module 204 through the established connection channel;
when the multiplexer local to the second module 204 monitors that the communication channel has a service request, the service request is acquired and analyzed through a processing thread, and the analysis result is sent to a request queue;
after the second module 204 reads the analysis result according to the sequence of the request queue through the input/output thread, performing read-write operation on the disk according to the service event in the analysis result, and sending the service response after completing the read-write operation to the response queue;
After the second module 204 reads the service responses according to the sequence of the response queues through the processing thread, the service responses are returned to the first module 202;
after the multiplexer local to the first module 202 monitors that the connection channel has a service response, the service response is returned to the corresponding external client.
Optionally, the first module sends a connection request to the second module according to at least one service request sent by the received external client, so as to establish a connection channel matched with the number of the service requests between the first module and the second module and register the connection channel on a local corresponding multiplexer, and establish a communication channel matched with the number of the service requests between a service thread and a process thread of the second module and register the communication channel on the local corresponding multiplexer, which specifically includes:
the first module sends a connection request to the second module according to at least one service request sent by the received external client;
after the second module monitors the connection request through the port, returning a connection response to the first module, establishing a local connection channel matched with the number of the service requests, registering the local connection channel to a plurality of local multiplexers, and establishing a communication channel matched with the number of the service requests between the service thread and the process thread and registering the communication channel on the local corresponding multiplexer;
The first module creates a local connection channel matched with the local connection channel created by the second module based on the received connection response, and registers the local connection channel to a plurality of local multiplexers;
after the connection channel between the first module and the second module is successfully created, the first module and the second module respectively set the connection channel and the communication channel to be non-blocking.
Optionally, when the multiplexer local to the second module monitors that the communication channel has a service request, the service request is acquired and parsed by a processing thread, and the parsing result is sent to a request queue, which specifically includes:
the second module judges whether a service event occurs in the communication channel or not through processing the thread polling call multiplexer; if any communication channel is monitored to have a service event, acquiring the service event from the corresponding communication channel through a processing thread and sending an analysis result of analyzing the service event to a request queue for caching and waiting; otherwise, continuing to poll the call.
Optionally, when the second module obtains the service event from the corresponding communication channel through the processing thread and sends the analysis result of analyzing the service event to the request queue for buffering waiting, the second module is specifically configured to:
Acquiring service events from the corresponding communication channels by processing threads by using byte buffers, wherein the types of the service events comprise read events and/or write events;
after the business event is analyzed, converting the business event into serialized object data carrying a global unique event ID;
and packaging the serialized object data and sending the packaged serialized object data into a request queue for caching and waiting.
Optionally, after the second module reads the analysis result according to the request queue sequence through the input/output thread, performing read/write operation on the disk according to the service event in the analysis result, and when sending the service response after completing the read/write operation to the response queue, the second module is specifically configured to:
monitoring a request queue through an input/output thread, and sequentially reading corresponding analysis results from the request queue when a business event is monitored;
performing corresponding read-write operation on the local disk in a buffer mode according to the type of the service event in the analysis result;
and sending the service response after the read-write operation is completed to a response queue, and triggering a processing thread.
Further, referring to fig. 5, the first module may use the Spring Boot framework to build the project and design it with the MVC layering idea, comprising: a request entry layer 502, a logic processing layer 504, and a data layer 506; the request entry layer 502 receives user requests; the logic processing layer 504 encapsulates the request data and sends it to the second module; the data layer 506 encapsulates the request data and returns the result data.
Referring to fig. 6, the second module uses the Spring Boot framework to build the project and can be divided into: a connection processing sub-module 602, a logic processing sub-module 604, a read-write data sub-module 606, and a queue sub-module 608;
the connection processing sub-module 602 creates a connection channel between the first module and the second module, binds a port to listen for connections from the first module, and sets the blocking mode of the connection channel to non-blocking, which lets one thread handle multiple first-module connections and thereby improves the concurrency of the system; finally it registers the connection channel to the multiplexer. A set of multiplexers is initialized according to the CPU performance and memory size of the second module, and when a connection event triggers, the communication channels between the first module and the second module are dynamically registered to the multiplexers using a polling strategy. The polling strategy distributes a large number of client connection requests evenly across multiple multiplexers to achieve load balancing and further improve the concurrency of the system.
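The polling (round-robin) registration strategy might look like the following sketch; `SelectorGroup`, its sizing, and the `wakeup` handling are assumptions for illustration, not the patent's implementation:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.concurrent.atomic.AtomicInteger;

/** Sketch of the polling registration strategy: accepted channels are spread
 *  evenly over a pre-initialised set of multiplexers so that connection load
 *  is balanced across processing threads. */
public class SelectorGroup {
    private final Selector[] selectors;
    private final AtomicInteger next = new AtomicInteger();

    /** The set size would be tuned to the CPU and memory of the second module. */
    public SelectorGroup(int size) {
        try {
            selectors = new Selector[size];
            for (int i = 0; i < size; i++) {
                selectors[i] = Selector.open();
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    /** Round-robin pick: consecutive registrations hit consecutive selectors. */
    public int nextIndex() {
        return Math.floorMod(next.getAndIncrement(), selectors.length);
    }

    /** Register a freshly accepted channel on the next selector in rotation. */
    public void register(SocketChannel ch) {
        try {
            Selector sel = selectors[nextIndex()];
            ch.configureBlocking(false);
            sel.wakeup();                 // unblock a concurrent select() before registering
            ch.register(sel, SelectionKey.OP_READ);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

With each selector owned by one processing thread, the rotation gives each thread roughly an equal share of connections without any shared bookkeeping beyond the counter.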
The logic processing sub-module 604 polls the multiplexer to judge whether a read-write event has occurred. When a read-write event triggers, a byte buffer is used to read the data, which is converted into serialized object data after analysis; a globally unique event ID is then generated, and the packaged data is placed into the request queue. Finally, the sub-module receives the return-result message for that event ID from the response queue and returns the packaged processing result message to the first module.
The read-write data sub-module 606 listens for request queue messages, reads the queue data when the request queue gives a message notification, then uses caching and buffering when reading and writing the disk to improve system performance, and finally puts the processing result into the response queue to notify the logic processing module to receive and process the message.
The queue sub-module 608 includes a request queue and a response queue; the request queue uses a concurrency-safe queue to ensure data consistency and is responsible for storing clients' request data, while the response queue is responsible for storing processing result data. Business logic is decoupled from disk task processing and reading/writing by means of asynchronous queue notification, which improves system performance and extensibility.
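The asynchronous-notification decoupling can be sketched with two blocking queues; the upper-casing stand-in for the disk work and all names here are illustrative assumptions:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/** Sketch of the queue sub-module: the Processor side and the IO side never
 *  call each other directly; they exchange messages through two
 *  concurrency-safe queues, decoupling business logic from disk work. */
public class QueuePair {
    private final BlockingQueue<String> request = new LinkedBlockingQueue<>();
    private final BlockingQueue<String> response = new LinkedBlockingQueue<>();

    /** Start a stand-in IO thread (here it just upper-cases the message),
     *  enqueue one request, and wait for the asynchronous reply. */
    public String roundTrip(String msg) {
        Thread ioThread = new Thread(() -> {
            try {
                response.put(request.take().toUpperCase()); // "disk work" stand-in
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        ioThread.start();
        try {
            request.put(msg);          // Processor side: enqueue and carry on
            return response.take();    // later: pick up the IO thread's result
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Because the two sides share only the queues, either can be replaced or scaled independently, which is the extensibility benefit the text attributes to the queue sub-module.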
It will be appreciated that the architecture illustrated by the embodiments of the present invention does not constitute a specific limitation on servers that handle concurrent services based on a reactor network model. In other embodiments of the invention, a server that processes concurrent services based on a reactor network model may include more or fewer components than shown, or may combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The content of information interaction and execution process between the units in the device is based on the same conception as the embodiment of the method of the present invention, and specific content can be referred to the description in the embodiment of the method of the present invention, which is not repeated here.
The embodiment of the invention also provides an electronic device, which is shown with reference to fig. 7, and includes: at least one memory and at least one processor;
the at least one memory for storing a machine readable program;
the at least one processor is configured to invoke the machine-readable program to perform the method for processing concurrent services based on the reactor network model according to any of the embodiments of the present invention.
The embodiment of the invention also provides a computer readable medium, wherein the computer readable medium stores computer instructions, and the computer instructions, when executed by a processor, cause the processor to execute the method for processing concurrent services based on the reactor network model in any embodiment of the invention. Specifically, a system or apparatus provided with a storage medium on which a software program code realizing the functions of any of the above embodiments is stored, and a computer (or CPU or MPU) of the system or apparatus may be caused to read out and execute the program code stored in the storage medium.
In this case, the program code itself read from the storage medium may realize the functions of any of the above-described embodiments, and thus the program code and the storage medium storing the program code form part of the present invention.
Examples of the storage medium for providing the program code include a floppy disk, a hard disk, a magneto-optical disk, an optical disk (e.g., CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD+RW), a magnetic tape, a nonvolatile memory card, and a ROM. Alternatively, the program code may be downloaded from a server computer by a communication network.
Further, it should be apparent that the functions of any of the above-described embodiments may be implemented not only by executing the program code read out by the computer, but also by causing an operating system or the like operating on the computer to perform part or all of the actual operations based on the instructions of the program code.
Further, it is understood that the program code read out by the storage medium is written into a memory provided in an expansion board inserted into a computer or into a memory provided in an expansion unit connected to the computer, and then a CPU or the like mounted on the expansion board or the expansion unit is caused to perform part and all of actual operations based on instructions of the program code, thereby realizing the functions of any of the above embodiments.
It should be noted that not all the steps and modules in the above flowcharts and the system configuration diagrams are necessary, and some steps or modules may be omitted according to actual needs. The execution sequence of the steps is not fixed and can be adjusted as required. The system structure described in the above embodiments may be a physical structure or a logical structure, that is, some modules may be implemented by the same physical entity, or some modules may be implemented by multiple physical entities, or may be implemented jointly by some components in multiple independent devices.
In the above embodiments, the hardware unit may be mechanically or electrically implemented. For example, a hardware unit may include permanently dedicated circuitry or logic (e.g., a dedicated processor, FPGA, or ASIC) to perform the corresponding operations. The hardware unit may also include programmable logic or circuitry (e.g., a general-purpose processor or other programmable processor) that may be temporarily configured by software to perform the corresponding operations. The particular implementation (mechanical, or dedicated permanent, or temporarily set) may be determined based on cost and time considerations.
While the invention has been illustrated and described in detail in the drawings and in the preferred embodiments, the invention is not limited to the disclosed embodiments, and it will be appreciated by those skilled in the art that the technical features of the various embodiments described above may be combined to produce further embodiments of the invention, which are also within the scope of the invention.

Claims (10)

1. A method for processing concurrent services based on a reactor network model, the method being applied to a server formed by a first module and a second module, wherein a plurality of multiplexers are preconfigured in the first module and the second module based on the reactor network model, respectively, the method comprising:
the method comprises the steps that a first module sends a connection request to a second module according to at least one service request sent by a received external client, so that a connection channel matched with the number of the service requests is established between the first module and the second module and registered on a local corresponding multiplexer, and a communication channel matched with the number of the service requests is established between a service thread and a process thread of the second module and registered on the local corresponding multiplexer; wherein, the established connection channel and the communication channel are both set as non-blocking;
The following operations are performed for each service request of the at least one service request, respectively:
the first module sends the service request to the second module through the established connection channel;
when a multiplexer local to the second module monitors that a service request exists in a communication channel, acquiring and analyzing the service request through a processing thread, and sending an analysis result to a request queue;
the second module reads the analysis result according to the sequence of the request queue through the input/output thread, performs read-write operation on the disk according to the service event in the analysis result, and sends the service response after the read-write operation is completed to the response queue;
the second module reads the service response according to the sequence of the response queue through the processing thread and returns the service response to the first module;
and after the multiplexer local to the first module monitors that the connection channel has service response, the service response is returned to the corresponding external client.
2. The method for processing concurrent services based on a reactor network model according to claim 1, wherein the first module sends a connection request to the second module according to at least one service request sent by the received external client, so as to establish a connection channel matching the number of service requests between the first module and the second module and register the connection channel on a local corresponding multiplexer, and establish a communication channel matching the number of service requests between a service thread and a process thread of the second module and register the communication channel on the local corresponding multiplexer, specifically comprising:
The first module sends a connection request to the second module according to at least one service request sent by the received external client;
after the second module monitors the connection request through the port, returning a connection response to the first module, establishing a local connection channel matched with the number of the service requests, registering the local connection channel to a plurality of local multiplexers, and establishing a communication channel matched with the number of the service requests between the service thread and the process thread and registering the communication channel on the local corresponding multiplexer;
the first module creates a local connection channel matched with the local connection channel created by the second module based on the received connection response, and registers the local connection channel to a plurality of local multiplexers;
after the connection channel between the first module and the second module is successfully created, the connection channel and the communication channel are both set to be non-blocking.
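The final step of claim 2, setting the established channels to non-blocking and registering them with a local multiplexer, can be illustrated with Python's `selectors` module playing the multiplexer role. A `socketpair` stands in for the connection channel between the two modules; this is a sketch under those assumptions, not the patent's implementation.

```python
import selectors
import socket

first_end, second_end = socket.socketpair()   # stands in for one connection channel
for channel in (first_end, second_end):
    channel.setblocking(False)                # claim 2: channels set to non-blocking

selector = selectors.DefaultSelector()        # the local "multiplexer"
selector.register(second_end, selectors.EVENT_READ)

first_end.sendall(b"service request")
events = selector.select(timeout=1)           # multiplexer reports the ready channel
key, _mask = events[0]
data = key.fileobj.recv(1024)
print(data)   # b'service request'
```

Non-blocking registration is what lets one multiplexer supervise many channels at once instead of dedicating a blocked thread to each connection.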
3. The method for processing concurrent services based on a reactor network model according to claim 1 or 2, wherein the step in which, when a multiplexer local to the second module detects that a communication channel carries a service request, the service request is acquired and parsed by a processing thread and the parsing result is sent to a request queue, specifically comprises:
the second module determines, by polling the multiplexer through the processing thread, whether a service event has occurred on any communication channel;
if a service event is detected on any communication channel, the processing thread acquires the service event from the corresponding communication channel and sends the result of parsing the service event to the request queue, where it is cached while waiting;
otherwise, polling continues.
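The poll-dispatch-or-poll-again loop of claim 3 maps directly onto a `select` loop. In this sketch the wire format (`EVENT:type:key`) and all names are illustrative assumptions; the point is the shape of the loop, not the protocol.

```python
import queue
import selectors
import socket

sel = selectors.DefaultSelector()                  # local multiplexer
request_queue = queue.Queue()

service_end, processing_end = socket.socketpair()  # one "communication channel"
processing_end.setblocking(False)
sel.register(processing_end, selectors.EVENT_READ)

service_end.sendall(b"EVENT:read:a")               # service thread emits an event

result = None
while result is None:
    events = sel.select(timeout=0.1)               # processing thread polls the multiplexer
    if not events:
        continue                                   # no service event: keep polling
    for key, _mask in events:
        raw = key.fileobj.recv(1024)
        _tag, ev_type, ev_key = raw.decode().split(":")   # "parse" the event
        result = {"type": ev_type, "key": ev_key}
        request_queue.put(result)                  # parsing result waits in the queue

print(request_queue.get())   # {'type': 'read', 'key': 'a'}
```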
4. The method for processing concurrent services based on a reactor network model according to claim 3, wherein the step in which the second module acquires the service event from the corresponding communication channel through the processing thread and sends the result of parsing the service event to the request queue to wait, specifically comprises:
the second module acquires service events from the corresponding communication channels through the processing thread using a byte buffer, wherein the types of the service events comprise read events and/or write events;
after a service event is parsed, it is converted into serialized object data carrying a globally unique event ID;
and the serialized object data is packaged and sent into the request queue, where it is cached while waiting.
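Claim 4's parse, tag-with-unique-ID, serialize, and enqueue sequence can be sketched as below. The JSON envelope, the space-delimited raw format, and the function name are assumptions; the patent only specifies a byte buffer, a globally unique event ID, and serialized object data.

```python
import json
import queue
import uuid

request_queue = queue.Queue()

def parse_and_enqueue(byte_buffer: bytes) -> str:
    """Parse a raw service event, attach a globally unique ID, serialize, enqueue."""
    event_type, payload = byte_buffer.decode().split(" ", 1)
    assert event_type in ("read", "write")     # event types: read and/or write
    event_id = str(uuid.uuid4())               # globally unique event ID
    serialized = json.dumps(                   # serialized object data
        {"id": event_id, "type": event_type, "payload": payload}
    )
    request_queue.put(serialized)              # cached in the queue while waiting
    return event_id

eid = parse_and_enqueue(b"write hello")
record = json.loads(request_queue.get())
print(record["type"])   # write
```

Carrying the unique ID through serialization is what lets the response queue later match each completed disk operation back to the request that produced it.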
5. The method for processing concurrent services based on a reactor network model according to claim 3, wherein the step in which the second module reads the parsing results in request-queue order through the input/output thread, performs read and write operations on the disk according to the service events in the parsing results, and sends the service responses produced by the completed read and write operations to the response queue, specifically comprises:
the second module monitors the request queue through the input/output thread and, when a service event is detected, sequentially reads the corresponding parsing results from the request queue;
the corresponding read or write operation is performed on the local disk in buffered mode according to the type of the service event in the parsing result;
and the service response produced by the completed read or write operation is sent to the response queue, triggering the processing thread.
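Claim 5's buffered disk step can be sketched as follows: the input/output thread takes parsing results in queue order, performs the matching buffered read or write on the local disk, and forwards the response. The file layout, buffer size, and names are illustrative assumptions.

```python
import os
import queue
import tempfile

request_queue = queue.Queue()
response_queue = queue.Queue()
disk_dir = tempfile.mkdtemp()                 # stands in for the local disk

def handle_one():
    """One input/output-thread step: dequeue, do buffered disk I/O, respond."""
    event = request_queue.get()               # in request-queue order
    path = os.path.join(disk_dir, event["key"])
    if event["type"] == "write":
        with open(path, "wb", buffering=8192) as f:   # buffered write
            f.write(event["value"])
        response = {"id": event["id"], "result": "ok"}
    else:
        with open(path, "rb", buffering=8192) as f:   # buffered read
            response = {"id": event["id"], "result": f.read()}
    response_queue.put(response)              # triggers the processing thread

request_queue.put({"id": 1, "type": "write", "key": "a.bin", "value": b"data"})
request_queue.put({"id": 2, "type": "read", "key": "a.bin"})
handle_one()
handle_one()
r1 = response_queue.get()
r2 = response_queue.get()
print(r2)   # {'id': 2, 'result': b'data'}
```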
6. A server for processing concurrent services based on a reactor network model, the server comprising a first module and a second module, wherein a plurality of multiplexers are preconfigured in each of the first module and the second module based on the reactor network model, and wherein:
the first module sends a connection request to the second module according to at least one service request sent by an external client, so as to establish connection channels matching the number of service requests between the first module and the second module and register them with the locally corresponding multiplexers, and to establish communication channels matching the number of service requests between a service thread and a processing thread of the second module and register them with the locally corresponding multiplexers; the established connection channels and communication channels are both set to non-blocking;
the following operations are performed for each service request of the at least one service request:
the first module sends the service request to the second module through the established connection channel;
when a multiplexer local to the second module detects that a communication channel carries a service request, the service request is acquired and parsed by a processing thread, and the parsing result is sent to a request queue;
the second module reads the parsing results in request-queue order through the input/output thread, performs read and write operations on the disk according to the service events in the parsing results, and sends the service responses produced by the completed read and write operations to the response queue;
the second module reads the service responses in response-queue order through the processing thread and returns them to the first module;
and after the multiplexer local to the first module detects a service response on the connection channel, the service response is returned to the corresponding external client.
7. The server for processing concurrent services based on a reactor network model according to claim 6, wherein the step in which the first module sends a connection request to the second module according to the at least one service request sent by the external client, so as to establish connection channels matching the number of service requests between the first module and the second module and register them with the locally corresponding multiplexers, and to establish communication channels matching the number of service requests between a service thread and a processing thread of the second module and register them with the locally corresponding multiplexers, specifically comprises:
the first module sends a connection request to the second module according to the at least one service request sent by the external client;
after the second module detects the connection request on its port, it returns a connection response to the first module, establishes local connection channels matching the number of service requests and registers them with a plurality of local multiplexers, and establishes communication channels matching the number of service requests between the service thread and the processing thread and registers them with the locally corresponding multiplexers;
the first module creates, based on the received connection response, local connection channels matching the local connection channels created by the second module, and registers them with a plurality of local multiplexers;
after the connection channels between the first module and the second module are successfully created, the first module and the second module respectively set the connection channels and the communication channels to non-blocking.
8. The server for processing concurrent services based on a reactor network model according to claim 6 or 7, wherein the step in which, when a multiplexer local to the second module detects that a communication channel carries a service request, the service request is acquired and parsed by a processing thread and the parsing result is sent to a request queue, specifically comprises:
the second module determines, by polling the multiplexer through the processing thread, whether a service event has occurred on any communication channel; if a service event is detected on any communication channel, the processing thread acquires the service event from the corresponding communication channel and sends the result of parsing the service event to the request queue, where it is cached while waiting; otherwise, polling continues.
9. The server for processing concurrent services based on a reactor network model according to claim 8, wherein, when the second module acquires a service event from the corresponding communication channel through the processing thread and sends the result of parsing the service event to the request queue to wait, the second module is specifically configured to:
acquire service events from the corresponding communication channels through the processing thread using a byte buffer, wherein the types of the service events comprise read events and/or write events;
after a service event is parsed, convert it into serialized object data carrying a globally unique event ID;
and package the serialized object data and send it into the request queue, where it is cached while waiting.
10. The server for processing concurrent services based on a reactor network model according to claim 8, wherein, when the second module reads the parsing results in request-queue order through the input/output thread, performs read and write operations on the disk according to the service events in the parsing results, and sends the service responses produced by the completed read and write operations to the response queue, the second module is specifically configured to:
monitor the request queue through the input/output thread and, when a service event is detected, sequentially read the corresponding parsing results from the request queue;
perform the corresponding read or write operation on the local disk in buffered mode according to the type of the service event in the parsing result;
and send the service response produced by the completed read or write operation to the response queue, triggering the processing thread.
CN202110472998.5A 2021-04-29 2021-04-29 Method and server for processing concurrent service based on reactor network model Active CN113127204B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110472998.5A CN113127204B (en) 2021-04-29 2021-04-29 Method and server for processing concurrent service based on reactor network model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110472998.5A CN113127204B (en) 2021-04-29 2021-04-29 Method and server for processing concurrent service based on reactor network model

Publications (2)

Publication Number Publication Date
CN113127204A CN113127204A (en) 2021-07-16
CN113127204B true CN113127204B (en) 2023-05-16

Family

ID=76780611

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110472998.5A Active CN113127204B (en) 2021-04-29 2021-04-29 Method and server for processing concurrent service based on reactor network model

Country Status (1)

Country Link
CN (1) CN113127204B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113746574B (en) * 2021-07-30 2023-01-24 苏州浪潮智能科技有限公司 Information interaction method, system and equipment
CN117573328B (en) * 2024-01-15 2024-03-29 西北工业大学 Parallel task rapid processing method and system based on multi-model driving

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722377B (en) * 2012-06-28 2015-05-20 上海美琦浦悦通讯科技有限公司 Network video application processing system based on adaptive communication environment (ACE) framework
US10382343B2 (en) * 2017-04-04 2019-08-13 Netapp, Inc. Intelligent thread management across isolated network stacks
CN109189595A (en) * 2018-09-17 2019-01-11 深圳怡化电脑股份有限公司 Event-handling method, device, equipment and medium based on server
US11734204B2 (en) * 2019-04-02 2023-08-22 Intel Corporation Adaptive processor resource utilization
CN110134534B (en) * 2019-05-17 2023-08-25 普元信息技术股份有限公司 System and method for optimizing message processing for big data distributed system based on NIO
CN110795254A (en) * 2019-09-23 2020-02-14 武汉智美互联科技有限公司 Method for processing high-concurrency IO based on PHP
CN112148500A (en) * 2020-05-18 2020-12-29 南方电网数字电网研究院有限公司 Netty-based remote data transmission method
CN112417349B (en) * 2020-08-31 2023-01-10 上海哔哩哔哩科技有限公司 Programming device and network state monitoring method
CN112685148A (en) * 2020-12-07 2021-04-20 南方电网数字电网研究院有限公司 Asynchronous communication method and device of mass terminals, computer equipment and storage medium

Also Published As

Publication number Publication date
CN113127204A (en) 2021-07-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant