CN112148500A - Netty-based remote data transmission method - Google Patents

Netty-based remote data transmission method

Info

Publication number
CN112148500A
CN112148500A · Application CN202010421186.3A
Authority
CN
China
Prior art keywords
netty
data
server
client
partition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010421186.3A
Other languages
Chinese (zh)
Inventor
张华兵
黄海英
曹小明
张今革
杨航
徐晖
魏理豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Southern Power Grid Co Ltd
Southern Power Grid Digital Grid Research Institute Co Ltd
Original Assignee
China Southern Power Grid Co Ltd
Southern Power Grid Digital Grid Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Southern Power Grid Co Ltd, Southern Power Grid Digital Grid Research Institute Co Ltd filed Critical China Southern Power Grid Co Ltd
Priority to CN202010421186.3A priority Critical patent/CN112148500A/en
Publication of CN112148500A publication Critical patent/CN112148500A/en
Pending legal-status Critical Current


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 — Arrangements for program control, e.g. control units
    • G06F9/54 — Interprogram communication
    • G06F9/542 — Event management; Broadcasting; Multicasting; Notifications
    • G06F9/544 — Buffers; Shared memory; Pipes
    • G06F9/546 — Message passing systems or structures, e.g. queues
    • G06F9/547 — Remote procedure calls [RPC]; Web services
    • G06F9/50 — Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5027 — Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F2209/5011 — Pool
    • G06F2209/5018 — Thread allocation
    • G06F2209/541 — Client-server
    • G06F2209/544 — Remote
    • G06F2209/548 — Queue

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention discloses a Netty-based remote data transmission method, which comprises the following steps: initiating an RPC request; initializing a thread group while the connection is established; dispatching NIO threads and establishing a thread pool; starting a data forwarder and processing I/O events; handling packet boundaries; constructing a channel to the server; placing connected channels in a queue; writing the data to the buffer and then flushing it out; on receiving the data, feeding it into a Kafka message queue; encapsulating the data into a Kafka ProducerRecord; judging whether the topic has a specified partition; if the topic has no specified partition, judging whether a key is specified; if a key is specified, choosing a partition for the topic with a corresponding hash algorithm applied to the key's value; appending to the topic's partition and storing the data sequentially in that partition's buffer pool. In this way, high-throughput data transmission can be realized, I/O event-processing efficiency is effectively improved, and server downtime caused by thread-stack overflow is avoided.

Description

Netty-based remote data transmission method
Technical Field
The invention relates to the technical field of internet, in particular to a Netty-based remote data transmission method.
Background
With the rapid development of the mobile internet, websites are growing in scale; the access volumes of communication, commerce, logistics and gaming systems, for example, are rising rapidly, and monolithic applications (the traditional vertical architecture based on a Web container such as Tomcat) face the test of carrying large volumes of data.
Traditional RPC-framework remote procedure calls, and remote services based on RMI-style remote method invocation, all use synchronous blocking I/O (input/output); as client concurrency increases and network latency grows, I/O threads tend to block in frequent waits. If threads cannot be released in time, the processing efficiency of I/O events drops sharply, and the thread stack may even overflow and bring the server down.
Disclosure of Invention
The technical problem mainly solved by the invention is to provide a Netty-based remote data transmission method that achieves high-throughput data transmission, effectively improves the processing efficiency of I/O events, and avoids server downtime caused by thread-stack overflow.
To solve the above technical problems, the invention adopts the following technical scheme: a Netty-based remote data transmission method comprising the following steps: the Netty client initiates an RPC request and starts a Netty connector to establish a connection with the Netty server; while the connection is being established, the Netty client initializes a thread group; the Netty client allocates NIO threads and establishes a thread pool; the Netty client starts a data forwarder and processes I/O events; the Netty client handles packet boundaries; the Netty client constructs a channel to the server; the Netty client places connected channels in a queue; the Netty client writes data to the buffer and then flushes it out; when the Netty server receives the data, it feeds the data into a Kafka message queue; the Netty server encapsulates the data into a Kafka ProducerRecord, which comprises key, value, partition, timestamp and topic; the Netty server judges whether the topic has a specified partition; if the topic has no specified partition, it judges whether a key is specified; if a key is specified, the Netty server chooses a partition for the topic with a corresponding hash algorithm applied to the key's value; the data is appended to the topic's partition, stored sequentially in the partition's buffer pool, and verified.
Further, the method further comprises: if no key is specified, selecting a partition by polling (round-robin), and executing the step of appending to the partition corresponding to the topic.
Further, the method further comprises: and if the topic is determined to have the designated partition, executing the step of accessing to the partition corresponding to the topic.
Further, after the step of feeding the data into the Kafka message queue, the method further comprises: creating a Kafka Producer instance to serialize the data.
Further, the step of performing data verification includes: judging whether Kafka's Broker can read the data in the buffer pool; if it can, packaging success information into a RecordMetaData object and returning it; if it cannot, packaging failure information into a RecordMetaData object and returning it.
Further, the method further comprises: while the Netty server accepts the connection, the Netty server initializes a thread group; the Netty server assigns NIO threads and establishes a thread pool; the Netty server starts a data forwarder and processes the Netty client's I/O events; the Netty server handles packet boundaries; the Netty server constructs a connection server channel; the Netty server places connected channels in a queue; and the Netty server writes feedback data to the buffer and then flushes it out.
Further, the method further comprises: the Netty client creates an instance of ServerBootstrap; the Netty client defines a thread group; the Netty client binds a channel NioSocketChannel.class; when the connection is established, the Netty client installs an EchoClientHandler instance into a ChannelPipeline of the Channel; the Netty client adds a processing class that inherits ChannelHandlerAdapter; the Netty client adds a custom MessageToByteEncoder and a decoder; the Netty client adds a heartbeat mechanism.
Further, the method further comprises: the Netty server creates an instance of ServerBootstrap; the Netty server defines a thread group; the Netty server binds a channel NioSocketChannel.class and a handler; the Netty server binds a listening port; the Netty server binds a custom MessageToByteEncoder encoder and a decoder; the Netty server registers a Channel and adds a ChannelFutureListener listener; the Netty server establishes a server processing channel, with DiscardServerHandler inheriting from ChannelHandlerAdapter; the Netty server adds an IdleStateHandler to the ChannelPipeline object to monitor whether the Netty client is alive.
The invention has the beneficial effects that, different from the prior art, the disclosed Netty-based remote data transmission method comprises: the Netty client initiates an RPC request and starts a Netty connector to establish a connection with the Netty server; while the connection is being established, the Netty client initializes a thread group; the Netty client allocates NIO threads and establishes a thread pool; the Netty client starts a data forwarder and processes I/O events; the Netty client handles packet boundaries; the Netty client constructs a channel to the server; the Netty client places connected channels in a queue; the Netty client writes data to the buffer and then flushes it out; when the Netty server receives the data, it feeds the data into a Kafka message queue; the Netty server encapsulates the data into a Kafka ProducerRecord; the Netty server judges whether the topic has a specified partition; if not, it judges whether a key is specified; if a key is specified, the Netty server chooses a partition for the topic with a corresponding hash algorithm applied to the key's value; the data is appended to the topic's partition, stored sequentially in the partition's buffer pool, and verified. In this way, high-throughput data transmission can be realized, I/O event-processing efficiency is effectively improved, and server downtime caused by thread-stack overflow is avoided.
Drawings
FIG. 1 is a schematic flow chart diagram of a first embodiment of a Netty-based remote data transmission method of the present invention;
fig. 2 is a flowchart illustrating a second embodiment of the Netty-based remote data transmission method according to the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and embodiments.
As shown in fig. 1, the method for remote data transmission based on a Netty NIO (non-blocking IO) framework includes the following steps:
step S101: the Netty client initiates an RPC (Remote Procedure Call) request and starts a Netty connector to establish a connection with the Netty server.
Step S102: in the process of establishing connection between the Netty client and the Netty server, the Netty client initializes a thread group.
Step S103: the Netty client dispatches an NIO (non-blocking I/O) thread and establishes a thread pool.
In this embodiment, the Netty client adopts the Reactor pattern as its startup mode. It should be understood that when multiple Netty servers initiate request traffic, because this embodiment uses the Reactor pattern, a single mainReactor is responsible for establishing connections in response to a Netty server's connection request; that is, one NIO Selector is used together with one or more subReactors (slave reactors), and each subReactor runs in a separate thread and maintains its own NIO Selector, which requires allocating multiple threads to I/O (input/output) operations to achieve high throughput. Furthermore, the mainReactor only accepts a connection and hands it to a subReactor, so the acceptor in the middle plays the role of relaying the connection.
Further, to avoid tying up I/O threads, the Netty client executes its service-processing logic in the thread pool. For example, when the Netty server connects to a Channel, the Channel is registered with a single NioEventLoop (NIO event loop) thread for I/O operations only; the ChannelHandler (channel handler) then specifies its running thread pool after the EventLoopGroup (NIO event loop group) is created, and when the ChannelHandlerContext (the binding between a ChannelHandler and its ChannelPipeline) is created, an EventExecutor (event executor) is selected from the EventLoopGroup and bound to the ChannelHandler instance.
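The Reactor dispatch described above — one Selector multiplexing readiness events and handing them to handlers — can be sketched with JDK NIO primitives alone. The `MiniReactor` class below is a hypothetical demo, not Netty code; an in-process `Pipe` stands in for a network channel so the event loop can be exercised without sockets:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class MiniReactor {
    // Run one reactor cycle: register a channel with the Selector, wait for
    // a readiness event, and dispatch it (here the "handler" just reads).
    public static String runOnce() {
        try (Selector selector = Selector.open()) {
            Pipe pipe = Pipe.open();                   // in-process selectable channel pair
            pipe.source().configureBlocking(false);
            pipe.source().register(selector, SelectionKey.OP_READ);

            pipe.sink().write(ByteBuffer.wrap("ping".getBytes())); // simulate inbound I/O

            StringBuilder received = new StringBuilder();
            while (received.length() < 4) {
                selector.select(100);                  // block until an event is ready
                for (SelectionKey key : selector.selectedKeys()) {
                    if (key.isReadable()) {            // dispatch the readiness event
                        ByteBuffer buf = ByteBuffer.allocate(16);
                        ((Pipe.SourceChannel) key.channel()).read(buf);
                        buf.flip();
                        while (buf.hasRemaining()) received.append((char) buf.get());
                    }
                }
                selector.selectedKeys().clear();       // events handled, reset the set
            }
            pipe.sink().close();
            pipe.source().close();
            return received.toString();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

A mainReactor/subReactor split, as in the embodiment, would run one such loop per thread, with the main loop handling only accept events.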
Step S104: the Netty client starts the data forwarder and handles the I/O event.
It should be understood that the Selector of the Netty client acts as a multiplexer, and the EventLoop (an event-loop monitoring mechanism) acts as the dispatcher.
Step S105: the Netty client handles packet boundaries (i.e., sticky packets and packet splitting).
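A common way to handle the packet-boundary problem is length-prefixed framing, which Netty ships as LengthFieldPrepender/LengthFieldBasedFrameDecoder. The stand-alone `LengthFramer` sketch below (hypothetical class, JDK only) shows the idea: each message carries a 4-byte length header, so several messages glued together by TCP can still be split apart cleanly:

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class LengthFramer {
    // Prepend a 4-byte big-endian length header to each message.
    public static byte[] encode(byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(4 + payload.length);
        buf.putInt(payload.length).put(payload);
        return buf.array();
    }

    // Split a byte stream (possibly several frames stuck together) back into
    // complete messages; an incomplete trailing frame is left for later bytes.
    public static List<byte[]> decode(ByteBuffer stream) {
        List<byte[]> frames = new ArrayList<>();
        while (stream.remaining() >= 4) {
            stream.mark();
            int len = stream.getInt();
            if (stream.remaining() < len) { stream.reset(); break; } // wait for more
            byte[] frame = new byte[len];
            stream.get(frame);
            frames.add(frame);
        }
        return frames;
    }

    // Demo: two messages glued into one stream still decode cleanly.
    public static String roundTrip(String a, String b) {
        ByteBuffer glued = ByteBuffer.allocate(64);
        glued.put(encode(a.getBytes())).put(encode(b.getBytes()));
        glued.flip();
        StringBuilder out = new StringBuilder();
        for (byte[] f : decode(glued)) out.append(new String(f)).append('|');
        return out.toString();
    }
}
```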
Step S106: the Netty client builds a connection server channel.
Step S107: the Netty client places the connected channel in a queue.
Step S108: and the Netty client sends the data to the buffer and then refreshes and outputs the data.
It should be understood that after starting, the Netty client sets a Selector thread to listen for I/O events, preferably four kinds: read, write, accept and connect.
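The write-to-buffer-then-flush behaviour of step S108 is analogous to Netty's distinction between write() and writeAndFlush(): written bytes sit in a buffer until a flush pushes them to the underlying sink. A minimal JDK-only illustration (hypothetical `FlushDemo` class):

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class FlushDemo {
    // Returns {bytes visible at the sink before flush, bytes after flush}.
    public static int[] bufferedThenFlushed() {
        try {
            ByteArrayOutputStream sink = new ByteArrayOutputStream();
            BufferedOutputStream buffered = new BufferedOutputStream(sink, 64);
            buffered.write("payload".getBytes());
            int beforeFlush = sink.size();   // 0: bytes still sit in the buffer
            buffered.flush();
            int afterFlush = sink.size();    // 7: buffer flushed out to the sink
            return new int[] { beforeFlush, afterFlush };
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```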
Step S109: and when the Netty server receives the data, the data is accessed into the kafka message queue.
In this embodiment, after the step of feeding the data into the Kafka message queue, the Netty-based remote data transmission method further includes: creating a Kafka Producer (message producer) instance and then serializing the data.
It should be appreciated that data is serialized because data transfers require byte transfers and storage of the data.
Step S110: the Netty server encapsulates the data into a ProducerRecord (the key/value pair sent to a Kafka broker).
In the present embodiment, the ProducerRecord includes key (the record's key), value (the record's content), partition (the record's partition), timestamp (the record's timestamp), and topic (the record's topic).
It should be understood that a ProducerRecord normally requires the value and the topic to be given explicitly; the other fields are optional.
Step S111: the Netty server judges whether topic has an appointed partition.
Step S112: if it is determined that topic does not specify a partition, a determination is made as to whether there is a specified key.
Step S113: if the specified key is determined, the Netty server uses a corresponding hash algorithm to specify a partition for the topic according to the value of the key.
It should be understood that the present embodiment can arrange a partitioning mechanism for all topics of kafka, so that high throughput can be achieved, and the transmission efficiency of data is greatly improved.
Step S114: accessing to a partition corresponding to topic, sequentially storing the data in a buffer pool corresponding to the partition, and performing data verification.
It should be appreciated that in step S114, the data is saved into the buffer pool sequentially, following Kafka's ordering and partitioning mechanism.
Further, in this embodiment, the Netty-based remote data transmission method further includes: and if the key is not specified, polling one partition, and executing the step of accessing to the partition corresponding to the topic.
Further, in this embodiment, the Netty-based remote data transmission method further includes: and if the topic is determined to have the designated partition, executing the step of accessing to the partition corresponding to the topic.
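The partition-selection rule of steps S111–S113 — an explicit partition wins, else a hash of the key's value, else round-robin polling — can be sketched as follows. `PartitionChooser` is a hypothetical class; Kafka's DefaultPartitioner hashes the serialized key with murmur2, and plain hashCode() is used here only to keep the demo self-contained:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class PartitionChooser {
    private final AtomicInteger counter = new AtomicInteger();

    public int choose(Integer explicitPartition, String key, int numPartitions) {
        if (explicitPartition != null) {
            return explicitPartition;            // topic already has a specified partition
        }
        if (key != null) {                       // hash the key's value to a partition
            return (key.hashCode() & 0x7fffffff) % numPartitions;
        }
        return counter.getAndIncrement() % numPartitions;  // no key: poll round-robin
    }
}
```

The same key always lands on the same partition, which is what preserves per-key ordering across the buffer pools.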
In this embodiment, in step S114, the step of performing data verification includes:
step S1141: it is determined whether the Kafka Broker (an important node component of Kafka) can read the data in the buffer pool.
It should be understood that Kafka's Broker reads the data in the buffer pool; if the data can be read, this indicates success, and if it cannot, failure.
Step S1142: if so, then success information is packaged into a RecordMetaData object for return.
Step S1143: if not, the failure information is packaged into a RecordMetaData object for return.
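Steps S1141–S1143 can be sketched as a small verification helper. The names below mirror the description and are hypothetical, not the real Kafka client API (where a RecordMetadata is delivered through the send callback or returned Future):

```java
public class RecordCheck {
    // Stand-in for the RecordMetaData object of steps S1142/S1143.
    public static final class RecordMetaData {
        public final boolean success;
        public final String detail;
        RecordMetaData(boolean success, String detail) {
            this.success = success;
            this.detail = detail;
        }
    }

    // Package success or failure information depending on whether the
    // broker could read the data in the buffer pool (step S1141).
    public static RecordMetaData verify(boolean brokerCouldRead) {
        return brokerCouldRead
                ? new RecordMetaData(true, "record committed")
                : new RecordMetaData(false, "broker could not read buffer pool");
    }
}
```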
Further, as shown in fig. 2, the Netty-based remote data transmission method further includes:
step S201: and in the process that the Netty server receives the connection, the Netty server initializes the thread group.
Step S202: the Netty server dispatches the NIO threads and establishes a thread pool.
In this embodiment, the Netty server uses the Reactor pattern as its startup mode. It should be understood that when multiple Netty clients initiate request traffic, because this embodiment uses the Reactor pattern, a single mainReactor is responsible for establishing connections in response to a Netty client's connection request; that is, one NIO Selector is used together with one or more subReactors, and each subReactor runs in a separate thread and maintains its own NIO Selector, which requires allocating multiple threads to I/O operations to achieve high throughput.
Further, to avoid tying up I/O threads, the Netty server executes its service-processing logic in the thread pool. For example, when the Netty client connects to a Channel, the Channel is registered with a single NioEventLoop thread for I/O operations only; the ChannelHandler then specifies its running thread pool after the EventLoopGroup is created, and when the ChannelHandlerContext is created, an EventExecutor (event executor) is selected from the EventLoopGroup and bound to the corresponding ChannelHandler instance.
Step S203: the Netty server starts a data forwarder and processes the I/O event of the Netty client.
Step S204: the Netty server handles packet boundaries.
Step S205: and the Netty server side constructs a connection server channel.
Step S206: the Netty server puts the connected channels in a queue.
Step S207: and the Netty server sends the feedback data to the buffer area and then refreshes and outputs the feedback data.
It should be understood that the Netty server will set a Selector thread to listen for I/O events after it is started.
Further, in this embodiment, the Netty-based remote data transmission method further includes a Netty client setting operation step:
step A1: The Netty client creates an instance of ServerBootstrap (a server-side bootstrap class).
It should be understood that ServerBootstrap can be responsible for initializing the Netty client and starting to listen for port socket requests.
Step A2: the Netty client defines thread groups.
It should be understood that two thread groups (NioEventLoopGroup) are defined to supply multithreaded event loops for processing I/O operations: one, called 'boss', accepts TCP connections, and one, called 'worker', processes the accepted connections; the 'boss' registers accepted connections with the 'worker'. In addition, a server Socket object is constructed (in the client/server communication model, the server opens a specific port, accepts client connection requests, and produces a connected Socket) to initialize the queue of connectable clients; connection requests are processed in sequence, so a backlog queue size must be set, allowing connections that cannot be handled in time to wait in the queue.
Step A3: the Netty client binds a channel NioSocketChannel.class.
It should be understood that the Netty client binds the channel NioSocketChannel.class, then binds a handler to handle read/write events, initializes the channel using a ChannelInitializer, and overrides its abstract method initChannel().
Step A4: when the connection is established, the Netty client installs an EchoClientHandler instance into a ChannelPipeline of the Channel.
Step A5: the Netty client adds a handling class that inherits ChannelHandlerAdapter.
Step A6: the Netty client adds a custom MessageToByteEncoder and decoder.
Step A7: the Netty client adds a heartbeat mechanism.
Further, in this embodiment, the Netty-based remote data transmission method further includes a Netty server setting operation step:
step B1: The Netty server creates an instance of ServerBootstrap.
It should be understood that ServerBootstrap can be responsible for initializing the Netty server and starting to listen for port socket requests.
Step B2: the Netty server defines thread groups.
It should be understood that the Netty server defines two thread groups (NioEventLoopGroup) to supply multithreaded event loops for processing I/O operations: one, called 'boss', accepts TCP connections, and one, called 'worker', processes the accepted connections; the 'boss' registers accepted connections with the 'worker'.
Step B3: the Netty server binds a channel NioServerSocketChannel.class (a channel class for TCP network-socket connections) and a handler.
It should be understood that the Netty server binds the server channel NioServerSocketChannel.class, then binds a handler to handle read/write events, initializes the channel using a ChannelInitializer, and overrides its abstract method initChannel().
Step B4: and the Netty server binds the monitoring port.
It should be appreciated that a custom ChannelHandler is added to the channel to listen for Netty client read/write events. The ServerSocketChannel accepts connections based on the NIO Selector, and creating Channels depends on the EventLoopGroup. ServerBootstrap is an auxiliary bootstrap class for starting NIO and can use the channel directly; the concrete channel type is NioServerSocketChannel, and channel instantiation goes through the channel factory's newChannel().
Step B5: the Netty server binds custom MessageToByteEncoder (message encoder) encoders and decoders.
Step B6: the Netty server registers the Channel and adds a ChannelFutureListener (a listener on the ChannelFuture, which holds the result of the Channel's asynchronous operation).
Step B7: the Netty server establishes a server processing channel, with DiscardServerHandler (a handler for the Discard protocol service) inheriting from ChannelHandlerAdapter (Netty's channel handler adapter).
Step B8: the Netty server adds an IdleStateHandler (which detects connection idle time and triggers a heartbeat-detection event when no read/write occurs for too long) to the ChannelPipeline object to monitor whether the Netty client is alive.
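The liveness check that IdleStateHandler performs — act once no read/write activity has happened within the idle timeout — can be sketched as a toy tracker. `IdleTracker` is a hypothetical JDK-only class; the real handler fires IdleStateEvents into the ChannelPipeline rather than returning a boolean:

```java
public class IdleTracker {
    private long lastActivityMillis;
    private final long idleTimeoutMillis;

    public IdleTracker(long idleTimeoutMillis, long nowMillis) {
        this.idleTimeoutMillis = idleTimeoutMillis;
        this.lastActivityMillis = nowMillis;
    }

    // Call on every read or write to mark the connection as active.
    public void touch(long nowMillis) {
        lastActivityMillis = nowMillis;
    }

    // True once the connection has been idle for the full timeout,
    // i.e. the moment a heartbeat probe should be sent.
    public boolean shouldSendHeartbeat(long nowMillis) {
        return nowMillis - lastActivityMillis >= idleTimeoutMillis;
    }
}
```

Times are passed in explicitly so the logic is deterministic and testable; a real handler would schedule the check on the channel's event loop.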
This embodiment is a service application that combines Netty-based RPC with Kafka. It is built on the Netty NIO framework to realize highly available, high-performance transmission of big data, and combines the distributed message queue Kafka to realize high-performance writing, reading and persistence of that data; its main functions are transmitting, caching and reading large-scale concurrent data.
The NIO framework mainly comprises a remote data sender (the Netty client) and a remote data receiver (the Netty server combined with the Kafka message production service); in addition, a Kafka message consumption service extends the data-consumption module. The sender mainly establishes a data channel for communication over a specific port, carries out data interaction, and judges the state of transmission and reception from the response data returned by the receiver. The receiver mainly accepts data from one or more senders for parsing, and pushes the parsed data to the Kafka message queue in real time.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (8)

1. A Netty-based remote data transmission method is characterized by comprising the following steps:
the Netty client initiates an RPC request and starts a Netty connector to establish connection with the Netty server;
in the process of establishing connection between the Netty client and the Netty server, the Netty client initializes a thread group;
the Netty client allocates an NIO thread and establishes a thread pool;
the Netty client starts a data forwarder and processes an I/O event;
the Netty client processes the packet boundary;
the Netty client builds a connection server channel;
the Netty client puts the connected channels in a queue;
the Netty client sends the data to the buffer area and then refreshes and outputs the data;
when the Netty server receives the data, the data is accessed into the kafka message queue;
the Netty server encapsulates the data into a Kafka ProducerRecord, wherein the ProducerRecord comprises key, value, partition, timestamp and topic;
the Netty server judges whether topic has an appointed partition;
if the topic is determined not to be assigned with the partition, judging whether an assigned key exists;
if the specified key is determined, the Netty server uses a corresponding hash algorithm to specify a partition for the topic according to the value of the key;
accessing to a partition corresponding to topic, sequentially storing the data in a buffer pool corresponding to the partition, and performing data verification.
2. The method of claim 1, further comprising:
and if the key is not specified, polling one partition, and executing the step of accessing to the partition corresponding to the topic.
3. The method of claim 2, further comprising:
and if the topic is determined to have the designated partition, executing the step of accessing to the partition corresponding to the topic.
4. The method of claim 3, wherein after the step of accessing data into the kafka message queue, the method further comprises:
a Producer instance of kafka was created to serialize the data.
5. The method of claim 4, wherein the step of performing a data check comprises:
judging whether the Broker of Kafka can read the data in the buffer pool or not;
if yes, packaging success information into a RecordMetaData object for returning;
if not, the failure information is packaged into a RecordMetaData object for return.
6. The method of claim 5, further comprising:
in the process that the Netty server receives connection, the Netty server initializes a thread group;
the Netty server assigns an NIO thread and establishes a thread pool;
the Netty server side starts a data forwarder and processes an I/O event of the Netty client side;
the Netty server processes the packet boundary;
the Netty server side constructs a connection server channel;
the Netty server puts the connected channels in a queue;
and the Netty server sends the feedback data to the buffer area and then refreshes and outputs the feedback data.
7. The method of claim 6, further comprising:
the Netty client creates an instance of ServerBootstrap;
the Netty client defines a thread group;
the Netty client binds a channel NioSocketChannel.class;
when the connection is established, the Netty client installs an EchoClientHandler instance into a ChannelPipeline of the Channel;
the Netty client adds a processing class that inherits ChannelHandlerAdapter;
the Netty client adds a self-defined MessageToByteEncoder and a decoder;
the Netty client adds a heartbeat mechanism.
8. The method of claim 6, further comprising:
the Netty server creates a ServerBootstrap instance;
the Netty server defines a thread group;
the Netty server binds the channel type NioServerSocketChannel.class and a handler;
the Netty server binds a listening port;
the Netty server binds a custom MessageToByteEncoder and a decoder;
the Netty server registers the Channel and adds a ChannelFutureListener;
the Netty server establishes a server processing channel, the DiscardServerHandler inheriting from ChannelHandlerAdapter;
the Netty server adds an IdleStateHandler to the ChannelPipeline object to monitor whether the Netty client is alive.
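The liveness monitoring of claims 7 and 8 (a client-side heartbeat plus a server-side IdleStateHandler) amounts to tracking the last activity time per connection and declaring a client dead once no heartbeat arrives within the idle window. A hypothetical dependency-free stand-in for that role, not the Netty IdleStateHandler itself:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical dependency-free stand-in for the IdleStateHandler role in
// claim 8: record the last activity time per client and report a client
// dead once no heartbeat has been seen within the allowed idle window.
public class LivenessMonitor {
    private final long idleWindowMillis;
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();

    public LivenessMonitor(long idleWindowMillis) {
        this.idleWindowMillis = idleWindowMillis;
    }

    // Any read or write event from the client refreshes its timer,
    // matching how the client-side heartbeat of claim 7 keeps it alive.
    public void onHeartbeat(String clientId, long nowMillis) {
        lastSeen.put(clientId, nowMillis);
    }

    public boolean isAlive(String clientId, long nowMillis) {
        Long seen = lastSeen.get(clientId);
        return seen != null && nowMillis - seen <= idleWindowMillis;
    }
}
```

Netty's real IdleStateHandler instead fires an IdleStateEvent through the pipeline after the configured reader/writer idle time, which the server handler can treat as a dead connection.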
CN202010421186.3A 2020-05-18 2020-05-18 Netty-based remote data transmission method Pending CN112148500A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010421186.3A CN112148500A (en) 2020-05-18 2020-05-18 Netty-based remote data transmission method


Publications (1)

Publication Number Publication Date
CN112148500A true CN112148500A (en) 2020-12-29

Family

ID=73891476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010421186.3A Pending CN112148500A (en) 2020-05-18 2020-05-18 Netty-based remote data transmission method

Country Status (1)

Country Link
CN (1) CN112148500A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109756461A (en) * 2017-11-06 2019-05-14 北京航天长峰科技工业集团有限公司 A kind of remote procedure calling (PRC) method based on NETTY
CN110928491A (en) * 2019-10-30 2020-03-27 平安科技(深圳)有限公司 Storage partition dynamic selection method, system, computer equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
苏锦: "基于Netty的高性能RPC服务器的研究与实现", 《中国优秀硕士学位论文全文数据库信息科技辑》 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112905640A (en) * 2021-01-25 2021-06-04 武汉武钢绿色城市技术发展有限公司 Netty-based distributed database data access control method
CN112954006A (en) * 2021-01-26 2021-06-11 重庆邮电大学 Industrial Internet edge gateway design method supporting Web high-concurrency access
CN112954006B (en) * 2021-01-26 2022-07-22 重庆邮电大学 Industrial Internet edge gateway design method supporting Web high-concurrency access
CN113127204A (en) * 2021-04-29 2021-07-16 四川虹美智能科技有限公司 Method and server for processing concurrent services based on reactor network model
CN113641410A (en) * 2021-06-07 2021-11-12 广发银行股份有限公司 Netty-based high-performance gateway system processing method and system
CN113553199A (en) * 2021-07-14 2021-10-26 浙江亿邦通信科技有限公司 Method and device for processing multi-client access by using asynchronous non-blocking mode
CN113553199B (en) * 2021-07-14 2024-02-02 浙江亿邦通信科技有限公司 Method and device for processing multi-client access by using asynchronous non-blocking mode
CN114095537A (en) * 2021-11-18 2022-02-25 重庆邮电大学 Netty-based mass data access method and system in application of Internet of things
CN114095537B (en) * 2021-11-18 2023-07-14 重庆邮电大学 Netty-based mass data access method and system in Internet of things application
CN114327951A (en) * 2021-12-30 2022-04-12 上海众人智能科技有限公司 Modularized data management system based on multi-semantic expression
CN114640719A (en) * 2022-03-22 2022-06-17 康键信息技术(深圳)有限公司 Data processing method, device, equipment and storage medium based on Netty framework

Similar Documents

Publication Publication Date Title
CN112148500A (en) Netty-based remote data transmission method
US8010972B2 (en) Application connector parallelism in enterprise application integration systems
US9176772B2 (en) Suspending and resuming of sessions
US8782117B2 (en) Calling functions within a deterministic calling convention
WO2022105736A1 (en) Data processing method and apparatus, device, computer storage medium, and program
CN110134534B (en) System and method for optimizing message processing for big data distributed system based on NIO
US20120042327A1 (en) Method and System for Event-Based Remote Procedure Call Implementation in a Distributed Computing System
US20030055862A1 (en) Methods, systems, and articles of manufacture for managing systems using operation objects
CN111641676B (en) Method and device for constructing third-party cloud monitoring service
US11438423B1 (en) Method, device, and program product for transmitting data between multiple processes
WO2023046141A1 (en) Acceleration framework and acceleration method for database network load performance, and device
US10541927B2 (en) System and method for hardware-independent RDMA
CN102075434A (en) Communication method in virtual cluster
US20170041402A1 (en) Method for transparently connecting augmented network socket operations
US20190278639A1 (en) Service for enabling legacy mainframe applications to invoke java classes in a service address space
US20100306783A1 (en) Shared memory reusable ipc library
CN111738721A (en) Block chain transaction monitoring method and related device
CN115361348B (en) Method for communicating with web browser performed by data acquisition device
Beineke et al. Efficient messaging for java applications running in data centers
CN111324395A (en) Calling method, calling device and computer-readable storage medium
US20150081774A1 (en) System and method for implementing augmented object members for remote procedure call
CN112306718B (en) Communication method, system and related device between local equipment and heterogeneous equipment
CN114860480A (en) Web service proxy method, device and storage medium based on Serverless
Zhang et al. The impact of event processing flow on asynchronous server efficiency
EP4236125A1 (en) Method for implementing collective communication, computer device, and communication system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20201229