CN112398744A - Network communication method and device and electronic equipment - Google Patents


Info

Publication number
CN112398744A
CN112398744A (application CN201910760236.8A)
Authority
CN
China
Prior art keywords
sending
message
queue
receiving
state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910760236.8A
Other languages
Chinese (zh)
Inventor
张杨
冯亦挥
陶阳宇
刘小宇
赵先阳
毛银
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201910760236.8A
Publication of CN112398744A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/50 Queue scheduling
    • H04L 47/62 Queue scheduling characterised by scheduling criteria
    • H04L 47/625 Queue scheduling characterised by scheduling criteria for service slots or service orders

Abstract

Embodiments of the present application provide a network communication method, a network communication apparatus, and an electronic device. A sending end caches a sending message to be sent in a first sending queue corresponding to a receiving end and detects whether the first sending queue is in a first state, the first state being the queue's initial state. If the first sending queue is in the first state, the sending end sends at least one sending message currently cached in the first sending queue to the receiving end and switches the first sending queue to a second state. Upon receiving the receiving end's reply message for the at least one sending message, the sending end clears the at least one sending message from the first sending queue and switches the first sending queue back to the first state. The technical solution provided by the embodiments of the present application improves network communication quality.

Description

Network communication method and device and electronic equipment
Technical Field
The embodiment of the application relates to the technical field of network communication, in particular to a network communication method, a network communication device and electronic equipment.
Background
Many data processing systems today involve network communication among multiple roles, for example for message transfer and message processing.
In these data processing systems, a message sent by a sending end to a receiving end must be answered with a reply from the receiving end. The sending end usually sends many messages to the receiving end in succession, and the message sending rate of the sending end may differ from the message processing rate of the receiving end. These factors affect network communication quality, so ensuring that quality has become a pressing technical problem.
Disclosure of Invention
The embodiment of the application provides a network communication method, a network communication device and electronic equipment, which are used for improving the network communication quality.
In a first aspect, an embodiment of the present application provides a network communication method, including:
the sending end caches a sending message to be sent to a first sending queue corresponding to the receiving end;
detecting whether the first sending queue is in a first state;
if the first sending queue is in the first state, sending at least one sending message currently cached in the first sending queue to the receiving end, and switching the first sending queue to a second state;
and receiving a reply message of the receiving end aiming at the at least one sending message, clearing the at least one sending message in the first sending queue, and switching the first sending queue to the first state.
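The sender-side steps of the first aspect amount to a small two-state machine over a FIFO queue. The sketch below is illustrative only; all names (`SendQueue`, `QueueState`, `try_send`, and so on) are hypothetical and not part of the claimed method, and `transport` stands in for whatever actually delivers the batch.

```python
from collections import deque
from enum import Enum

class QueueState(Enum):
    SENDABLE = 1        # the "first state": messages may be sent
    AWAITING_REPLY = 2  # the "second state": a batch is in flight

class SendQueue:
    """Per-receiver FIFO send queue following the two-state protocol."""
    def __init__(self):
        self.buffer = deque()             # FIFO preserves caching order
        self.state = QueueState.SENDABLE  # the initial state is the first state

    def cache(self, message):
        # Messages are always buffered first, never sent immediately.
        self.buffer.append(message)

    def try_send(self, transport):
        # Only flush when the queue is in the first (sendable) state.
        if self.state is not QueueState.SENDABLE or not self.buffer:
            return []
        batch = list(self.buffer)  # all currently cached messages
        transport(batch)           # send them together as one message packet
        self.state = QueueState.AWAITING_REPLY
        return batch

    def on_reply(self, replied):
        # On the reply, clear the replied messages and switch back.
        for _ in replied:
            self.buffer.popleft()
        self.state = QueueState.SENDABLE
```

Note that the batch stays in the buffer until the reply arrives, so messages cached while a batch is in flight simply queue up behind it.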
In a second aspect, an embodiment of the present application provides a network communication method, including:
a receiving end receives at least one sending message which is currently cached in a first sending queue and sent by a sending end;
processing the at least one sending message in sequence according to the caching sequence of the at least one sending message to obtain one or more reply messages;
caching the one or more reply messages into a second sending queue;
and sending at least one currently cached reply message in the second sending queue to the sending end.
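The receiver-side steps of the second aspect can be sketched as follows, assuming replies are collected in a second queue and then flushed back as one batch; `handle_batch` and its parameters are hypothetical names, not terms from the application.

```python
from collections import deque

def handle_batch(batch, process, reply_queue):
    """Process received messages in their caching order, then flush replies."""
    for msg in batch:                   # the batch arrives in caching order
        reply_queue.append(process(msg))  # cache each reply in the second queue
    replies = list(reply_queue)         # all currently cached replies
    reply_queue.clear()                 # sent back to the sender as one packet
    return replies
```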
In a third aspect, an embodiment of the present application provides a network communication method, including:
a sending end determines a sending message to be sent;
searching a third sending queue in the first state from the plurality of sending queues;
buffering the sending message to the third sending queue;
sending at least one sending message currently cached in the third sending queue to a receiving end, and switching the third sending queue to a second state;
and receiving a reply message of the receiving end aiming at the at least one sending message, clearing the at least one sending message from the third sending queue, and switching the third sending queue to the first state.
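The third aspect differs mainly in that the sender keeps several sending queues and picks one that is currently in the first state. A minimal sketch of that lookup, with hypothetical names (`MiniQueue`, `find_sendable_queue`):

```python
class MiniQueue:
    """Toy stand-in for a sending queue with a state flag."""
    def __init__(self, name, state="sendable"):
        self.name, self.state, self.buffer = name, state, []

def find_sendable_queue(queues):
    # Search the queues for one in the first (sendable) state; None if all busy.
    return next((q for q in queues if q.state == "sendable"), None)
```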
In a fourth aspect, an embodiment of the present application provides a network communication method, including:
the receiving end receives at least one sending message which is currently cached in a third sending queue and sent by the sending end;
processing the at least one sending message in sequence according to the caching sequence of the at least one sending message to obtain one or more reply messages;
caching the one or more reply messages into a fourth sending queue;
and sending at least one currently cached reply message in the fourth sending queue to the sending end.
In a fifth aspect, an embodiment of the present application provides a network communication apparatus, including:
the first cache module is used for caching the sending message to be sent to a first sending queue corresponding to a receiving end;
the first detection module is used for detecting whether the first sending queue is in a first state or not;
a first sending module, configured to send at least one sending message currently cached in the first sending queue to the receiving end if the first sending queue is in the first state, and switch the first sending queue to a second state;
a first processing module, configured to receive a reply message to the at least one sending message from the receiving end, clear the at least one sending message in the first sending queue, and switch the first sending queue to the first state.
In a sixth aspect, an embodiment of the present application provides a network communication apparatus, including:
the first receiving module is used for receiving at least one sending message which is cached currently in a first sending queue and sent by a sending end;
the second processing module is used for sequentially processing the at least one sending message according to the caching sequence of the at least one sending message to obtain one or more reply messages;
the second cache module is used for caching the one or more reply messages into a second sending queue;
and the second sending module is used for sending at least one currently cached reply message in the second sending queue to the sending end.
In a seventh aspect, an embodiment of the present application provides a network communication apparatus, including:
the message determining module is used for determining a sending message to be sent;
the searching module is used for searching a third sending queue in the first state from the plurality of sending queues;
a third buffer module, configured to buffer the sending message to the third sending queue;
a third sending module, configured to send at least one currently cached sending message in the third sending queue to a receiving end, and switch the third sending queue to a second state;
a third processing module, configured to receive a reply message to the at least one sending message from the receiving end, clear the at least one sending message from the third sending queue, and switch the third sending queue to the first state.
In an eighth aspect, an embodiment of the present application provides a network communication apparatus, including:
the second receiving module is used for receiving at least one sending message which is currently cached in a third sending queue and sent by the sending end;
the fourth processing module is used for sequentially processing the at least one sending message according to the caching sequence of the at least one sending message to obtain one or more reply messages;
a fourth caching module, configured to cache the one or more reply messages in a fourth sending queue;
and a fourth sending module, configured to send, to the sending end, at least one currently cached reply message in the fourth sending queue.
In a ninth aspect, an embodiment of the present application provides an electronic device, including a processing component and a storage component;
the storage component stores one or more computer instructions; the one or more computer instructions to be invoked for execution by the processing component;
the processing component is to:
caching a sending message to be sent into a first sending queue corresponding to a receiving end;
detecting whether the first sending queue is in a first state;
if the first sending queue is in the first state, sending at least one sending message currently cached in the first sending queue to the receiving end, and switching the first sending queue to a second state;
and receiving a reply message of the receiving end aiming at the at least one sending message, clearing the at least one sending message in the first sending queue, and switching the first sending queue to the first state.
In a tenth aspect, an embodiment of the present application provides an electronic device, including a processing component and a storage component;
the storage component stores one or more computer instructions; the one or more computer instructions to be invoked for execution by the processing component;
the processing component is to:
receiving at least one sending message which is currently cached in a first sending queue and sent by a sending end;
processing the at least one sending message in sequence according to the caching sequence of the at least one sending message to obtain one or more reply messages;
caching the one or more reply messages into a second sending queue;
and sending at least one currently cached reply message in the second sending queue to the sending end.
In an eleventh aspect, an embodiment of the present application provides an electronic device, including a processing component and a storage component;
the storage component stores one or more computer instructions; the one or more computer instructions to be invoked for execution by the processing component;
the processing component is to:
determining a sending message to be sent;
searching a third sending queue in the first state from the plurality of sending queues;
buffering the sending message to the third sending queue;
sending at least one sending message currently cached in the third sending queue to a receiving end, and switching the third sending queue to a second state;
and receiving a reply message of the receiving end aiming at the at least one sending message, clearing the at least one sending message from the third sending queue, and switching the third sending queue to the first state.
In a twelfth aspect, an embodiment of the present application provides an electronic device, including a processing component and a storage component;
the storage component stores one or more computer instructions; the one or more computer instructions to be invoked for execution by the processing component;
the processing component is to:
receiving at least one sending message which is sent by a sending end and is currently cached in a third sending queue;
processing the at least one sending message in sequence according to the caching sequence of the at least one sending message to obtain one or more reply messages;
caching the one or more reply messages into a fourth sending queue;
and sending at least one currently cached reply message in the fourth sending queue to the sending end.
In a thirteenth aspect, an embodiment of the present application provides a network communication method, including:
the sending end caches a sending message to be sent to a first sending queue corresponding to the receiving end;
the sending end detects whether the first sending queue is in a first state;
if the first sending queue is in the first state, the sending end sends at least one sending message currently cached in the first sending queue to the receiving end, and switches the first sending queue to a second state;
the receiving end sequentially processes the at least one sending message according to the caching sequence of the at least one sending message to obtain one or more reply messages;
the receiving end caches the one or more reply messages to a second sending queue;
the receiving end sends at least one reply message cached currently in the second sending queue to the sending end;
and the sending end receives a reply message aiming at the at least one sending message, clears the at least one sending message in the first sending queue and switches the first sending queue to the first state.
In a fourteenth aspect, an embodiment of the present application provides a network communication system, including a sending end and a receiving end;
the sending end is used for caching a sending message to be sent to a first sending queue corresponding to the receiving end; detecting whether the first sending queue is in a first state; if the first sending queue is in the first state, sending at least one sending message currently cached in the first sending queue to a receiving end, and switching the first sending queue to a second state; receiving a reply message of the receiving end aiming at the at least one sending message, clearing the at least one sending message in the first sending queue, and switching the first sending queue to the first state;
the receiving end is used for receiving at least one sending message which is cached currently in a first sending queue and sent by the sending end; processing the at least one sending message in sequence according to the caching sequence of the at least one sending message to obtain one or more reply messages; caching the one or more reply messages into a second sending queue; and sending at least one currently cached reply message in the second sending queue to the sending end.
In the embodiments of the present application, the sending end maintains a corresponding first sending queue for the receiving end and gives that queue a first state and a second state. A sending message destined for the receiving end is first cached in the first sending queue. If the queue is in the first state, at least one currently cached message is sent to the receiving end, and because a message sending event has occurred the queue is switched to the second state, until a reply to the at least one message is received, at which point the queue is switched back to the first state and the replied messages are cleared. Messages can thus be continuously cached in and sent from the first sending queue. Because a queue is first-in first-out, each message in it has a definite caching order, so the receiving end can process the at least one message in that order, which preserves the ordering of network communication. Caching messages in the queue also allows several messages to be sent to the receiving end as one message packet, which provides the flow control function of network communication. In addition, no messages are sent while the first sending queue is in the second state, which protects the processing performance of the sending end. The embodiments of the present application therefore safeguard network communication quality.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart illustrating one embodiment of a network communication method provided herein;
FIG. 2 is a flow chart illustrating a further embodiment of a network communication method provided herein;
FIG. 3 is a schematic diagram illustrating network communication interaction in one practical application of the embodiment of the present application;
fig. 4 is a schematic diagram illustrating a plurality of transmission queues maintained by a transmitting end in a practical application according to an embodiment of the present application;
FIG. 5 is a flow chart illustrating a further embodiment of a network communication method provided herein;
FIG. 6 is a schematic diagram illustrating network communication interaction in one practical application of the embodiment of the present application;
FIG. 7 is a flow chart illustrating a further embodiment of a network communication method provided herein;
FIG. 8 is a block diagram illustrating an embodiment of a network communication device provided herein;
FIG. 9 is a schematic diagram illustrating an embodiment of an electronic device provided by the present application;
FIG. 10 is a schematic diagram illustrating a network communication device according to another embodiment of the present application;
FIG. 11 is a schematic diagram illustrating an architecture of yet another embodiment of an electronic device;
fig. 12 is a schematic diagram illustrating a network communication device according to another embodiment of the present application;
FIG. 13 is a schematic diagram illustrating an architecture of yet another embodiment of an electronic device;
fig. 14 is a schematic structural diagram of a network communication device according to another embodiment of the present application;
fig. 15 is a schematic structural diagram of another embodiment of an electronic device provided in the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Some of the flows described in the specification, the claims, and the above drawings contain operations that occur in a particular order, but it should be clearly understood that these operations may be performed out of the order in which they appear herein, or in parallel. Operation numbers such as 101 and 102 merely distinguish operations and do not by themselves imply any order of execution. The flows may also include more or fewer operations, which may be performed sequentially or in parallel. Note that the designations "first", "second", and so on are used herein to distinguish messages, devices, modules, and the like; they imply no order and do not require that the "first" and "second" items be of different types.
The technical solution of the embodiments of the present application can be applied to any network communication scenario, such as network communication among the roles of a data processing system. In practice it is particularly applicable to a distributed data processing system: such a system contains many nodes, one node may need to communicate with many other nodes, and a node may send messages of different message types, so the communication environment is more complex and the demands on network communication quality are higher.
In a complex network communication environment such as a data processing system, a message sent by a sending end to a receiving end must be answered with a reply from the receiving end, the sending end usually sends many messages to the receiving end, and the sending rate of the sending end may differ from the processing rate of the receiving end. All of these factors can degrade network communication quality.
The inventors found in research that ensuring network communication quality requires high performance, order preservation, and flow control. High performance includes protecting the processing performance of the sending end: when objective faults occur, such as machine failure, network failure, or process restart, a message sent by the sending end may never reach the receiving end; the sending end then receives no reply and may keep resending the message indefinitely, degrading its own performance. Order preservation means that when a sending end sends several messages to a receiving end in succession, the messages should arrive in the order they were sent; otherwise errors occur. For example, if the sending end first sends a message instructing the receiving end to execute 3 tasks and then a second message instructing it to execute 4 tasks, but the second message arrives before the first, the receiving end ends up executing 3 tasks, which is a processing error. Flow control includes protecting the processing performance of the receiving end: if the sending rate exceeds the receiving end's processing rate, especially when many messages are sent in succession, the receiving end may be overwhelmed and unable to process messages at all.
In view of this, the inventors propose the technical solution of the present application. In the embodiments, the sending end maintains a first sending queue corresponding to the receiving end, and the queue has a first state and a second state. A sending message destined for the receiving end is first cached in the first sending queue. If the queue is in the first state, at least one currently cached message is sent to the receiving end and, a message sending event having occurred, the queue is switched to the second state, until a reply to the at least one message is received, whereupon the queue is switched back to the first state and the replied messages are cleared. Messages can thus be continuously cached in and sent from the first sending queue. Because a queue is first-in first-out, each message in it has a definite caching order, so the receiving end can process the at least one message in that order, preserving the ordering of network communication; and caching messages in the queue allows several messages to be sent to the receiving end as one message packet, which provides the flow control function of network communication.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a flowchart of an embodiment of a network communication method provided in an embodiment of the present application, where the method is executed by a sending end, and may include the following steps:
101: and caching the sending message to be sent to a first sending queue corresponding to a receiving end.
A sending message is simply a message that is waiting to be sent to the receiving end.
The first sending queue is a linear structure in which queue elements are inserted at one end and removed at the other, i.e., a first-in first-out (FIFO) list. Each sending message cached in the first sending queue is one queue element.
Optionally, to further protect the processing performance of the sending end: in practice a sending end may need to communicate with several receiving ends, so the sending end can maintain a separate sending queue for each receiving end, and a message destined for a given receiving end is first cached in that receiving end's own queue.
In the embodiments of the present application, when a sending message destined for the receiving end exists, the sending operation is not executed immediately; the message is first cached in the first sending queue. When the sending end needs to send several messages to the receiving end, they are cached in the first sending queue in sequence, which guarantees the message processing order.
102: and detecting whether the first sending queue is in a first state, if so, executing a step 103, otherwise, continuing to execute the step 102.
The initial state of the first sending queue is the first state; the first state may also be called the message-sendable state.
103: and sending at least one sending message currently cached in the first sending queue to a receiving end, and switching the first sending queue to a second state.
The second state may be called the message-unsendable state. If the first sending queue is not in the first state, detection can continue while the queue is in the second state. If the sending end generates a new message for the receiving end while the first sending queue is in the second state, it still caches that message in the first sending queue, so the queue may hold more than one sending message.
Therefore, if the first sending queue is in the first state, at least one sending message currently cached in the first sending queue is sent. Here "at least one currently cached sending message" means all of the messages currently cached in the first sending queue.
Since the at least one sending message was cached in the first sending queue in sequence and therefore has a caching order, the receiving end can process the messages one by one in that caching order.
To facilitate the receiving end processing, as another embodiment, the method may further include:
setting message sequence numbers, in caching order, for each message cached in the sending queue; the receiving end can then process the at least one message in the order indicated by the message sequence numbers.
In order to further facilitate the determination of the buffering order, the setting of the message sequence number for each message in the sending queue according to the buffering order includes:
and according to the buffering sequence, sequentially setting message sequence numbers for all messages buffered to the first sending queue by adopting continuous numbers from the number 1.
Arabic numerals may be used: starting from 1, consecutive numbers 1, 2, 3, 4, … are assigned in ascending order as the sequence numbers of the messages cached into the first sending queue, in caching order. A message's sequence number thus indicates its position among the cached messages: the smaller the number, the earlier the message was cached, and the larger the number, the later.
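The numbering scheme above can be illustrated with a minimal sketch; `NumberedQueue` and its members are hypothetical names used only for this example.

```python
from itertools import count

class NumberedQueue:
    """Assigns consecutive sequence numbers, starting at 1, in caching order."""
    def __init__(self):
        self._next = count(1)  # 1, 2, 3, ... in ascending order
        self.entries = []      # list of (sequence number, payload)

    def cache(self, payload):
        seq = next(self._next)
        self.entries.append((seq, payload))
        return seq
```

Because the numbers are consecutive and ascending, sorting by sequence number recovers the exact caching order on the receiving side.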
In some embodiments, to facilitate understanding of the message buffering condition in the first sending queue, the method may further include:
setting a first sending field for the first sending queue, where the first sending field stores the largest message sequence number among the messages cached in the queue; the value of the first sending field therefore equals that maximum sequence number. When the sequence numbers are consecutive Arabic numerals, it can be determined from the first sending field value how many sending messages the sending end has sent to the receiving end.
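A small sketch of how such a field might be maintained and used, under the assumption of consecutive numbering from 1; both function names are hypothetical.

```python
def update_first_send_field(field, new_seq):
    """The first sending field always holds the largest cached sequence number."""
    return max(field, new_seq)

def in_flight(first_send_field, last_acked_seq):
    # With consecutive numbering from 1, the field value minus the last
    # acknowledged sequence number gives the number of messages in the batch.
    return first_send_field - last_acked_seq
```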
The at least one message can be simultaneously sent to the receiving end as a data packet, so as to save network resources. The receiving end can process at least one message in sequence according to the message sequence number of the at least one message.
104: and receiving a reply message of the receiving end aiming at the at least one sending message, clearing the at least one sending message in the first sending queue, and switching the first sending queue to the first state.
If a reply message from the receiving end for the at least one sending message is received, the receiving end can be considered to have processed the at least one sending message, so those messages can be cleared from the first sending queue to avoid resending them.
The reply message for the at least one sending message may include one or more reply messages; that is, the reply results obtained by the receiving end by processing the at least one sending message may each be sent to the sending end as an individual reply message, or may be packed into one data packet and then sent to the sending end.
In this embodiment, a first sending queue corresponding to a receiving end is maintained at the sending end, and the first sending queue is given a first state and a second state. A sending message to be sent to the receiving end is first buffered into the first sending queue. If the first sending queue is in the first state, at least one message currently buffered in the first sending queue is sent to the receiving end; when this message sending event occurs, the first sending queue is switched to the second state, until a reply message for the at least one message is received, whereupon the first sending queue is switched back to the first state and the at least one replied message is cleared, so that messages can continue to be buffered into the first sending queue and sent. Because a queue has the first-in first-out characteristic, each message in the queue has a definite buffering order, and the receiving end can process the at least one message according to that buffering order, which guarantees the order preservation of the network communication. By buffering messages in the queue, multiple messages can be sent to the receiving end as one message packet, which provides a flow control function for the network communication. In addition, messages are not sent while the first sending queue is in the second state, which protects the processing performance of the sending end. The network communication quality is thus guaranteed by the embodiments of the present application.
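The two-state behaviour of the first sending queue can be sketched as follows (a hypothetical Python illustration, not the patent's implementation; `State.FIRST` and `State.SECOND` stand for the first and second states):

```python
from enum import Enum

class State(Enum):
    FIRST = "first"    # may send; initial state of the queue
    SECOND = "second"  # awaiting a reply; may not send

class StatefulSendQueue:
    """Sketch of a send queue with the first/second state switching."""

    def __init__(self):
        self.messages = []
        self.state = State.FIRST   # the initial state is the first state

    def buffer(self, msg):
        self.messages.append(msg)  # buffering is allowed in either state

    def try_send(self):
        """Send everything currently buffered, but only in the first
        state; then switch to the second state until a reply arrives."""
        if self.state is not State.FIRST or not self.messages:
            return None
        packet = list(self.messages)   # at least one message, as one packet
        self.state = State.SECOND
        return packet

    def on_reply(self, replied):
        """Clear the replied messages and switch back to the first state."""
        self.messages = [m for m in self.messages if m not in replied]
        self.state = State.FIRST
```

Messages buffered while the queue is in the second state simply wait; they go out with the next send after the queue returns to the first state.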
To further improve the network communication quality, the network communication also needs to be reliable: a message sent to a receiving end may fail to arrive for various failure reasons, and after the failure reason is removed, it is desirable that the message is automatically sent successfully.
In the embodiment of the present application, while messages are buffered in the first sending queue, the sending end continuously tries to send the at least one currently buffered sending message to the receiving end, but sending can succeed only when the first sending queue is in the first state. The initial state of the first sending queue is the first state; after a message sending event occurs, the first sending queue is set to the second state, and while the first sending queue is in the second state it cannot send messages.
To ensure that the sending messages can reach the receiving end, a timeout may be set: if, within a predetermined time after one sending, the reply message of the receiving end has still not been received, the messages in the first sending queue may be forcibly sent to the receiving end, thereby ensuring that the messages are automatically sent successfully after the failure reason is removed.
Thus, in certain embodiments, the method may further comprise:
and if the reply message of the receiving end aiming at the at least one message is not received within the preset time after the at least one message cached in the first sending queue is sent to the receiving end, forcibly sending the at least one message cached in the first sending queue to the receiving end.
It can be understood that, because the sending end may still buffer sending messages to be sent into the first sending queue (according to the embodiment shown in fig. 1) while waiting for the receiving end to feed back the reply message, after the predetermined time has elapsed, the number of messages currently buffered in the first sending queue at the time of forced sending may have changed. In short, every sending operation sends all messages in the first sending queue.
Wherein the predetermined time may be set in combination with actual service requirements.
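Under the assumption that the sending end polls periodically, the forced-resend rule might look like the following sketch (hypothetical names; `timeout` stands for the predetermined time):

```python
import time

class TimeoutSender:
    """Sketch of the forced-resend rule: while a reply is outstanding,
    nothing new is sent, but once `timeout` seconds have passed since
    the last send, everything currently buffered goes out again."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.pending = []           # messages awaiting a reply
        self.awaiting_reply = False
        self.last_send = 0.0

    def send_all(self):
        self.awaiting_reply = True
        self.last_send = time.monotonic()
        return list(self.pending)   # the whole queue goes out as one packet

    def poll(self):
        """Force a resend once the predetermined time has elapsed.
        Messages buffered after the first send are included, matching
        the 'every send sends the whole queue' behaviour above."""
        if (self.awaiting_reply and self.pending
                and time.monotonic() - self.last_send >= self.timeout):
            return self.send_all()
        return None
```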
As can be seen from the above description, a first sending field is set for the first sending queue; for convenience of description, the first sending field is denoted by the symbol "sendId_s" hereinafter.
In order to facilitate the sending end determining whether the receiving end has processed the sending messages it sent, the receiving end may set a second receiving field, which is denoted by the symbol "ackId_r" hereinafter. The second receiving field is used for storing the maximum message sequence number among the messages sent by the sending end that have been processed by the receiving end.
Therefore, in some embodiments, the receiving a reply message to the at least one message from the receiving end, clearing the at least one message from the send queue, and switching the send queue to the first state includes:
receiving at least one reply message and a second receiving field value sent by the receiving end; the second receiving field is used for storing the maximum message sequence number in the message sent by the sending end and processed by the receiving end;
if the second receiving field value is the same as the first sending field value, the sending queue is switched to the first state, and messages corresponding to message sequence numbers smaller than or equal to the second receiving field value in the first sending queue are cleared.
In order to enable the sending end to determine which of the sending messages it sent the at least one reply message responds to, the second receiving field value may be sent to the sending end together with the at least one reply message.
If the second receiving field value is the same as the first sending field value, it indicates that all the sending messages sent by the sending end are processed by the receiving end, at this time, the messages corresponding to the message sequence numbers smaller than or equal to the second receiving field value in the first sending queue can be cleared, and the first sending queue can be switched to the first state.
If the second receiving field value is different from the first sending field value, it indicates that the sending messages sent by the sending end have not all been processed by the receiving end, but the messages with message sequence numbers smaller than or equal to the second receiving field value have been processed; at this time, the messages corresponding to message sequence numbers smaller than or equal to the second receiving field value in the first sending queue can be cleared.
When the second receiving field value is different from the first sending field value, it may be that, after the messages corresponding to the message sequence numbers smaller than or equal to the second receiving field value were sent to the receiving end, the sending end buffered new sending messages to be sent into the first sending queue, and these messages still need to be sent; therefore, optionally, the first sending queue may be switched to the first state at this time. Of course, to further ensure the network communication quality, the first sending queue may instead be kept in the second state until the reply messages for all sent messages are received or the predetermined time for forced sending is reached.
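The comparison of the second receiving field value with the first sending field value can be sketched as follows (hypothetical function and parameter names standing in for sendId_s and ackId_r):

```python
def handle_reply(queue, send_id_s, ack_id_r):
    """Sketch of reply handling at the sending end: messages with
    sequence numbers <= ackId_r have been processed by the receiver
    and are cleared; the queue returns to the first state only when
    ackId_r == sendId_s, i.e. everything sent so far was processed.
    queue is a list of (sequence number, payload) pairs."""
    remaining = [(seq, m) for seq, m in queue if seq > ack_id_r]
    back_to_first_state = (ack_id_r == send_id_s)
    return remaining, back_to_first_state
```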
The receiving end can maintain a second sending queue, and can sequentially buffer the reply messages to be sent to the sending end into the second sending queue.
The sender may set a first receive field, which is denoted by "ackId _ s" hereinafter, for the first send queue, and the first receive field may be used to store a maximum message sequence number of a message sent by the receiver and received by the sender.
The receiving end may set a second sending field, which is denoted by "sendId _ r" hereinafter, for the second sending queue, and the second sending field may store the maximum message sequence number of the reply message currently buffered in the second sending queue.
Thus, in certain embodiments, the method may further comprise:
setting a first receiving field for the first sending queue; the first receiving field is used for storing the maximum message sequence number of the message sent by the receiving end and received by the sending end; the receiving end maintains a second sending queue for the sending end, and is used for caching a reply message to be sent to the sending end;
and when at least one sending message cached currently in the first sending queue is sent to the receiving end, the first receiving field value is sent to the receiving end, so that the receiving end can clear reply messages corresponding to message sequence numbers smaller than or equal to the first receiving field in the second sending queue.
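The receiving-end side of this piggybacking can be sketched as follows (hypothetical function name; the sending end attaches its first receiving field value, ackId_s, to every packet):

```python
def on_packet_received(second_queue, ack_id_s):
    """Receiving-end sketch: reply messages whose sequence number is
    <= the piggybacked ackId_s were already received by the sending
    end and can be cleared from the second sending queue.
    second_queue is a list of (sequence number, reply) pairs."""
    return [(seq, reply) for seq, reply in second_queue if seq > ack_id_s]
```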
In addition, in practical application, a sending end may send sending messages of different message types to a receiving end, and the receiving end processes the sending messages of different message types independently, without mutual interference. If sending messages of different message types are sent using the same sending queue, the message transmission efficiency is affected, and thus the network communication efficiency is affected. Therefore, in some embodiments, the buffering, by the sending end, of the message to be sent into the first sending queue corresponding to the receiving end may include:
determining the message type of a sending message to be sent;
determining a first sending queue corresponding to the message type from a plurality of sending queues maintained for the receiving end; the sending end respectively maintains a plurality of first sending queues corresponding to different message types for different receiving ends;
and caching the sending message to be sent to a first sending queue corresponding to the message type.
That is, the sending end maintains a plurality of sending queues for different message types for the same receiving end, so that the sending messages of the same message type can be cached in the same sending queue, and the sending messages of different message types can be sent concurrently, thereby ensuring the network communication efficiency.
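A sketch of per-message-type queues (hypothetical names; queues keyed here by receiving end and message type, with per-queue consecutive sequence numbers):

```python
from collections import defaultdict

class TypedSender:
    """Sketch of per-type queues: the sending end keeps one sending
    queue per (receiving end, message type) pair, so messages of the
    same type share a queue and different types can be sent
    concurrently."""

    def __init__(self):
        self.queues = defaultdict(list)   # (receiver, msg_type) -> queue

    def buffer(self, receiver, msg_type, payload):
        q = self.queues[(receiver, msg_type)]
        q.append((len(q) + 1, payload))   # consecutive per-queue numbering
        return q[-1][0]
```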
Fig. 2 is a flowchart of another embodiment of a network communication method provided in an embodiment of the present application, where the embodiment is executed by a receiving end, and the method may include the following steps:
201: and receiving at least one sending message which is currently cached in a first sending queue and sent by a sending end.
Optionally, the sending end may send the at least one sending message currently buffered in the first sending queue to the receiving end simultaneously, as one data packet.
202: and processing the at least one sending message in sequence according to the caching sequence of the at least one sending message to obtain one or more reply messages.
At least one message buffered in the first sending queue has a buffering order, so that the receiving end can process the at least one message in sequence according to the buffering order of the at least one message, thereby obtaining one or more reply messages.
The receiving end may package reply contents respectively obtained by processing at least one transmission message into one reply message or multiple reply messages.
In order to facilitate the processing of the receiving end, the sending end may set a message sequence number for each message buffered in the sending queue according to the buffering sequence; the receiving end may specifically process the at least one message in sequence according to the buffering order indicated by the message sequence number of the at least one message.
In addition, for further convenience of determining the buffering order, consecutive numbers starting from the number 1 may be used to sequentially set the message sequence numbers for the messages buffered into the first sending queue.
203: and buffering the one or more reply messages to a second sending queue.
In this embodiment, the receiving end may also maintain a sending queue, namely the second sending queue; that is, the reply messages obtained by processing the sending messages sent by the sending end may be buffered into the second sending queue.
The one or more reply messages may be sequentially buffered in the second sending queue according to the processing time.
204: and sending at least one currently cached reply message in the second sending queue to the sending end.
In this embodiment, after the receiving end processes at least one sending message according to the buffering order, the obtained one or more reply messages may be buffered in the second sending queue, so that at least one reply message in the second sending queue also has the buffering order, which is convenient for the sending end to receive and process, and can ensure the order-preserving property of message transmission and network communication quality.
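The in-order processing of steps 202–203 can be sketched as follows (hypothetical names; `handler` stands for whatever processing the receiving end applies to each sending message):

```python
def process_packet(packet, handler):
    """Receiving-end sketch: process the packet strictly in
    sequence-number order, and collect the replies in that same
    order for buffering into the second sending queue.
    packet is a list of (sequence number, payload) pairs."""
    replies = []
    for seq, payload in sorted(packet, key=lambda m: m[0]):
        replies.append((seq, handler(payload)))
    return replies
```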
In some embodiments, the sending, to the sender, the at least one reply message currently buffered in the second send queue may include:
detecting whether the second sending queue is in a first state; when receiving a sending message of the sending end, switching the second sending queue to the first state;
and if the second sending queue is in the first state, sending at least one currently cached reply message in the second sending queue to a sending end, and switching the second sending queue to a second state.
In order to ensure the performance of the receiving end, the receiving end may also set the first state and the second state for the second sending queue.
Therefore, in some embodiments, the method may further include:
and setting message sequence numbers for all reply messages buffered in the second sending queue according to the buffering sequence.
In some embodiments, the setting, according to the buffering order, a message sequence number for each reply message buffered in the second sending queue may include:
and according to the buffering sequence, sequentially setting message sequence numbers for the reply messages buffered to the second sending queue by adopting continuous numbers from the number 1.
Further, in certain embodiments, the method may further comprise:
setting a second sending field for the second sending queue; the second sending field is used for storing the maximum message sequence number of the reply message currently cached in the second sending queue;
the receiving at least one sending message currently cached in a first sending queue sent by a sending end includes:
receiving at least one sending message and a first receiving field value which are cached currently in a first sending queue and sent by a sending end; the first receiving field value is used for storing a maximum message sequence number in a reply message sent by the receiving end and received by the sending end;
and clearing the messages corresponding to the message sequence numbers which are less than or equal to the first receiving field value in the second sending queue.
For convenience of understanding, in the network communication interaction diagram shown in fig. 3, a sending end 301 buffers a sending message to be sent to a receiving end 302 into a first sending queue 31; when the first sending queue 31 is in the first state, at least one sending message currently buffered in the first sending queue may be sent to the receiving end 302, and the at least one sending message is sent to the receiving end 302 as one data packet, which can save sending resources.
Further, the sending end 301 may maintain a state for the first sending queue 31, with receiveReply = true indicating the first state and receiveReply = false indicating the second state.
Assuming that the sending message currently buffered in the first sending queue 31 by the sending end 301 is the first sending message to be sent to the receiving end 302, a message sequence number 1 may be set for this sending message; at this time, the first sending field maintained for the first sending queue 31 is sendId_s = 1, and the first receiving field is ackId_s = 0. Because receiveReply = true, the sending message in the first sending queue 31 is sent to the receiving end 302, and then receiveReply is switched to false. Meanwhile, ackId_s = 0 may be sent to the receiving end 302 together with the sending message.
If the receiving end 302 fails to receive the sent message for network reasons or the like, the sending end 301 will not receive a reply message within a certain time. To ensure that the receiving end 302 can receive the sending message, the sending end 301 keeps trying to send out the sending messages in the first sending queue 31, but cannot do so because receiveReply = false. A timeout may therefore be set: if the reply message of the receiving end 302 is not received within a predetermined time after sending, all the sending messages in the first sending queue 31 may be forcibly sent again.
If, before the predetermined time is reached, the sending end 301 buffers one more sending message into the first sending queue 31, then sendId_s = 2 and ackId_s = 0; since receiveReply = false, the two sending messages are not sent.
When the predetermined time is reached, the sending end 301 forcibly sends the 2 sending messages in the first sending queue 31 to the receiving end 302. If the receiving end 302 receives the 2 sending messages, it may process them; assume 2 reply messages are generated. The receiving end 302 may maintain a second sending queue 32 and buffer the 2 reply messages into the second sending queue 32 in sequence, at which point sendId_r = 2 and ackId_r = 2 at the receiving end 302, and the receiving end 302 sends ackId_r = 2 to the sending end 301 together with the reply messages.
After receiving the 2 reply messages, since ackId_r = 2 equals sendId_s = 2, the sending end 301 may clear the 2 sending messages with message sequence numbers smaller than or equal to sendId_s = 2 from the first sending queue 31 and may switch receiveReply to true; at this time sendId_s = 2 and ackId_s = 2.
After that, the sending end 301 buffers a sending message into the first sending queue 31, so that sendId_s = 3 and ackId_s = 2; since receiveReply = true, the sending message may be sent to the receiving end 302, together with ackId_s = 2. The receiving end 302 receives the sending message and performs the corresponding processing as described above; assuming one reply message is obtained and buffered into the second sending queue 32, then, since ackId_s = ackId_r = 2, the 2 reply messages with message sequence numbers smaller than or equal to ackId_s = 2 buffered in the second sending queue 32 can be cleared, at which time sendId_r = 3 and ackId_r = 3 in the second sending queue 32.
In addition, the sending end can maintain respective corresponding sending queues for different receiving ends respectively.
In addition, since the sending end may send messages of different message types to the receiving end, in order to improve the network communication efficiency, the sending end may create one sending queue per message type for each receiving end. As shown in fig. 4, the sending end 301 may maintain a plurality of sending queues 30 for different receiving ends respectively, to send sending messages of different message types.
Of course, the receiving end may likewise maintain multiple sending queues for the same sending end, for reply messages of different message types.
In practical application, a sending end may send messages of different message types to a receiving end, and the processing of these messages is independent and mutually non-interfering; if the same sending queue were used, the pressure on that sending queue would be too high. Therefore, as another optional manner, the sending end may maintain multiple sending queues, and each time there is a sending message to be sent, one of the sending queues may be selected for buffering and sending. Fig. 5 is a flowchart of another embodiment of a network communication method provided in an embodiment of the present application, where the technical solution of this embodiment is executed by the sending end, and the method may include the following steps:
501: a transmission message to be transmitted is determined.
502: searching a third sending queue in the first state from the plurality of sending queues; wherein the initial state of the transmission queue is a first state.
The third transmit queue is any one of the plurality of transmit queues that is in the first state.
503: buffering the sending message to the third sending queue;
504: and sending at least one sending message in the third sending queue to a receiving end, and switching the third sending queue to a second state.
505: and receiving a reply message of the receiving end aiming at the at least one sending message, clearing the at least one sending message from the third sending queue, and switching the third sending queue to the first state.
In this embodiment, the sending end may maintain a plurality of sending queues. For a sending message to be sent, one sending queue in the first state may be selected as the third sending queue, the sending message is buffered into the third sending queue, and at least one currently buffered sending message is sent from the third sending queue to the receiving end; after a reply message for the at least one sending message is received from the receiving end, the third sending queue is switched back to the first state, so that the sending end may continue to send other messages. Since any sending queue in the first state can be selected for each message to be sent, the sending performance of the sending end can be guaranteed; and since a queue has the first-in first-out characteristic, each message in a queue has a definite buffering order, so the receiving end can process the at least one message according to that buffering order, which guarantees the order preservation of the network communication. The network communication quality is thus guaranteed by the embodiments of the present application.
As shown in the network communication interaction diagram at the sending end shown in fig. 6, the sending end 601 may create a plurality of sending queues 60; for a sending message to be sent, a third sending queue 61 in the first state may be selected from the plurality of sending queues 60, the sending message is buffered into the third sending queue 61, and at least one currently buffered sending message in the third sending queue 61 is sent to the receiving end 602.
In certain embodiments, the method may further comprise:
and if the reply message of the receiving end is not received within a predetermined time after the sending message is sent to the receiving end, forcibly sending the sending messages in the third sending queue to the receiving end, thereby ensuring that the receiving end can receive the sent messages while avoiding wasting resources on overly frequent sending.
In certain embodiments, the method may further comprise:
if the plurality of sending queues are all in the second state, selecting any sending queue;
buffering the sending message into any sending queue;
sending at least one sending message cached currently in any sending queue to the receiving end;
and receiving a reply message aiming at the at least one sending message, and switching any one sending queue to a first state.
Optionally, the method may further include:
and if the reply message of the receiving end is not received within a preset time after at least one sending message currently cached in any sending queue is sent to the receiving end, forcibly sending at least one sending message currently cached in any sending queue to the receiving end.
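The queue selection with the all-busy fallback described above can be sketched as follows (a hypothetical representation of the queues as dicts with a `state` key):

```python
def choose_queue(queues):
    """Sketch of queue selection: prefer any sending queue in the
    first state; if every queue is in the second state, fall back to
    an arbitrary queue and buffer the message there anyway -- it will
    be sent once that queue's reply arrives or on forced resend."""
    for q in queues:
        if q["state"] == "first":
            return q
    return queues[0]   # all busy: pick any queue
```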
Further, in certain embodiments, the method may further comprise:
and according to the buffering sequence, setting message sequence numbers for all the sending messages buffered into the third sending queue in sequence by adopting continuous numbers from the number 1.
In certain embodiments, the method may further comprise:
setting a first transmission field for the third transmission queue; wherein, the first sending field is used for storing the maximum message sequence number of the buffered message in the third sending queue;
the receiving a reply message to the send message from the receiving end, clearing the send message from the third send queue, and switching the third send queue to the first state includes:
receiving at least one reply message and a second receiving field value sent by the receiving end; the second receiving field is used for storing the maximum message sequence number in the sending message sent by the sending end and processed by the receiving end;
if the second receiving field value is the same as the first sending field value, switching the third sending queue to the first state, and clearing the sending messages corresponding to the message sequence numbers which are smaller than or equal to the second receiving field value in the third sending queue.
Optionally, if the second receiving field value is different from the first sending field value, sending messages corresponding to message sequence numbers in the third sending queue that are smaller than or equal to the second receiving field value may be cleared.
In certain embodiments, the method may further comprise:
setting a first receiving field for the third sending queue; the first receiving field is used for storing the maximum message sequence number of the reply message sent by the receiving end and received by the sending end; the receiving end maintains a fourth sending queue for the sending end, and is used for caching a reply message to be sent to the sending end;
and sending the currently buffered at least one sending message in the third sending queue to the receiving end, together with the first receiving field value, so that the receiving end can clear the reply messages corresponding to message sequence numbers smaller than or equal to the first receiving field value in the fourth sending queue.
To facilitate finding a transmit queue in the first state, in some embodiments, the method may further comprise:
setting an available resource list based on the queue identifications of the plurality of sending queues;
when the plurality of sending queues are in an initial state, sequentially storing queue identifications of the plurality of sending queues in the available resource list;
after the buffering the sending message to the third sending queue and switching the third sending queue to the second state, the method further includes:
removing the queue identification of the third transmit queue from the list of available resources.
After the switching the third transmit queue to the first state, the method further comprises:
storing the queue identification of the third transmit queue in the list of available resources.
Then, said searching for the third transmit queue in the first state from the plurality of transmit queues may comprise:
selecting any one queue identification from the list of available resources;
and determining the transmission queue represented by any queue identification as a third transmission queue.
Optionally, a first queue identifier may be selected from the available resource list, and a transmission queue corresponding to the first queue identifier may be used as the third transmission queue. After the third transmit queue switches back to the first state, the queue id of the third transmit queue may be stored to the end of the list of available resources.
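The available resource list can be sketched with a double-ended queue (hypothetical names; the description above only requires that identifiers of first-state queues be stored, that an identifier leave the list while its queue awaits a reply, and that a released identifier go to the end of the list):

```python
from collections import deque

class QueuePool:
    """Sketch of the available resource list: holds the identifiers
    of sending queues currently in the first state."""

    def __init__(self, n):
        self.available = deque(range(n))   # all queues start in first state

    def acquire(self):
        """Pick the third sending queue: here, the head of the list.
        Its identifier is removed while the queue is in the second state."""
        return self.available.popleft() if self.available else None

    def release(self, qid):
        """The queue switched back to the first state: store its
        identifier at the end of the available resource list."""
        self.available.append(qid)
```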
Fig. 7 is a flowchart of another embodiment of a network communication method provided in an embodiment of the present application, where the technical solution of the embodiment is executed by a receiving end, and the method may include the following steps:
701: and receiving at least one sending message currently buffered in a third sending queue sent by a sending end.
702: and processing the at least one sending message in sequence according to the caching sequence of the at least one sending message to obtain one or more reply messages.
703: and buffering the one or more reply messages to a fourth sending queue.
704: and sending at least one currently cached reply message in the fourth sending queue to the sending end.
In this embodiment, after the receiving end processes at least one sending message according to the buffering order, one or more obtained reply messages may be buffered in the fourth sending queue, so that at least one reply message in the fourth sending queue also has the buffering order, which is convenient for the sending end to receive and process, and can ensure the order preservation of message transmission and network communication quality.
In some embodiments, the sending, to the sender, the at least one reply message currently buffered in the fourth send queue may include:
detecting whether the fourth sending queue is in a first state; when receiving a sending message of the sending end, switching the fourth sending queue to the first state;
and if the fourth sending queue is in the first state, sending at least one currently cached reply message in the fourth sending queue to a sending end, and switching the fourth sending queue to a second state.
In order to ensure the performance of the receiving end, the receiving end may also set the first state and the second state for the fourth transmit queue.
Therefore, in some embodiments, the method may further include:
and setting message sequence numbers for all reply messages buffered into the fourth sending queue according to the buffering sequence.
In some embodiments, the setting, according to the buffering order, a message sequence number for each reply message buffered in the fourth sending queue may include:
and according to the buffering sequence, setting message sequence numbers for the reply messages buffered to the fourth sending queue in sequence by adopting continuous numbers from the number 1.
Further, in certain embodiments, the method may further comprise:
setting a second sending field for the fourth sending queue; the second sending field is used for storing the maximum message sequence number of the reply message currently cached in the fourth sending queue;
the receiving at least one sending message currently cached in a third sending queue sent by the sending end includes:
receiving at least one sending message and a first receiving field value which are cached currently in a third sending queue sent by a sending end; the first receiving field value is used for storing a maximum message sequence number in a reply message sent by the receiving end and received by the sending end;
and clearing the messages corresponding to the message sequence numbers which are less than or equal to the first receiving field value in the fourth sending queue.
In practical application, the technical solution of the embodiment of the present application may be applied to a distributed job processing system. In the job processing system, machine nodes are responsible for providing machine resources, each machine resource may run one job node, and a job submitted by a user may be allocated to multiple job nodes for execution, thereby implementing distributed processing. To facilitate resource management, a resource manager is used for resource scheduling in the job processing system, and a job manager is responsible for managing and controlling job nodes. Specifically, the resource manager is responsible for receiving jobs to be processed submitted by users and creating the job manager on a machine node; the job manager applies to the resource manager for a certain amount of machine resources based on the job to be processed; the resource manager performs resource allocation based on the available resources of different machine nodes and feeds back the resource allocation result (how many machine resources are allocated on which machine nodes, etc.) to the job manager; and the job manager creates job nodes on the respective machine nodes based on the resource allocation result to execute the job.
As can be seen from the above description, network communication between multiple roles exists in the job processing system, and network communication between any two roles, including between a resource manager and a job manager, between a resource manager and a machine node, between a job manager and a machine node, and between a job manager and a job node, can be implemented by the technical solution provided in the embodiments of the present application.
Fig. 8 is a schematic structural diagram of an embodiment of a network communication device according to an embodiment of the present application, where the network communication device may include:
a first caching module 801, configured to cache a to-be-sent transmission message in a first transmission queue corresponding to a receiving end;
a first detecting module 802, configured to detect whether the first sending queue is in a first state; wherein the initial state of the first transmit queue is the first state;
a first sending module 803, configured to send, to a receiving end, at least one sending message currently cached in the first sending queue if the first sending queue is in the first state, and switch the first sending queue to a second state;
a first processing module 804, configured to receive a reply message to the at least one sending message from the receiving end, clear the at least one sending message in the first sending queue, and switch the first sending queue to the first state.
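The four modules above together form a batching state machine: messages accumulate while a batch is in flight, and the queue alternates between the first (sendable) state and the second (waiting) state. The following minimal Python sketch illustrates this behavior; all class and method names are invented for illustration and are not taken from the disclosure.

```python
from enum import Enum

class QueueState(Enum):
    FIRST = 1   # the queue may send its cached batch
    SECOND = 2  # a batch is in flight, awaiting a reply

class FirstSendQueue:
    """Illustrative sketch of the send-side state machine."""

    def __init__(self):
        self.messages = []               # to-be-sent messages, in caching order
        self.in_flight = []
        self.state = QueueState.FIRST    # the initial state is the first state

    def cache(self, msg):
        # Messages may be cached regardless of the current state.
        self.messages.append(msg)

    def try_send(self, transport):
        # Send the currently cached batch only in the first state,
        # then switch the queue to the second state.
        if self.state is QueueState.FIRST and self.messages:
            self.in_flight = list(self.messages)
            transport(self.in_flight)
            self.state = QueueState.SECOND
            return True
        return False

    def on_reply(self):
        # Reply received: clear only the acknowledged batch and switch back,
        # so messages cached while waiting are kept for the next batch.
        self.messages = self.messages[len(self.in_flight):]
        self.in_flight = []
        self.state = QueueState.FIRST
```

Messages cached while the queue is in the second state are simply held, which is how the scheme batches traffic without blocking callers.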
In some embodiments, the first sending module is further configured to, if a reply message from the receiving end for the at least one sending message is not received within a predetermined time after the at least one sending message currently cached in the first sending queue is sent to the receiving end, forcibly send the at least one sending message currently cached in the first sending queue to the receiving end again.
In some embodiments, the apparatus may further comprise:
the first sequence number setting module is used for setting message sequence numbers for all the sending messages cached to the sending queue according to the caching sequence; and the receiving end sequentially processes the at least one sending message according to the caching sequence indicated by the message sequence number of the at least one sending message.
In some embodiments, the first sequence number setting module is specifically configured to set, in order from a number 1, message sequence numbers for each sending message buffered in the first sending queue in sequence by using consecutive numbers.
In some embodiments, the apparatus may further comprise:
a first field setting module, configured to set a first transmission field for the first transmission queue; wherein, the first sending field is used for storing the maximum message sequence number of the buffered message in the first sending queue;
the first processing module is specifically configured to receive at least one reply message and a second receiving field value sent by the receiving end; the second receiving field is used for storing the maximum message sequence number among the sending messages sent by the sending end that have been processed by the receiving end; and if the second receiving field value is the same as the first sending field value, switch the first sending queue to the first state and clear the sending messages corresponding to message sequence numbers less than or equal to the second receiving field value in the first sending queue.
In some embodiments, the first processing module is further configured to clear the sending messages corresponding to message sequence numbers less than or equal to the second receiving field value in the first sending queue if the second receiving field value is different from the first sending field value.
In some embodiments, the first field setting module is further configured to set a first receiving field for the first sending queue; the first receiving field is used for storing the maximum message sequence number among the reply messages sent by the receiving end and received by the sending end; the receiving end maintains a second sending queue for the sending end, used for caching reply messages to be sent to the sending end;
the first sending module is further configured to send the first receiving field value to the receiving end when sending the at least one sending message currently cached in the first sending queue, so that the receiving end clears the reply messages corresponding to message sequence numbers less than or equal to the first receiving field value in the second sending queue.
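The first sending field and the second receiving field implement a cumulative acknowledgment: the receiving end reports the highest sequence number it has processed, and the sending end clears everything at or below it, returning the queue to the first state only when the two values match. The sketch below is an illustrative assumption, not the disclosed implementation.

```python
class FieldedSendQueue:
    """Illustrative sketch of sequence numbers plus send/receive fields."""

    def __init__(self):
        self.buffer = {}              # message sequence number -> message
        self.next_seq = 1             # consecutive numbers starting from 1
        self.first_send_field = 0     # max seq currently cached in this queue
        self.first_recv_field = 0     # max reply seq received (piggybacked to peer)

    def cache(self, msg):
        # Assign consecutive sequence numbers in caching order.
        self.buffer[self.next_seq] = msg
        self.first_send_field = self.next_seq
        self.next_seq += 1

    def on_replies(self, second_recv_field):
        # The peer reports the max sequence number it has processed;
        # clear every cached message at or below that number.
        for seq in [s for s in self.buffer if s <= second_recv_field]:
            del self.buffer[seq]
        # The queue may return to the first state only if everything sent
        # has been processed, i.e. the two field values match.
        return second_recv_field == self.first_send_field
```

This mirrors cumulative acknowledgment in classic sliding-window protocols: a single integer acknowledges an entire prefix of the sequence.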
The sending end maintains a corresponding sending queue for each of the different receiving ends.
In some embodiments, the first sending module may be specifically configured to determine the message type of a sending message to be sent; determine, from a plurality of sending queues maintained for the receiving end, the first sending queue corresponding to that message type, where the sending end maintains, for the same receiving end, a plurality of first sending queues corresponding to different message types; and cache the sending message to be sent into the first sending queue corresponding to its message type.
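Maintaining separate first sending queues per receiving end and per message type amounts to a keyed mapping of queues. The structure below is a hedged, illustrative sketch only; all names are invented.

```python
from collections import defaultdict

class SenderQueues:
    """Illustrative sketch: one send queue per (receiving end, message type)."""

    def __init__(self):
        # Each entry is the first send queue maintained by the sending end
        # for one receiver and one message type.
        self.queues = defaultdict(list)

    def cache(self, receiver_id, msg_type, msg):
        # Route the outgoing message to the queue for this receiver and type.
        self.queues[(receiver_id, msg_type)].append(msg)
```

Keying by message type keeps, say, heartbeat traffic from being delayed behind large data batches destined for the same receiver.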
The network communication apparatus shown in fig. 8 can execute the network communication method of the embodiment shown in fig. 1; the implementation principle and technical effects are similar and are not described again. The specific manner in which each module and unit of the network communication apparatus in the above embodiment performs its operations has been described in detail in the embodiments related to the method and will not be detailed here.
In one possible design, the network communication apparatus of the embodiment shown in fig. 8 may be implemented as an electronic device, which may include a storage component 901 and a processing component 902, as shown in fig. 9;
the storage component 901 stores one or more computer instructions for the processing component 902 to invoke for execution.
The processing component 902 is configured to:
caching a sending message to be sent into a first sending queue corresponding to a receiving end;
detecting whether the first sending queue is in a first state; wherein the initial state of the first transmit queue is the first state;
if the first sending queue is in the first state, sending at least one sending message currently cached in the first sending queue to a receiving end, and switching the first sending queue to a second state;
and receiving a reply message of the receiving end aiming at the at least one sending message, clearing the at least one sending message in the first sending queue, and switching the first sending queue to the first state.
Among other things, the processing component 902 may include one or more processors to execute computer instructions to perform all or some of the steps of the methods described above. Of course, the processing component may also be implemented as one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components configured to perform the above-described methods.
The storage component 901 is configured to store various types of data to support operations at the electronic device. The storage component may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or magnetic or optical disks.
Of course, the electronic device may also comprise other components, such as input/output interfaces, communication components, etc.
The input/output interface provides an interface between the processing components and peripheral interface modules, which may be output devices, input devices, etc.
The communication component is configured to facilitate wired or wireless communication between the electronic device and other devices, and the like.
The electronic device may be a physical device or an elastic computing host provided by a cloud computing platform; for example, it may be a cloud server, in which case the processing component, the storage component, and the like may be basic server resources rented or purchased from the cloud computing platform.
An embodiment of the present application further provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a computer, the network communication method of the embodiment shown in fig. 1 may be implemented.
Fig. 10 is a schematic structural diagram of an embodiment of a network communication device according to an embodiment of the present application, where the network communication device may include:
a first receiving module 1001, configured to receive at least one currently cached sending message in a first sending queue sent by a sending end;
a second processing module 1002, configured to sequentially process the at least one sending message according to a caching order of the at least one sending message to obtain one or more reply messages;
a second caching module 1003, configured to cache the one or more reply messages in a second sending queue;
a second sending module 1004, configured to send, to the sending end, at least one reply message currently buffered in the second sending queue.
In some embodiments, the second sending module is specifically configured to detect whether the second sending queue is in a first state, where the second sending queue is switched to the first state upon receiving a sending message from the sending end; and if the second sending queue is in the first state, send at least one currently cached reply message in the second sending queue to the sending end and switch the second sending queue to a second state.
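The receive-side behavior described above — processing the batch strictly in the caching order indicated by the message sequence numbers, then flushing the replies cached in the second sending queue — might be sketched as follows. All names are illustrative assumptions.

```python
class ReceiverEnd:
    """Illustrative receive side: ordered processing plus a reply queue."""

    def __init__(self, handler):
        self.handler = handler
        self.reply_queue = []   # the second send queue (replies to the sender)
        self.reply_seq = 0

    def on_batch(self, batch):
        # batch: iterable of (message_seq, message). Process strictly in the
        # caching order indicated by the message sequence numbers.
        for _, msg in sorted(batch):
            self.reply_seq += 1
            self.reply_queue.append((self.reply_seq, self.handler(msg)))
        # Flush every reply currently cached in the second send queue.
        out, self.reply_queue = self.reply_queue, []
        return out
```

Sorting by sequence number restores the sender's caching order even if the transport delivers the batch contents out of order.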
In some embodiments, the apparatus may further comprise:
and the second sequence number setting module is configured to set message sequence numbers for the reply messages buffered into the second sending queue, using consecutive numbers starting from 1, according to the buffering order.
In some embodiments, the apparatus may further comprise:
a second field setting module, configured to set a second sending field for the second sending queue; the second sending field is used for storing the maximum message sequence number of the reply message currently cached in the second sending queue;
the first receiving module is specifically configured to receive at least one currently cached sending message and a first receiving field value in a first sending queue sent by a sending end; the first receiving field value is used for storing a maximum message sequence number in a reply message sent by the receiving end and received by the sending end;
the second processing module is specifically configured to clear messages corresponding to message sequence numbers in the second sending queue that are less than or equal to the first receiving field value.
The network communication apparatus shown in fig. 10 can execute the network communication method of the embodiment shown in fig. 2; the implementation principle and technical effects are similar and are not described again. The specific manner in which each module and unit of the network communication apparatus in the above embodiment performs its operations has been described in detail in the embodiments related to the method and will not be detailed here.
In one possible design, the network communication apparatus of the embodiment shown in fig. 10 may be implemented as an electronic device, which may include a storage component 1101 and a processing component 1102, as shown in fig. 11;
the storage component 1101 stores one or more computer instructions for invoking execution by the processing component 1102.
The processing component 1102 is configured to:
receiving at least one sending message which is currently cached in a first sending queue and sent by a sending end;
processing the at least one sending message in sequence according to the caching sequence of the at least one sending message to obtain one or more reply messages;
caching the one or more reply messages into a second sending queue;
and sending at least one currently cached reply message in the second sending queue to the sending end.
Among other things, the processing component 1102 may include one or more processors to execute computer instructions to perform all or some of the steps of the methods described above. Of course, the processing component may also be implemented as one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components configured to perform the above-described methods.
The storage component 1101 is configured to store various types of data to support operations at the electronic device. The storage component may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or magnetic or optical disks.
Of course, the electronic device may also comprise other components, such as input/output interfaces, communication components, etc.
The input/output interface provides an interface between the processing components and peripheral interface modules, which may be output devices, input devices, etc.
The communication component is configured to facilitate wired or wireless communication between the electronic device and other devices, and the like.
The electronic device may be a physical device or an elastic computing host provided by a cloud computing platform; for example, it may be a cloud server, in which case the processing component, the storage component, and the like may be basic server resources rented or purchased from the cloud computing platform.
An embodiment of the present application further provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a computer, the network communication method of the embodiment shown in fig. 2 may be implemented.
Fig. 12 is a schematic structural diagram of another embodiment of a network communication device according to an embodiment of the present application, where the device may include:
a message determining module 1201, configured to determine a sending message to be sent;
a searching module 1202, configured to search a third sending queue in the first state from the multiple sending queues; wherein the initial state of the third transmit queue is the first state;
a third caching module 1203, configured to cache the sending message to the third sending queue;
a third sending module 1204, configured to send at least one currently cached sending message in the third sending queue to a receiving end, and switch the third sending queue to a second state;
a third processing module 1205, configured to receive a reply message to the at least one sending message from the receiving end, clear the sending message from the third sending queue, and switch the third sending queue to the first state.
In some embodiments, the third processing module may be further configured to, if a reply message of the receiving end is not received within a predetermined time after the at least one sending message is sent to the receiving end, forcibly send, to the receiving end, the at least one sending message currently buffered in the third sending queue.
In some embodiments, the apparatus may further comprise:
the fourth processing module is configured to, if all of the plurality of sending queues are in the second state, select any one of the sending queues; buffer the sending message into the selected sending queue; send at least one sending message currently cached in the selected sending queue to the receiving end; and upon receiving a reply message for the at least one sending message, switch the selected sending queue to the first state.
In some embodiments, the apparatus may further comprise:
and the third sequence number setting module is used for setting message sequence numbers for all the sending messages buffered to the third sending queue in sequence by adopting continuous numbers from the number 1 according to the buffering sequence.
In some embodiments, the apparatus may further comprise:
a third field setting module, configured to set a first sending field for the third sending queue; wherein, the first sending field is used for storing the maximum message sequence number of the buffered message in the third sending queue;
the third processing module is specifically configured to receive at least one reply message and a second receiving field value sent by the receiving end; the second receiving field is used for storing the maximum message sequence number among the sending messages sent by the sending end that have been processed by the receiving end; and if the second receiving field value is the same as the first sending field value, switch the third sending queue to the first state and clear the sending messages corresponding to message sequence numbers less than or equal to the second receiving field value in the third sending queue.
In some embodiments, the third field setting module is specifically configured to set a first receiving field for the third sending queue; the first receiving field is used for storing the maximum message sequence number of the reply message sent by the receiving end and received by the sending end; the receiving end maintains a fourth sending queue for the sending end, and is used for caching a reply message to be sent to the sending end;
the third processing module is further configured to send the first receiving field value to the receiving end while sending the at least one sending message currently cached in the third sending queue, so that the receiving end clears the reply messages corresponding to message sequence numbers less than or equal to the first receiving field value in the fourth sending queue.
In some embodiments, the apparatus may further comprise:
a list setting module, configured to set an available resource list based on the queue identifiers of the multiple sending queues; when the plurality of sending queues are in an initial state, sequentially storing queue identifications of the plurality of sending queues in the available resource list;
the list setting module is further configured to delete the queue identifier of the third transmit queue from the available resource list; and after switching the third sending queue to the first state, storing the queue identification of the third sending queue in the available resource list.
In addition, the lookup module may be specifically configured to select any one queue identifier from the available resource list, and determine the sending queue identified by the selected queue identifier as the third sending queue.
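The available-resource-list mechanism described above amounts to a free list of queue identifiers: an identifier is removed while its queue is in the second state and re-added when the queue returns to the first state, with a fallback to an arbitrary queue when the list is empty. The following is a hypothetical sketch with invented names.

```python
import random
from collections import deque

class QueuePool:
    """Illustrative free list of queue ids (the available resource list)."""

    def __init__(self, n):
        self.queues = {qid: [] for qid in range(n)}  # queue id -> cached messages
        self.available = deque(range(n))             # ids of queues in the first state

    def acquire(self):
        # Take an idle queue's id off the list while that queue is busy.
        if self.available:
            return self.available.popleft()
        # All queues are in the second state: fall back to any one of them.
        return random.choice(list(self.queues))

    def release(self, qid):
        # The queue switched back to the first state: re-add its id.
        self.queues[qid].clear()
        self.available.append(qid)
```

Keeping the list limited to first-state queues makes the lookup O(1) instead of scanning every queue's state on each send.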
The network communication apparatus shown in fig. 12 can execute the network communication method of the embodiment shown in fig. 5; the implementation principle and technical effects are similar and are not described again. The specific manner in which each module and unit of the network communication apparatus in the above embodiment performs its operations has been described in detail in the embodiments related to the method and will not be detailed here.
In one possible design, the network communication apparatus of the embodiment shown in fig. 12 may be implemented as an electronic device, which may include a storage component 1301 and a processing component 1302 as shown in fig. 13;
the storage component 1301 stores one or more computer instructions for the processing component 1302 to invoke for execution.
The processing component 1302 is configured to:
determining a sending message to be sent;
searching a third sending queue in the first state from the plurality of sending queues; wherein the initial state of the third transmit queue is the first state;
buffering the sending message to the third sending queue;
sending at least one sending message currently cached in the third sending queue to a receiving end, and switching the third sending queue to a second state;
and receiving a reply message of the receiving end aiming at the at least one sending message, clearing the sending message from the third sending queue, and switching the third sending queue to the first state.
Among other things, the processing component 1302 may include one or more processors to execute computer instructions to perform all or some of the steps of the methods described above. Of course, the processing component may also be implemented as one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components configured to perform the above-described methods.
The storage component 1301 is configured to store various types of data to support operations at the electronic device. The storage component may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or magnetic or optical disks.
Of course, the electronic device may also comprise other components, such as input/output interfaces, communication components, etc.
The input/output interface provides an interface between the processing components and peripheral interface modules, which may be output devices, input devices, etc.
The communication component is configured to facilitate wired or wireless communication between the electronic device and other devices, and the like.
The electronic device may be a physical device or an elastic computing host provided by a cloud computing platform; for example, it may be a cloud server, in which case the processing component, the storage component, and the like may be basic server resources rented or purchased from the cloud computing platform.
An embodiment of the present application further provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a computer, the network communication method of the embodiment shown in fig. 5 may be implemented.
Fig. 14 is a schematic structural diagram of another embodiment of a network communication device according to an embodiment of the present application, where the device may include:
a second receiving module 1401, configured to receive at least one sending message currently cached in a third sending queue sent by a sending end;
a fourth processing module 1402, configured to sequentially process the at least one sending message according to a buffering order of the at least one sending message to obtain one or more reply messages;
a fourth buffering module 1403, configured to buffer the one or more reply messages into a fourth sending queue;
a fourth sending module 1404, configured to send, to the sending end, at least one reply message currently cached in the fourth sending queue.
The network communication apparatus shown in fig. 14 can execute the network communication method of the embodiment shown in fig. 6; the implementation principle and technical effects are similar and are not described again. The specific manner in which each module and unit of the network communication apparatus in the above embodiment performs its operations has been described in detail in the embodiments related to the method and will not be detailed here.
In one possible design, the network communication apparatus of the embodiment shown in fig. 14 may be implemented as an electronic device, which may include a storage component 1501 and a processing component 1502 as shown in fig. 15;
the storage component 1501 stores one or more computer instructions for the processing component 1502 to invoke for execution.
The processing component 1502 is configured to:
receiving at least one sending message which is sent by a sending end and is currently cached in a third sending queue;
processing the at least one sending message in sequence according to the caching sequence of the at least one sending message to obtain one or more reply messages;
caching the one or more reply messages into a fourth sending queue;
and sending at least one currently cached reply message in the fourth sending queue to the sending end.
Among other things, the processing component 1502 may include one or more processors to execute computer instructions to perform all or some of the steps of the methods described above. Of course, the processing component may also be implemented as one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components configured to perform the above-described methods.
The storage component 1501 is configured to store various types of data to support operations at the electronic device. The storage component may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or magnetic or optical disks.
Of course, the electronic device may also comprise other components, such as input/output interfaces, communication components, etc.
The input/output interface provides an interface between the processing components and peripheral interface modules, which may be output devices, input devices, etc.
The communication component is configured to facilitate wired or wireless communication between the electronic device and other devices, and the like.
The electronic device may be a physical device or an elastic computing host provided by a cloud computing platform; for example, it may be a cloud server, in which case the processing component, the storage component, and the like may be basic server resources rented or purchased from the cloud computing platform.
An embodiment of the present application further provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a computer, the network communication method of the embodiment shown in fig. 6 can be implemented.
In addition, in combination with the above description, from an overall perspective, an embodiment of the present application further provides a network communication method, including:
the sending end caches a sending message to be sent to a first sending queue corresponding to the receiving end;
the sending end detects whether the first sending queue is in a first state;
if the first sending queue is in the first state, the sending end sends at least one sending message currently cached in the first sending queue to the receiving end, and switches the first sending queue to a second state;
the receiving end sequentially processes the at least one sending message according to the caching sequence of the at least one sending message to obtain one or more reply messages;
the receiving end caches the one or more reply messages to a second sending queue;
the receiving end sends at least one reply message cached currently in the second sending queue to the sending end;
and the sending end receives a reply message aiming at the at least one sending message, clears the at least one sending message in the first sending queue and switches the first sending queue to the first state.
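Putting the two ends together, the overall method above reduces to a stop-and-wait exchange of batches and ordered replies. The end-to-end sketch below is illustrative only: a synchronous call stands in for the network, and all names are invented.

```python
class Sender:
    """Illustrative sender half of the overall flow."""

    def __init__(self):
        self.queue = []
        self.first_state = True   # the initial state is the first state

    def cache(self, msg):
        self.queue.append(msg)

    def flush(self, receiver):
        if not (self.first_state and self.queue):
            return []
        batch = list(enumerate(self.queue, 1))  # (message seq, message)
        self.first_state = False                # switch to the second state
        replies = receiver.on_batch(batch)      # synchronous stand-in for the network
        self.queue.clear()                      # reply received: clear the batch
        self.first_state = True                 # switch back to the first state
        return replies

class Receiver:
    """Illustrative receiver half: replies in caching order."""

    def on_batch(self, batch):
        return [(seq, "ack:" + msg) for seq, msg in sorted(batch)]
```

Because one reply acknowledges a whole batch, the number of round trips grows with the number of batches rather than the number of messages.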
For the specific operations performed by the sending end and the receiving end, reference may be made to the detailed description above; they are not repeated here.
In addition, the embodiment of the application also provides a network communication system, which comprises a sending end and a receiving end;
the sending end is used for caching a sending message to be sent to a first sending queue corresponding to the receiving end; detecting whether the first sending queue is in a first state; if the first sending queue is in the first state, sending at least one sending message currently cached in the first sending queue to a receiving end, and switching the first sending queue to a second state; receiving a reply message of the receiving end aiming at the at least one sending message, clearing the at least one sending message in the first sending queue, and switching the first sending queue to the first state;
the receiving end is used for receiving at least one sending message which is cached currently in a first sending queue and sent by the sending end; processing the at least one sending message in sequence according to the caching sequence of the at least one sending message to obtain one or more reply messages; caching the one or more reply messages into a second sending queue; and sending at least one currently cached reply message in the second sending queue to the sending end.
For the specific operations performed by the sending end and the receiving end, reference may be made to the detailed description above; they are not repeated here.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (32)

1. A network communication method, comprising:
the sending end caches a sending message to be sent to a first sending queue corresponding to the receiving end;
detecting whether the first sending queue is in a first state;
if the first sending queue is in the first state, sending at least one sending message currently cached in the first sending queue to the receiving end, and switching the first sending queue to a second state;
and receiving a reply message of the receiving end aiming at the at least one sending message, clearing the at least one sending message in the first sending queue, and switching the first sending queue to the first state.
2. The method of claim 1, further comprising:
and if no reply message of the receiving end for the at least one sending message is received within a preset time after the at least one sending message cached in the first sending queue is sent to the receiving end, forcibly sending the at least one sending message cached in the first sending queue to the receiving end.
3. The method of claim 1, further comprising:
setting message sequence numbers for the sending messages cached in the first sending queue according to the caching order; wherein the receiving end sequentially processes the at least one sending message according to the caching order indicated by the message sequence numbers of the at least one sending message.
4. The method of claim 3, wherein the setting message sequence numbers for the sending messages cached in the first sending queue according to the caching order comprises:
setting, in the caching order, consecutive message sequence numbers starting from 1 for the sending messages cached in the first sending queue.
5. The method of claim 4, further comprising:
setting a first sending field for the first sending queue; wherein the first sending field is used for storing the maximum message sequence number of the messages currently cached in the first sending queue;
the receiving a reply message of the receiving end for the at least one sending message, clearing the at least one sending message in the first sending queue, and switching the first sending queue to the first state comprises:
receiving at least one reply message and a second receiving field value sent by the receiving end; the second receiving field is used for storing the maximum message sequence number in the sending message sent by the sending end and processed by the receiving end;
if the second receiving field value is the same as the first sending field value, switching the first sending queue to the first state, and clearing the sending messages corresponding to message sequence numbers smaller than or equal to the second receiving field value from the first sending queue.
6. The method of claim 5, further comprising:
and if the second receiving field value is different from the first sending field value, clearing the sending messages corresponding to message sequence numbers smaller than or equal to the second receiving field value from the first sending queue.
7. The method of claim 5, further comprising:
setting a first receiving field for the first sending queue; the first receiving field is used for storing the maximum message sequence number of the reply message sent by the receiving end and received by the sending end; the receiving end maintains a second sending queue for the sending end, and is used for caching a reply message to be sent to the sending end;
and when sending the at least one sending message currently cached in the first sending queue to the receiving end, sending the first receiving field value to the receiving end, so that the receiving end clears the reply messages corresponding to message sequence numbers smaller than or equal to the first receiving field value from the second sending queue.
8. The method of claim 1, wherein the sending end maintains respective corresponding sending queues for different receiving ends.
9. The method of claim 1, wherein the buffering, by the sending end, the to-be-sent transmission message to a first sending queue corresponding to a receiving end comprises:
determining the message type of a sending message to be sent;
determining a first sending queue corresponding to the message type from a plurality of sending queues maintained for the receiving end; the sending end respectively maintains a plurality of first sending queues corresponding to different message types for the same receiving end;
and caching the sending message to be sent to a first sending queue corresponding to the message type.
10. A network communication method, comprising:
a receiving end receives at least one sending message which is currently cached in a first sending queue and sent by a sending end;
processing the at least one sending message in sequence according to the caching sequence of the at least one sending message to obtain one or more reply messages;
caching the one or more reply messages into a second sending queue;
and sending at least one currently cached reply message in the second sending queue to the sending end.
11. The method of claim 10, wherein the sending the at least one reply message currently buffered in the second send queue to the sender comprises:
detecting whether the second sending queue is in a first state; when receiving a sending message of the sending end, switching the second sending queue to the first state;
and if the second sending queue is in the first state, sending at least one reply message currently cached in the second sending queue to the sending end, and switching the second sending queue to a second state.
12. The method of claim 10, further comprising:
and according to the buffering sequence, sequentially setting message sequence numbers for the reply messages buffered to the second sending queue by adopting continuous numbers from the number 1.
13. The method of claim 12, further comprising:
setting a second sending field for the second sending queue; wherein the second sending field is used for storing the maximum message sequence number of the reply messages currently cached in the second sending queue;
the receiving at least one sending message currently cached in a first sending queue sent by a sending end includes:
receiving at least one sending message currently cached in the first sending queue and a first receiving field value sent by the sending end; wherein the first receiving field is used for storing the maximum message sequence number of the reply messages sent by the receiving end and received by the sending end;
and clearing the reply messages corresponding to message sequence numbers smaller than or equal to the first receiving field value from the second sending queue.
14. A network communication method, comprising:
a sending end determines a sending message to be sent;
searching a third sending queue in the first state from the plurality of sending queues;
buffering the sending message to the third sending queue;
sending at least one sending message currently cached in the third sending queue to a receiving end, and switching the third sending queue to a second state;
and receiving a reply message of the receiving end aiming at the at least one sending message, clearing the at least one sending message from the third sending queue, and switching the third sending queue to the first state.
15. The method of claim 14, further comprising:
and if the reply message of the receiving end is not received within a preset time after the at least one sending message is sent to the receiving end, forcibly sending the at least one sending message cached currently in the third sending queue to the receiving end.
16. The method of claim 14, further comprising:
if the plurality of sending queues are all in the second state, selecting any sending queue;
buffering the sending message into any sending queue;
sending at least one sending message cached currently in any sending queue to the receiving end;
and receiving a reply message aiming at the at least one sending message, and switching any one sending queue to the first state.
17. The method of claim 14, further comprising:
and according to the buffering sequence, setting message sequence numbers for all the sending messages buffered into the third sending queue in sequence by adopting continuous numbers from the number 1.
18. The method of claim 17, further comprising:
setting a first sending field for the third sending queue; wherein the first sending field is used for storing the maximum message sequence number of the messages currently cached in the third sending queue;
the receiving a reply message of the receiving end for the at least one sending message, clearing the at least one sending message from the third sending queue, and switching the third sending queue to the first state comprises:
receiving at least one reply message and a second receiving field value sent by the receiving end; the second receiving field is used for storing the maximum message sequence number in the sending message sent by the sending end and processed by the receiving end;
if the second receiving field value is the same as the first sending field value, switching the third sending queue to the first state, and clearing the sending messages corresponding to the message sequence numbers which are smaller than or equal to the second receiving field value in the third sending queue.
19. The method of claim 17, further comprising:
setting a first receiving field for the third sending queue; the first receiving field is used for storing the maximum message sequence number of the reply message sent by the receiving end and received by the sending end; the receiving end maintains a fourth sending queue for the sending end, and is used for caching a reply message to be sent to the sending end;
and when sending the at least one sending message currently cached in the third sending queue to the receiving end, sending the first receiving field value to the receiving end, so that the receiving end clears the reply messages corresponding to message sequence numbers smaller than or equal to the first receiving field value from the fourth sending queue.
20. The method of claim 14, further comprising:
setting an available resource list based on the queue identifications of the plurality of sending queues;
when the plurality of sending queues are in an initial state, sequentially storing queue identifications of the plurality of sending queues in the available resource list;
after the buffering the sending message to the third sending queue and switching the third sending queue to the second state, the method further includes:
removing the queue identification of the third sending queue from the available resource list;
after the switching the third transmit queue to the first state, the method further comprises:
storing the queue identification of the third sending queue in the available resource list.
21. The method of claim 20, wherein the searching a third sending queue in the first state from the plurality of sending queues comprises:
selecting any one queue identification from the available resource list;
and determining the sending queue identified by the selected queue identification as the third sending queue.
22. A network communication method, comprising:
the receiving end receives at least one sending message which is currently cached in a third sending queue and sent by the sending end;
processing the at least one sending message in sequence according to the caching sequence of the at least one sending message to obtain one or more reply messages;
caching the one or more reply messages into a fourth sending queue;
and sending at least one currently cached reply message in the fourth sending queue to the sending end.
23. A network communication apparatus, comprising:
the first cache module is used for caching the sending message to be sent to a first sending queue corresponding to a receiving end;
the first detection module is used for detecting whether the first sending queue is in a first state or not;
a first sending module, configured to send at least one sending message currently cached in the first sending queue to the receiving end if the first sending queue is in the first state, and switch the first sending queue to a second state;
a first processing module, configured to receive a reply message to the at least one sending message from the receiving end, clear the at least one sending message in the first sending queue, and switch the first sending queue to the first state.
24. A network communication apparatus, comprising:
the first receiving module is used for receiving at least one sending message which is cached currently in a first sending queue and sent by a sending end;
the second processing module is used for sequentially processing the at least one sending message according to the caching sequence of the at least one sending message to obtain one or more reply messages;
the second cache module is used for caching the one or more reply messages into a second sending queue;
and the second sending module is used for sending at least one currently cached reply message in the second sending queue to the sending end.
25. A network communication apparatus, comprising:
the message determining module is used for determining a sending message to be sent;
the searching module is used for searching a third sending queue in the first state from the plurality of sending queues;
a third buffer module, configured to buffer the sending message to the third sending queue;
a third sending module, configured to send at least one currently cached sending message in the third sending queue to a receiving end, and switch the third sending queue to a second state;
a third processing module, configured to receive a reply message to the at least one sending message from the receiving end, clear the sending message from the third sending queue, and switch the third sending queue to the first state.
26. A network communication apparatus, comprising:
the second receiving module is used for receiving at least one sending message which is currently cached in a third sending queue and sent by the sending end;
the fourth processing module is used for sequentially processing the at least one sending message according to the caching sequence of the at least one sending message to obtain one or more reply messages;
a fourth caching module, configured to cache the one or more reply messages in a fourth sending queue;
and a fourth sending module, configured to send, to the sending end, at least one currently cached reply message in the fourth sending queue.
27. An electronic device comprising a processing component and a storage component;
the storage component stores one or more computer instructions, and the one or more computer instructions are invoked and executed by the processing component;
the processing component is to:
caching a sending message to be sent into a first sending queue corresponding to a receiving end;
detecting whether the first sending queue is in a first state;
if the first sending queue is in the first state, sending at least one sending message currently cached in the first sending queue to the receiving end, and switching the first sending queue to a second state;
and receiving a reply message of the receiving end aiming at the at least one sending message, clearing the at least one sending message in the first sending queue, and switching the first sending queue to the first state.
28. An electronic device comprising a processing component and a storage component;
the storage component stores one or more computer instructions, and the one or more computer instructions are invoked and executed by the processing component;
the processing component is to:
receiving at least one sending message which is currently cached in a first sending queue and sent by a sending end;
processing the at least one sending message in sequence according to the caching sequence of the at least one sending message to obtain one or more reply messages;
caching the one or more reply messages into a second sending queue;
and sending at least one currently cached reply message in the second sending queue to the sending end.
29. An electronic device comprising a processing component and a storage component;
the storage component stores one or more computer instructions, and the one or more computer instructions are invoked and executed by the processing component;
the processing component is to:
determining a sending message to be sent;
searching a third sending queue in the first state from the plurality of sending queues;
buffering the sending message to the third sending queue;
sending at least one sending message currently cached in the third sending queue to a receiving end, and switching the third sending queue to a second state;
and receiving a reply message of the receiving end aiming at the at least one sending message, clearing the at least one sending message from the third sending queue, and switching the third sending queue to the first state.
30. An electronic device comprising a processing component and a storage component;
the storage component stores one or more computer instructions, and the one or more computer instructions are invoked and executed by the processing component;
the processing component is to:
receiving at least one sending message which is sent by a sending end and is currently cached in a third sending queue;
processing the at least one sending message in sequence according to the caching sequence of the at least one sending message to obtain one or more reply messages;
caching the one or more reply messages into a fourth sending queue;
and sending at least one currently cached reply message in the fourth sending queue to the sending end.
31. A network communication method, comprising:
the sending end caches a sending message to be sent to a first sending queue corresponding to the receiving end;
the sending end detects whether the first sending queue is in a first state;
if the first sending queue is in the first state, the sending end sends at least one sending message currently cached in the first sending queue to the receiving end, and switches the first sending queue to a second state;
the receiving end sequentially processes the at least one sending message according to the caching sequence of the at least one sending message to obtain one or more reply messages;
the receiving end caches the one or more reply messages to a second sending queue;
the receiving end sends at least one reply message cached currently in the second sending queue to the sending end;
and the sending end receives a reply message aiming at the at least one sending message, clears the at least one sending message in the first sending queue and switches the first sending queue to the first state.
32. A network communication system, comprising a sending end and a receiving end;
the sending end is used for caching a sending message to be sent to a first sending queue corresponding to the receiving end; detecting whether the first sending queue is in a first state; if the first sending queue is in the first state, sending at least one sending message currently cached in the first sending queue to a receiving end, and switching the first sending queue to a second state; receiving a reply message of the receiving end aiming at the at least one sending message, clearing the at least one sending message in the first sending queue, and switching the first sending queue to the first state;
the receiving end is used for receiving at least one sending message which is cached currently in a first sending queue and sent by the sending end; processing the at least one sending message in sequence according to the caching sequence of the at least one sending message to obtain one or more reply messages; caching the one or more reply messages into a second sending queue; and sending at least one currently cached reply message in the second sending queue to the sending end.
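The sender/receiver interplay recited in claims 1, 5, and 10 — a send queue that batches messages, transmits only in the first state, switches to the second state while awaiting a reply, and clears acknowledged messages by comparing the receiver's second receiving field value against the first sending field value — can be sketched as follows. This is an illustrative sketch, not code from the patent: the names `READY`/`WAITING`, `first_send_field`, and the synchronous `receive` call are assumptions introduced for clarity.

```python
from collections import deque

# Assumed names for the two queue states ("first state" / "second state").
READY, WAITING = "first_state", "second_state"

class SendQueue:
    """Sender-side queue per claims 1, 4, and 5 (illustrative)."""
    def __init__(self):
        self.state = READY
        self.messages = deque()    # (sequence number, payload) pairs
        self.next_seq = 1          # consecutive sequence numbers from 1 (claim 4)
        self.first_send_field = 0  # max sequence number currently cached (claim 5)

    def cache(self, payload):
        # Buffer a message; sequence numbers follow the caching order.
        self.messages.append((self.next_seq, payload))
        self.first_send_field = self.next_seq
        self.next_seq += 1

    def try_send(self, receiver):
        # Send the cached batch only when the queue is in the first state,
        # then switch to the second state until a reply arrives.
        if self.state == READY and self.messages:
            batch = list(self.messages)
            self.state = WAITING
            ack = receiver.receive(batch)  # second receiving field value
            self.on_reply(ack)

    def on_reply(self, second_recv_field):
        # Clear every cached message whose sequence number is <= the
        # acknowledged maximum; if everything is acknowledged, the
        # queue switches back to the first state (claim 5).
        while self.messages and self.messages[0][0] <= second_recv_field:
            self.messages.popleft()
        if second_recv_field == self.first_send_field:
            self.state = READY

class Receiver:
    """Receiver side per claim 10: process in caching order, return the
    maximum processed sequence number as the second receiving field value."""
    def __init__(self):
        self.processed = []
    def receive(self, batch):
        for seq, payload in sorted(batch):
            self.processed.append(payload)
        return max(seq for seq, _ in batch)
```

A batched-but-unsent message simply waits in the queue; because new messages are cached regardless of state and sent only when the queue returns to the first state, at most one batch is in flight per queue, which is the flow-control effect the claims describe.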
CN201910760236.8A 2019-08-16 2019-08-16 Network communication method and device and electronic equipment Pending CN112398744A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910760236.8A CN112398744A (en) 2019-08-16 2019-08-16 Network communication method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN112398744A (en) 2021-02-23

Family

ID=74602866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910760236.8A Pending CN112398744A (en) 2019-08-16 2019-08-16 Network communication method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112398744A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002013485A2 (en) * 2000-08-04 2002-02-14 Entropia, Inc. System and method of proxying communications in a data network
WO2007003586A1 (en) * 2005-06-30 2007-01-11 Siemens Vdo Automotive Ag A method for managing data of a vehicle traveling data recorder
CN101136900A (en) * 2006-10-16 2008-03-05 中兴通讯股份有限公司 Fast transparent fault shift device and implementing method facing to service
JP2011087068A (en) * 2009-10-14 2011-04-28 Mitsubishi Electric Corp Communication equipment and congestion control method
CN102217258A (en) * 2011-04-12 2011-10-12 华为技术有限公司 Detection processing method, data transmitter, data receiver and communication system
CN103906207A (en) * 2014-03-03 2014-07-02 东南大学 Wireless sensor network data transmission method based on self-adaptation required awakening technology
CN105915633A (en) * 2016-06-02 2016-08-31 北京百度网讯科技有限公司 Automated operational system and method thereof
CN108986792A (en) * 2018-09-11 2018-12-11 苏州思必驰信息科技有限公司 The training dispatching method and system of speech recognition modeling for voice dialogue platform
CN109068394A (en) * 2018-08-24 2018-12-21 中国科学院上海微系统与信息技术研究所 Channel access method based on queue length and collision risk
CN109379386A (en) * 2018-12-13 2019-02-22 广州市百果园信息技术有限公司 A kind of method for message transmission, device, equipment and medium
CN109617945A (en) * 2018-11-02 2019-04-12 北京达佳互联信息技术有限公司 Sending method, sending device, electronic equipment and the readable medium of file transmission



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination