CN115297193A - Multi-concurrent call type communication method under unreliable physical transmission channel - Google Patents

Multi-concurrent call type communication method under unreliable physical transmission channel

Info

Publication number
CN115297193A
CN115297193A
Authority
CN
China
Prior art keywords
data
user
service
queue
data processing
Prior art date
Legal status
Pending
Application number
CN202210918020.1A
Other languages
Chinese (zh)
Inventor
杜若蒙
魏志峰
Current Assignee
Beijing Zuojiang Technology Co ltd
Original Assignee
Beijing Zuojiang Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zuojiang Technology Co., Ltd.
Priority to CN202210918020.1A
Publication of CN115297193A

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 69/00 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/16 - Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]

Abstract

The invention relates to a multi-concurrent call type communication method under an unreliable physical transmission channel, and belongs to the field of cloud computing. The user data processing service serializes data arriving in parallel from multiple users while preserving the independence and integrity of each user's concurrent data stream, so that a single process can support a high-concurrency call service; the device management service provides device virtualization and improves the utilization of the device service; and an optimized reliable transmission protocol achieves high-quality reliable transmission even when the resources of the peer service device are limited. Under the condition that the resources of the device serving the far end of the unreliable physical transmission channel are limited, the method considerably improves the communication quality, the transmission efficiency and the service concurrency of the device service.

Description

Multi-concurrent call type communication method under unreliable physical transmission channel
Technical Field
The invention belongs to the field of cloud computing, and particularly relates to a multi-concurrent call type communication method under an unreliable physical transmission channel.
Background
With the development of cloud computing and big data, the volume of data held by data centers keeps growing, and both structured and unstructured data must be processed by servers and stored on large-capacity storage devices. Cloud computing and big data offer a feasible way to maximize the utilization of resources on physical servers. On the other hand, existing high-performance hardware is expensive, and without a high-performance communication technology its capabilities cannot be fully exploited; dedicated PCI-E data processing cards are one example of such hardware.
Traditional dedicated PCI-E data processing cards mostly provide their services over a dedicated channel based on the PCI-E protocol, which gives them a short development cycle and high stability. This provides the basis for the present invention.
Most dedicated PCI-E data processing cards on the market that can provide a call service have a simple structure and a single function; their service rate is generally only a few to a few dozen megabits per second, they cannot support mass data reading, and they cannot meet the demand for high-concurrency, high-speed dedicated computation generated by large numbers of dedicated service requests.
Disclosure of Invention
(I) Technical problem to be solved
The technical problem to be solved by the invention is how to provide a multi-concurrent call type communication method under an unreliable physical transmission channel, so as to solve the problems that traditional dedicated PCI-E data processing cards are slow, cannot support mass data reading, and cannot satisfy the high-concurrency, high-speed dedicated computation required by large numbers of dedicated service requests.
(II) technical scheme
In order to solve the above technical problem, the present invention provides a multi-concurrent call type communication method under an unreliable physical transmission channel, which comprises the following steps:
the user data processing service receives the request data sent by the user entity, records the user entity information and the I/O port information, processes the request data accordingly, and sends the processed request data to the downlink data processing component;
the downlink data processing component receives the request data, decapsulates and re-encapsulates it according to the requirements of the service entity, organizes the metadata in the request data into a request data packet that the service entity can identify, and delivers the processed request data packet to the device data processing service;
the device data processing service receives the request data packet, sends it to the service entity over the reliable transmission protocol between the device data processing service and the service entity, asynchronously receives the response data packet of the service entity, and, upon receiving the response data packet, notifies the uplink data processing component and passes the response data packet to it;
the uplink data processing component receives the notification, receives the response data packet, parses it, packages the metadata in the response data into response data that the user entity can parse, and delivers the response data to the user data processing service;
the user data processing service receives the response data, locates the matching I/O port by looking up the user information, and asynchronously sends the response data to the user;
the TCP server and the user API calls are the user entities, and the PCI-E data processing device is the service entity.
Further, the service establishment procedure of the user data processing service includes the following steps:
step S21, initializing the communication service and creating the TCP listening service, and creating the following data structures:
creating and initializing a request data queue sort_q_s, used to serialize multi-user parallel data;
creating and initializing a sliding window queue wind_q_s, used for reliable transmission from the user data processing service to the physical device;
creating and initializing a client queue clients_s, used to manage users and user data;
step S22, starting the communication service and waiting to receive a client request, namely a user's service call request;
step S23, if there is a user service call request, creating a user connection information structure, which comprises the following information:
the handle for communicating with the user, used for receiving the user's call request data and for sending response data back to the user;
the user request data ID, used to uniquely identify the connection established by the user request;
the user call service request packet queue req_q_cli, used for storing valid user call request data;
the response message pending-send queue send_scheduling_cli;
the response message sending queue send_do_cli;
mounting the created user connection information structure into the client queue clients_s created by the communication service; if there is no user service call request, executing step S22;
step S24: receiving user call request data, and executing the subsequent flow.
Further, the service processing flow of the device data processing service includes the steps of:
step S31: initializing the device information, and creating a Device information structure according to the actual situation of the PCI-E data processing device, wherein the Device information structure comprises the following information:
the device unique identifier cid_dev, used by the user data processing service to identify a specific device;
the device service channel read-write handle fd_dev, initialized for communication between the device data processing service and the physical device service;
a synchronization mechanism write_cond, used to obtain notifications from the user data processing service;
a user call request data sending queue write_q_dev, used to obtain the user request data output by the user data processing service;
a synchronization mechanism write_lock, used to synchronize the user data processing service and the device data processing service when operating on the user call request data sending queue;
a user call response data receiving queue read_q_dev, used to obtain the response data of the physical device service;
a synchronization mechanism resp_lock, used to synchronize the user data processing service and the device data processing service when operating on the user call response data receiving queue;
step S32: creating independent read and write threads, creating the device service channel read-write handle, and assigning it to the device information;
step S33: starting the read and write threads, which respectively execute the data receiving and sending flows of the user call service downlink data processing and the user call service uplink data processing;
step S34: notifying the user data processing service and returning to step S31.
Further, the write thread processing flow steps are as follows:
step S3311: the write thread waits for a notification on write_cond;
step S3312: the write thread receives the write_cond notification;
step S3313: acquiring write_lock, caching all req_node entries in the user call request data sending queue, and releasing write_lock;
step S3314: sending the cached req_node entries to the device service channel; returning to step S3311.
Further, the read thread processing flow comprises the following steps:
step S3321: the read thread waits for the device service channel to return data;
step S3322: the device service channel returns service response data, and a resp_node structure is created;
step S3323: acquiring resp_lock, and enqueuing the resp_node into the user call response data receiving queue;
step S34: notifying the user data processing service and returning to step S31.
Further, the reliable transmission protocol under the unreliable physical transmission channel is defined as follows:
initialization at the A end:
initializing the packet marker of the sending buffer;
starting the interval timer;
the A end sends data:
adding the message to the temporary buffer;
if the current sliding window buf is full, doing nothing;
obtaining a message from the temporary buffer;
constructing a packet from the message, placing it in the sliding window buf, and sending it to the B end;
setting the current packet timeout time to the current time plus TIMEOUT;
the B end receives data:
receiving a data packet, and discarding it if it is damaged;
checking, according to the packet sequence number, whether a response packet already exists at the corresponding cache position; if so, resending the ACK; otherwise handing the packet to service processing, sending an ACK, and caching the ACK at the position designated by the request packet sequence number;
the A end receives data:
receiving a data packet, and discarding it if it is damaged;
if the acknowledgement sequence number of the ACK is not the expected one, doing nothing;
checking whether the packet at the earliest position of the sliding window buf has been acknowledged; if so, scanning all cached packets and handing those with consecutive packet sequence numbers to service processing, sliding the window at the same time;
checking whether the temporary buffer still holds messages, and if so, continuing to invoke the A-end send procedure;
timeout processing at the A end:
maintaining a logic clock;
scanning all packets in the sliding window buf; if any packet has timed out, retransmitting it and recalculating the logical time of its next timeout.
Further, the downlink data processing component and the uplink data processing component together implement the reliable transmission protocol.
Further, the flow of the downlink data processing component comprises the following steps:
step S41: receiving the request data sent by the user entity, creating a downlink data request packet req_node, and enqueuing it in the user request data queue;
step S42: at the same time, placing the downlink data request packet in the queuing queue sort_q, so that the queuing queue and the user request data queue are associated with the same request packet req_node;
step S43: checking whether the sliding window wind_q has free space; if so, continuing; if not, executing step S47;
step S44: dequeuing request data from the sort_q queue and enqueuing it in the wind_q queue, and at the same time applying a timeout timer retry_tm for the request data packet;
step S45: acquiring write_lock, enqueuing the req_node into the sending queue, and releasing write_lock;
step S46: repeating step S43, step S44 and step S45 until the window queue wind_q is full or the sort_q queue is empty;
step S47: activating the device data processing service write thread through write_cond.
Further, the flow of the uplink data processing component includes the following steps:
step S51: the user data processing service receives the notification of the device data processing service read thread and activates the flow;
step S52: acquiring the resp_queue lock;
step S53: caching all resp_node data in the user call response data receiving queue into the resp_swap queue;
step S54: releasing the resp_queue lock;
step S55: traversing the resp_node data in the resp_swap queue; if matching request data is found in the wind_q queue, stopping the request data's retry timer and executing step S56; if there is no match, discarding the resp_node;
step S56: if the matching request data is the head node of the wind_q queue, sending the matching resp_node data to the user;
step S57: repeating steps S55 and S56 until the resp_swap queue is empty.
Further, the service entity is a high-speed cryptographic card in a server cryptographic machine device.
(III) advantageous effects
The invention provides a multi-concurrent call type communication method under an unreliable physical transmission channel. When the device resources are limited and a complex communication protocol cannot be implemented, the method relies on software to achieve reliable data transmission over the unreliable transmission channel, and on this basis provides a call-type, concurrent communication service.
The communication method has high adaptability and depends on few high-speed hardware device resources; this is embodied in the reliable transmission protocol of the invention, in which the high-speed hardware device only needs to provide a small packet cache and a passive retransmission mechanism.
The communication method supports high concurrency and provides a high-concurrency communication service within a single server.
Drawings
FIG. 1 is a main flow of the present invention;
FIG. 2 is a flow chart of a user invocation request data processing service of the present invention;
FIG. 3 is a flow diagram of the device data processing service of the present invention;
FIG. 4 is a flow chart of downlink data processing according to the present invention;
FIG. 5 is a flow chart of the uplink data processing of the present invention.
Detailed Description
In order to make the objects, contents and advantages of the present invention clearer, the following detailed description of the embodiments of the present invention will be made in conjunction with the accompanying drawings and examples.
The invention designs a multi-concurrent call type communication method under an unreliable physical transmission channel; the method enables multiple users to concurrently call the device service at the far end of the unreliable physical transmission channel. The user data processing service serializes the parallel data of multiple users while guaranteeing the independence and integrity of every user's concurrent data stream, so that a single process can support a high-concurrency call service; the device management service provides device virtualization and improves the utilization of the device service; and the optimized reliable transmission protocol achieves high-quality reliable transmission even though the resources of the peer service device are limited.
Under the condition that the resources of the device serving the far end of the unreliable physical transmission channel are limited, the method considerably improves the communication quality, the transmission efficiency and the service concurrency of the device service.
The multi-concurrent call type communication method under an unreliable physical transmission channel is mainly applied to communication systems without the protection of a reliable transmission protocol; a typical application is the communication between a user and a high-speed cryptographic card in a server cryptographic machine device, the high-speed cryptographic card usually being a PCI-E data processing card. The communication method is described below taking a PCI-E data processing card as an example, as shown in FIG. 1.
For the application scenario shown in FIG. 1, the method designs a user data processing service, an uplink data processing component, a device data processing service, a downlink data processing component and a device management component, which together meet the high-speed data processing requirements of the server cryptographic machine. The invention only describes the data processing method itself; the communication between the TCP server and the user API and the access mode of the PCI-E device shown in FIG. 1 are outside the scheme of the invention. For convenience of description, the TCP server, the user API calls and similar parts are abstracted as user entities, the PCI-E data processing device is the service entity, and multiple concurrent calls can therefore be described as concurrent communication between multiple user entities and the user data processing service; this is to be understood wherever no special description is given.
The overall application data processing flow of the PCI-E data processing card comprises five parts:
the user data processing service receives the request data sent by the user entity, records the user entity information and the I/O port (socket) information, processes the request data accordingly, and sends the processed request data to the downlink data processing component;
the downlink data processing component receives the request data, decapsulates and re-encapsulates it according to the requirements of the service entity, organizes the metadata in the request data into a request data packet that the service entity can identify, and delivers the processed request data packet to the device data processing service;
the device data processing service receives the request data packet, sends it to the service entity over the reliable transmission protocol between the device data processing service and the service entity, asynchronously receives the response data packet of the service entity, and, upon receiving the response data packet, notifies the uplink data processing component and passes the response data packet to it;
the uplink data processing component receives the notification, receives the response data packet, parses it, packages the metadata in the response data into response data that the user entity can parse, and delivers the response data to the user data processing service;
the user data processing service receives the response data, locates the matching I/O port (socket) by looking up the user information, and asynchronously sends the response data to the user.
In the above flow the method of the present invention requires the user entity to provide the necessary communication interface and communication-related information. The five parts of the PCI-E data processing card application processing flow are described separately below.
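For concreteness, the following is a minimal C sketch of the request and response nodes (req_node and resp_node) that travel through these five parts; apart from the sequence number, the user request data ID, the I/O port and the retry_tm deadline, which are named in the text, every field and size here is an assumption rather than part of the patent.

    /* Minimal sketch of the packet nodes exchanged between the components.
       Field layout and sizes are assumptions, not taken from the patent.   */
    #include <stdint.h>
    #include <stddef.h>

    #define MAX_PAYLOAD 4096          /* assumed cap on one call's metadata */

    typedef struct req_node {
        uint32_t seq;                 /* packet sequence number used by the
                                         sliding-window protocol            */
        uint32_t user_id;             /* user request data ID, maps a reply
                                         back to the originating connection */
        int      client_fd;           /* I/O port (socket) of the user      */
        uint64_t timeout_at;          /* retry_tm deadline once the packet
                                         sits in the sliding window wind_q  */
        size_t   len;
        uint8_t  payload[MAX_PAYLOAD];/* metadata repacked for the service
                                         entity (PCI-E data processing card)*/
        struct req_node *next;        /* intrusive link for sort_q / wind_q */
    } req_node;

    typedef struct resp_node {
        uint32_t seq;                 /* echoes the request sequence number */
        uint32_t user_id;
        size_t   len;
        uint8_t  payload[MAX_PAYLOAD];/* response metadata for the user     */
        struct resp_node *next;       /* intrusive link for read_q_dev and
                                         the resp_swap queue                */
    } resp_node;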
The main flow of the user data processing service is shown in FIG. 2; the service establishment flow includes the following steps:
step S21, initializing the communication service and creating the TCP listening service, and creating the following data structures:
creating and initializing a request data queue sort_q_s, used to serialize multi-user parallel data;
creating and initializing a sliding window queue wind_q_s, used for reliable transmission from the user data processing service to the physical device;
creating and initializing a client queue clients_s, used to manage users and user data.
Step S22, starting the communication service and waiting to receive a client request, namely a user's service call request.
Step S23, if there is a user service call request, creating a user connection information structure, which comprises the following information:
the handle for communicating with the user, used for receiving the user's call request data and for sending response data back to the user;
the user request data ID, used to uniquely identify the connection established by the user request;
the user call service request packet queue req_q_cli, used for storing valid user call request data;
the response message pending-send queue send_scheduling_cli;
the response message sending queue send_do_cli;
mounting the created user connection information structure into the client queue clients_s created by the communication service. If there is no user service call request, step S22 is performed.
Step S24: receiving user call request data and executing the subsequent flow.
The device data processing service is responsible for handling the PCI-E channel communication. The main flow of the service processing is shown in FIG. 3; service establishment includes write thread processing and read thread processing.
The service establishment steps are as follows:
step S31: initializing the device information, and creating a Device information structure according to the actual situation of the PCI-E data processing device, wherein the Device information structure comprises the following information:
the device unique identifier cid_dev, used by the user data processing service to identify a specific device;
the device service channel read-write handle fd_dev, initialized for communication between the device data processing service and the physical device service;
a synchronization mechanism write_cond, such as a condition variable, used to obtain notifications from the user data processing service;
a user call request data sending queue write_q_dev, used to obtain the user request data output by the user data processing service;
a synchronization mechanism write_lock, such as a mutex, used to synchronize the user data processing service and the device data processing service when operating on the user call request data sending queue;
a user call response data receiving queue read_q_dev, used to obtain the response data of the physical device service;
a synchronization mechanism resp_lock, such as a mutex, used to synchronize the user data processing service and the device data processing service when operating on the user call response data receiving queue;
step S32: creating independent read and write threads, creating the device service channel read-write handle, and assigning it to the device information;
step S33: starting the read and write threads, which respectively execute the data receiving and sending flows of the user call service downlink data processing and the user call service uplink data processing.
The write thread processing steps are as follows:
step S3311: the write thread waits for a notification on write_cond.
Step S3312: the write thread receives the write_cond notification.
Step S3313: acquire write_lock, cache all req_node entries in the user call request data sending queue, and release write_lock.
Step S3314: send the cached req_node entries to the device service channel, then return to step S3311.
The read thread processing flow is as follows:
step S3321: the read thread waits for the device service channel to return data.
Step S3322: the device service channel returns service response data, and a resp_node structure is created.
Step S3323: acquire resp_lock and enqueue the resp_node into the user call response data receiving queue.
Step S34: notify the user data processing service and return to step S31.
The uplink data processing component mainly implements the reliable transmission protocol under the unreliable physical transmission channel. The protocol places few demands on the physical device resources, and the part of the protocol that the device must implement is simple. The uplink data processing component is referred to as A, the device service entity as B, and the reliable transmission protocol under the unreliable physical transmission channel is defined as follows:
initialization at the A end:
initializing the packet marker of the sending buffer;
starting the interval timer;
the A end sends data:
adding the message to the temporary buffer;
if the current sliding window buf is full, doing nothing;
obtaining a message from the temporary buffer;
constructing a packet from the message, placing it in the sliding window buf, and sending it to the B end;
setting the current packet timeout time to the current time plus TIMEOUT;
the B end receives data:
receiving a data packet, and discarding it if it is damaged;
checking, according to the packet sequence number, whether a response packet already exists at the corresponding cache position; if so, resending the ACK; otherwise handing the packet to service processing, sending an ACK, and caching the ACK at the position designated by the request packet sequence number;
the A end receives data:
receiving a data packet, and discarding it if it is damaged;
if the acknowledgement sequence number of the ACK is not the expected one (either not within the sliding window, or a duplicate ACK), doing nothing;
checking whether the packet at the earliest position of the sliding window buf has been acknowledged; if so, scanning all cached packets and handing those with consecutive packet sequence numbers to service processing, sliding the window at the same time;
checking whether the temporary buffer still holds messages, and if so, continuing to invoke the A-end send procedure;
timeout processing at the A end:
maintaining a logic clock;
scanning all packets in the sliding window buf; if any packet has timed out, retransmitting it and recalculating the logical time of its next timeout.
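To make the A-end behaviour concrete, here is a compact C sketch of the sliding-window logic described above; the window size, the TIMEOUT value, the packet layout and the channel_send()/deliver_to_service() hooks are all assumptions, and the logical clock is advanced by an external interval timer calling a_on_tick().

    /* Sketch of the A end of the reliable transmission protocol: bounded
       sliding window, ACK handling and timeout retransmission.  Window
       size, TIMEOUT and the two channel hooks are assumed values/names.     */
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>
    #include <stdbool.h>

    #define WIN_SIZE 8
    #define TIMEOUT  50                /* logical-clock ticks, assumed       */

    typedef struct {
        bool     in_use, acked;
        uint32_t seq;
        uint64_t timeout_at;           /* current time + TIMEOUT when sent   */
        size_t   len;
        uint8_t  data[256];
    } slot;

    static slot     win[WIN_SIZE];     /* the sliding window buf             */
    static uint32_t next_seq;          /* send-buffer packet marker          */
    static uint64_t clock_now;         /* logical clock                      */

    void channel_send(uint32_t seq, const uint8_t *pkt, size_t len); /* assumed */
    void deliver_to_service(const uint8_t *pkt, size_t len);         /* assumed */

    /* A end sends data; returns false when the window is full ("do nothing",
       the message stays in the caller's temporary buffer).                  */
    bool a_send(const uint8_t *msg, size_t len)
    {
        slot *s = &win[next_seq % WIN_SIZE];
        if (s->in_use || len > sizeof s->data)
            return false;
        s->in_use = true;  s->acked = false;
        s->seq = next_seq++;
        s->len = len;  memcpy(s->data, msg, len);
        s->timeout_at = clock_now + TIMEOUT;
        channel_send(s->seq, s->data, s->len);
        return true;
    }

    /* A end receives an ACK; ignore it if it is not the expected one
       (outside the window or a duplicate), otherwise slide the window.      */
    void a_on_ack(uint32_t ack_seq)
    {
        slot *s = &win[ack_seq % WIN_SIZE];
        if (!s->in_use || s->seq != ack_seq)
            return;
        s->acked = true;
        for (;;) {                       /* release acked packets in order   */
            uint32_t oldest = next_seq;
            for (int i = 0; i < WIN_SIZE; i++)
                if (win[i].in_use && win[i].seq < oldest)
                    oldest = win[i].seq;
            slot *o = &win[oldest % WIN_SIZE];
            if (oldest == next_seq || !o->acked)
                break;
            deliver_to_service(o->data, o->len);   /* hand to service code   */
            o->in_use = false;                     /* slide the window       */
        }
    }

    /* A end timeout processing, driven by the interval timer.               */
    void a_on_tick(void)
    {
        clock_now++;
        for (int i = 0; i < WIN_SIZE; i++)
            if (win[i].in_use && !win[i].acked && clock_now >= win[i].timeout_at) {
                channel_send(win[i].seq, win[i].data, win[i].len); /* resend  */
                win[i].timeout_at = clock_now + TIMEOUT;
            }
    }

The B end only needs the small ACK cache and passive retransmission described above, which is why the protocol suits devices with very limited resources.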
The downlink data processing component and the uplink data processing component together implement the reliable transmission protocol. The main flow of the downlink data processing component is shown in FIG. 4 and comprises the following steps:
step S41: receive the request data sent by the user entity, create a downlink data request packet req_node, and enqueue it in the user request data queue;
step S42: at the same time, place the downlink data request packet in the queuing queue sort_q, so that the queuing queue and the user request data queue are associated with the same request packet req_node;
step S43: check whether the sliding window wind_q has free space; if so, continue; if not, execute step S47;
step S44: dequeue request data from the sort_q queue and enqueue it in the wind_q queue, and at the same time apply a timeout timer retry_tm for the request data packet;
step S45: acquire write_lock, enqueue the req_node into the sending queue, and release write_lock;
step S46: repeat step S43, step S44 and step S45 until the window queue wind_q is full or the sort_q queue is empty;
step S47: activate the device data processing service write thread through write_cond.
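A sketch of this downlink flow in C, assuming the req_node and Device structures above; sort_q, wind_q and the queue helpers are given minimal assumed signatures, and RETRY_TM stands in for whatever value the retry_tm timer actually uses.

    /* Sketch of the downlink data processing component (steps S41-S47).    */
    #include <pthread.h>
    #include <stdint.h>

    #define RETRY_TM 50                  /* assumed retry_tm interval        */

    typedef struct { req_node *head, *tail; int len, cap; } rnode_queue;

    /* Assumed FIFO helpers over req_node lists. */
    void      rq_push(rnode_queue *q, req_node *n);
    req_node *rq_pop(rnode_queue *q);
    int       rq_empty(const rnode_queue *q);
    int       rq_full(const rnode_queue *q);          /* wind_q capacity     */
    void      devq_push(node_queue *q, req_node *n);  /* assumed helper      */

    static rnode_queue sort_q;           /* queuing queue (step S42)         */
    static rnode_queue wind_q;           /* sliding window queue             */

    void downlink_process(Device *d, req_node *req, uint64_t now)
    {
        /* S41/S42: req is already linked into the user request data queue;
           here the same req_node also enters the queuing queue sort_q.      */
        rq_push(&sort_q, req);

        /* S43-S46: while wind_q has free space and sort_q is not empty,
           move packets into the window, arm retry_tm, and hand them to the
           device data processing service under write_lock.                  */
        while (!rq_full(&wind_q) && !rq_empty(&sort_q)) {
            req_node *n = rq_pop(&sort_q);
            n->timeout_at = now + RETRY_TM;        /* S44: apply retry_tm     */
            rq_push(&wind_q, n);
            pthread_mutex_lock(&d->write_lock);    /* S45                     */
            devq_push(&d->write_q_dev, n);
            pthread_mutex_unlock(&d->write_lock);
        }
        /* S47: activate the device data processing service write thread.    */
        pthread_cond_signal(&d->write_cond);
    }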
The main flow of the uplink data processing component is shown in FIG. 5 and comprises the following steps:
step S51: the user data processing service receives the notification of the device data processing service read thread and activates the flow;
step S52: acquire the resp_queue lock;
step S53: cache all resp_node data in the user call response data receiving queue into the resp_swap queue;
step S54: release the resp_queue lock;
step S55: traverse the resp_node data in the resp_swap queue; if matching request data is found in the wind_q queue, stop the request data's retry timer and execute step S56; if there is no match, discard the resp_node;
step S56: if the matching request data is the head node of the wind_q queue, send the matching resp_node data to the user;
step S57: repeat steps S55 and S56 until the resp_swap queue is empty.
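The uplink flow can be sketched the same way, again assuming the structures above; wind_q_find(), wind_q_is_head(), cancel_retry_timer() and send_to_user() are assumed helpers, and the handling of out-of-order responses (which the text does not spell out beyond step S56) is simplified.

    /* Sketch of the uplink data processing component (steps S51-S57).      */
    #include <pthread.h>
    #include <stdint.h>
    #include <stdlib.h>

    req_node *wind_q_find(uint32_t seq);          /* assumed wind_q lookup   */
    int       wind_q_is_head(const req_node *n);  /* assumed head check      */
    void      cancel_retry_timer(req_node *n);    /* assumed: stop retry_tm  */
    void      send_to_user(int sock_fd, const resp_node *r); /* assumed      */

    void uplink_process(Device *d)
    {
        /* S52-S54: hold resp_lock (the resp_queue lock) only long enough to
           move every resp_node from read_q_dev into a private resp_swap.    */
        pthread_mutex_lock(&d->resp_lock);
        resp_node *resp_swap = (resp_node *)d->read_q_dev.head;
        d->read_q_dev.head = d->read_q_dev.tail = NULL;
        pthread_mutex_unlock(&d->resp_lock);

        /* S55-S57: walk resp_swap until it is empty. */
        while (resp_swap != NULL) {
            resp_node *r = resp_swap;
            resp_swap = r->next;
            req_node *req = wind_q_find(r->seq);
            if (req == NULL) {                    /* no match: discard       */
                free(r);
                continue;
            }
            cancel_retry_timer(req);              /* stop the retry_tm timer */
            if (wind_q_is_head(req))              /* S56: in-order delivery  */
                send_to_user(req->client_fd, r);
            free(r);                              /* out-of-order responses
                                                     would be parked rather
                                                     than freed in practice  */
        }
    }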
The invention thus provides a method that, when the device resources are limited and a complex communication protocol cannot be implemented, relies on software to achieve reliable data transmission over an unreliable transmission channel, and on this basis provides a call-type communication service that supports concurrency.
The communication method has high adaptability and depends on few high-speed hardware device resources; this is embodied in the reliable transmission protocol of the invention, in which the high-speed hardware device only needs to provide a small packet cache and a passive retransmission mechanism.
The communication method supports high concurrency and provides a high-concurrency communication service within a single server.
The above description is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the technical principle of the present invention, and such improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A multi-concurrent call type communication method under an unreliable physical transmission channel, characterized by comprising the following steps:
the user data processing service receives the request data sent by the user entity, records the user entity information and the I/O port information, processes the request data accordingly, and sends the processed request data to the downlink data processing component;
the downlink data processing component receives the request data, decapsulates and re-encapsulates it according to the requirements of the service entity, organizes the metadata in the request data into a request data packet that the service entity can identify, and delivers the processed request data packet to the device data processing service;
the device data processing service receives the request data packet, sends it to the service entity over the reliable transmission protocol between the device data processing service and the service entity, asynchronously receives the response data packet of the service entity, and, upon receiving the response data packet, notifies the uplink data processing component and passes the response data packet to it;
the uplink data processing component receives the notification, receives the response data packet, parses it, packages the metadata in the response data into response data that the user entity can parse, and delivers the response data to the user data processing service;
the user data processing service receives the response data, locates the matching I/O port by looking up the user information, and asynchronously sends the response data to the user;
the TCP server and the user API calls are the user entities, and the PCI-E data processing device is the service entity.
2. The multi-concurrent call type communication method under the unreliable physical transmission channel according to claim 1, wherein the service establishment procedure of the user data processing service comprises the steps of:
step S21, initializing the communication service and creating the TCP listening service, and creating the following data structures:
creating and initializing a request data queue sort_q_s, used to serialize multi-user parallel data;
creating and initializing a sliding window queue wind_q_s, used for reliable transmission from the user data processing service to the physical device;
creating and initializing a client queue clients_s, used to manage users and user data;
step S22, starting the communication service and waiting to receive a client request, namely a user's service call request;
step S23, if there is a user service call request, creating a user connection information structure, which comprises the following information:
the handle for communicating with the user, used for receiving the user's call request data and for sending response data back to the user;
the user request data ID, used to uniquely identify the connection established by the user request;
the user call service request packet queue req_q_cli, used for storing valid user call request data;
the response message pending-send queue send_scheduling_cli;
the response message sending queue send_do_cli;
mounting the created user connection information structure into the client queue clients_s created by the communication service; if there is no user service call request, executing step S22;
step S24: receiving user call request data, and executing the subsequent flow.
3. The multi-concurrent call type communication method under the unreliable physical transmission channel according to claim 2, wherein the service processing flow of the device data processing service comprises the steps of:
step S31: initializing the device information, and creating a Device information structure according to the actual situation of the PCI-E data processing device, wherein the Device information structure comprises the following information:
the device unique identifier cid_dev, used by the user data processing service to identify a specific device;
the device service channel read-write handle fd_dev, initialized for communication between the device data processing service and the physical device service;
a synchronization mechanism write_cond, used to obtain notifications from the user data processing service;
a user call request data sending queue write_q_dev, used to obtain the user request data output by the user data processing service;
a synchronization mechanism write_lock, used to synchronize the user data processing service and the device data processing service when operating on the user call request data sending queue;
a user call response data receiving queue read_q_dev, used to obtain the response data of the physical device service;
a synchronization mechanism resp_lock, used to synchronize the user data processing service and the device data processing service when operating on the user call response data receiving queue;
step S32: creating independent read and write threads, creating the device service channel read-write handle, and assigning it to the device information;
step S33: starting the read and write threads, which respectively execute the data receiving and sending flows of the user call service downlink data processing and the user call service uplink data processing;
step S34: notifying the user data processing service and returning to step S31.
4. The multi-concurrent call type communication method under the unreliable physical transmission channel according to claim 3, wherein the write thread processing flow steps are as follows:
step S3311: the write thread waits for a notification on write_cond;
step S3312: the write thread receives the write_cond notification;
step S3313: acquiring write_lock, caching all req_node entries in the user call request data sending queue, and releasing write_lock;
step S3314: sending the cached req_node entries to the device service channel; returning to step S3311.
5. The multi-concurrent call type communication method under the unreliable physical transmission channel according to claim 3, wherein the read thread processing flow comprises the following steps:
step S3321: the read thread waits for the device service channel to return data;
step S3322: the device service channel returns service response data, and a resp_node structure is created;
step S3323: acquiring resp_lock, and enqueuing the resp_node into the user call response data receiving queue;
step S34: notifying the user data processing service and returning to step S31.
6. The multi-concurrent call type communication method under an unreliable physical transmission channel according to any one of claims 1-5, wherein the reliable transmission protocol under the unreliable physical transmission channel is defined as follows:
initialization at the A end:
initializing the packet marker of the sending buffer;
starting the interval timer;
the A end sends data:
adding the message to the temporary buffer;
if the current sliding window buf is full, doing nothing;
obtaining a message from the temporary buffer;
constructing a packet from the message, placing it in the sliding window buf, and sending it to the B end;
setting the current packet timeout time to the current time plus TIMEOUT;
the B end receives data:
receiving a data packet, and discarding it if it is damaged;
checking, according to the packet sequence number, whether a response packet already exists at the corresponding cache position; if so, resending the ACK; otherwise handing the packet to service processing, sending an ACK, and caching the ACK at the position designated by the request packet sequence number;
the A end receives data:
receiving a data packet, and discarding it if it is damaged;
if the acknowledgement sequence number of the ACK is not the expected one, doing nothing;
checking whether the packet at the earliest position of the sliding window buf has been acknowledged; if so, scanning all cached packets and handing those with consecutive packet sequence numbers to service processing, sliding the window at the same time;
checking whether the temporary buffer still holds messages, and if so, continuing to invoke the A-end send procedure;
timeout processing at the A end:
maintaining a logic clock;
scanning all packets in the sliding window buf; if any packet has timed out, retransmitting it and recalculating the logical time of its next timeout.
7. The method of claim 6, wherein the downlink data processing component and the uplink data processing component together implement the reliable transmission protocol.
8. The method of claim 6, wherein the flow of the downlink data processing component comprises the following steps:
step S41: receiving the request data sent by the user entity, creating a downlink data request packet req_node, and enqueuing it in the user request data queue;
step S42: at the same time, placing the downlink data request packet in the queuing queue sort_q, so that the queuing queue and the user request data queue are associated with the same request packet req_node;
step S43: checking whether the sliding window wind_q has free space; if so, continuing; if not, executing step S47;
step S44: dequeuing request data from the sort_q queue and enqueuing it in the wind_q queue, and at the same time applying a timeout timer retry_tm for the request data packet;
step S45: acquiring write_lock, enqueuing the req_node into the sending queue, and releasing write_lock;
step S46: repeating step S43, step S44 and step S45 until the window queue wind_q is full or the sort_q queue is empty;
step S47: activating the device data processing service write thread through write_cond.
9. The method of claim 6, wherein the flow of the uplink data processing component comprises the following steps:
step S51: the user data processing service receives the notification of the device data processing service read thread and activates the flow;
step S52: acquiring the resp_queue lock;
step S53: caching all resp_node data in the user call response data receiving queue into the resp_swap queue;
step S54: releasing the resp_queue lock;
step S55: traversing the resp_node data in the resp_swap queue; if matching request data is found in the wind_q queue, stopping the request data's retry timer and executing step S56; if there is no match, discarding the resp_node;
step S56: if the matching request data is the head node of the wind_q queue, sending the matching resp_node data to the user;
step S57: repeating steps S55 and S56 until the resp_swap queue is empty.
10. The method of claim 1, wherein the service entity is a high-speed cryptographic card in a server cryptographic machine device.
CN202210918020.1A 2022-08-01 2022-08-01 Multi-concurrent call type communication method under unreliable physical transmission channel Pending CN115297193A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210918020.1A CN115297193A (en) 2022-08-01 2022-08-01 Multi-concurrent call type communication method under unreliable physical transmission channel


Publications (1)

Publication Number Publication Date
CN115297193A true CN115297193A (en) 2022-11-04

Family

ID=83825961

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210918020.1A Pending CN115297193A (en) 2022-08-01 2022-08-01 Multi-concurrent call type communication method under unreliable physical transmission channel

Country Status (1)

Country Link
CN (1) CN115297193A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117651233A (en) * 2024-01-29 2024-03-05 祥源智联(南京)科技有限公司 Low-delay uploading method for data of bus communication sensor
CN117651233B (en) * 2024-01-29 2024-04-05 祥源智联(南京)科技有限公司 Low-delay uploading method for data of bus communication sensor

Similar Documents

Publication Publication Date Title
CN110177118B (en) RDMA-based RPC communication method
Cheriton et al. VMTP as the transport layer for high-performance distributed systems
EP1581875B1 (en) Using direct memory access for performing database operations between two or more machines
US8935336B2 (en) Optimizing program requests over a wide area network
US8131881B2 (en) Completion coalescing by TCP receiver
US6941379B1 (en) Congestion avoidance for threads in servers
CN101459611B (en) Data transmission scheduling method, system and device for IP SAN storage
CN109547162B (en) Data communication method based on two sets of one-way boundaries
US20040054796A1 (en) Load balancer
Buonadonna et al. Queue pair IP: a hybrid architecture for system area networks
CN112631788B (en) Data transmission method and data transmission server
TW200814672A (en) Method and system for a user space TCP offload engine (TOE)
WO2024037296A1 (en) Protocol family-based quic data transmission method and device
CN104735077A (en) Method for realizing efficient user datagram protocol (UDP) concurrence through loop buffers and loop queue
CN115297193A (en) Multi-concurrent call type communication method under unreliable physical transmission channel
WO2004040819A2 (en) An apparatus and method for receive transport protocol termination
US20090292825A1 (en) Method and apparatus for in-kernel application-specific processing of content streams
CN111522663B (en) Data transmission method, device and system based on distributed storage system
WO2017032152A1 (en) Method for writing data into storage device and storage device
US20040039774A1 (en) Inter-process messaging using multiple client-server pairs
CN111131081A (en) Method and device for supporting multi-process high-performance unidirectional transmission
US20080040494A1 (en) Partitioning a Transmission Control Protocol (TCP) Control Block (TCB)
US7672239B1 (en) System and method for conducting fast offloading of a connection onto a network interface card
US20060282537A1 (en) System and method of responding to a full TCP queue
CN114371935A (en) Gateway processing method, gateway, device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination