CN110247942B - Data sending method, device and readable medium - Google Patents


Info

Publication number
CN110247942B
Authority
CN
China
Prior art keywords
data
packet
subscriber
data packet
sent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810194815.6A
Other languages
Chinese (zh)
Other versions
CN110247942A (en)
Inventor
汪胜蕾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority claimed from CN201810194815.6A
Publication of CN110247942A
Application granted
Publication of CN110247942B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/50: Queue scheduling
    • H04L 47/56: Queue scheduling implementing delay-aware scheduling
    • H04L 47/62: Queue scheduling characterised by scheduling criteria
    • H04L 47/622: Queue service order
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/1095: Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a data sending method, a device, and a readable medium in the field of data processing. After acquiring a data packet to be sent, a data source places the packet's version number at the tail of a version number cache queue. When the number of version numbers falling into the packet sending window of that queue reaches a packet-splicing threshold, the corresponding number of data packets to be sent are merged into one spliced packet and sent to the subscriber. Compared with sending the packets to the subscriber one by one, the subscriber can read several data packets from its receiving queue at once, which reduces the time spent reading packets and saves the resources that reading occupies. The subscriber therefore has more resources for processing packets and providing services, which speeds up packet processing and reduces the delay produced during data synchronization. The method is particularly suitable for scenarios in which the subscriber's process is heavily loaded.

Description

Data sending method, device and readable medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a data sending method, a data sending device, and a readable medium.
Background
To increase the number of players it can carry, a network game server is usually split into multiple service processes by function. Different processes are responsible for different gameplay functions and data storage. The gameplay modules are closely related, and the gameplay in one process often depends on data held in other processes. Meanwhile, most network games place great emphasis on the real-time nature of player interaction: the processing delay of a player's request must be below the perceptible reaction time, or the game experience stutters. This requires the computational logic of a gameplay feature and the data it depends on to reside in the same process, avoiding the extra request-processing delay that cross-process data queries would add. It is therefore desirable to synchronize the data of one process to the other processes that depend on it. Each process maintains a data cache of the dependent data, and a query only needs to consult the cache inside its own process.
An existing data synchronization method works as follows: as shown in fig. 1, when a data modification is received, the data source responsible for a gameplay module's process broadcasts the modified data to the subscribers, where a subscriber is any other process that depends on the data generated by that gameplay module. A subscriber sends an acknowledgement to the data source only after it has received and processed the data. However, because different subscribers carry different gameplay features, and some features have high CPU overhead, a subscriber providing such a feature runs in a high-load state; its speed of receiving and processing the synchronization data sent by the data source slows down, more data accumulates in its receiving queue, and the data synchronization delay increases.
Therefore, how to reduce the delay produced when synchronizing data between processes has become one of the problems to be solved in the prior art.
Disclosure of Invention
The embodiment of the invention provides a data sending method, a data sending device and a readable medium, which are used for reducing the time delay of inter-process data synchronization in the prior art.
In a first aspect, an embodiment of the present invention provides a data sending method, including:
a data source acquires a data packet to be sent;
placing the version number of the acquired data packet to be sent at the tail of a version number cache queue in first-in first-out order;
when the number of version numbers falling into the packet sending window of the version number cache queue reaches a packet-splicing threshold, merging the corresponding number of data packets to be sent into one spliced packet and sending it to a subscriber, where the header of the spliced packet carries number information, so that the subscriber splits the spliced packet into the corresponding number of data packets according to the number information and writes the split data packets into the data cache of the process in which the subscriber resides.
Thus, compared with sending data packets to the subscriber one by one, the subscriber can read several data packets from the receiving queue at once. This reduces the time spent reading packets and saves the resources that reading occupies, so the subscriber has more resources to process packets and provide services; packet processing speeds up and the delay produced during data synchronization drops. In particular, when the subscriber's process is heavily loaded, the method prevents the subscriber from staying in a slow-consumption state for a long time. In addition, by introducing the version number cache queue and sending a spliced packet only when the number of version numbers falling into the packet sending window reaches the splicing threshold, the subscriber gains more time to process packets that have been sent but not yet acknowledged, which further increases its packet-processing speed.
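The data-source side described above can be sketched as follows. This is a minimal illustration, not the patent's actual implementation: the splicing threshold value, the 4-byte count header, and the length-prefixed packet layout are assumptions, and the pre-send window logic is omitted for brevity.

```python
import struct
from collections import deque

SPLICE_THRESHOLD = 4  # illustrative packet-splicing threshold, not from the patent


class DataSource:
    """Data-source side of the splicing scheme: version numbers queue up
    FIFO, and once the threshold is reached the pending packets are
    merged into one spliced packet whose header carries the count."""

    def __init__(self, send):
        self.send = send              # callable delivering bytes to the subscriber
        self.version_queue = deque()  # version number cache queue (FIFO)
        self.pending = {}             # version number -> packet payload

    def put_packet(self, version, payload):
        # FIFO order: the new version number goes to the tail of the queue
        self.version_queue.append(version)
        self.pending[version] = payload
        if len(self.version_queue) >= SPLICE_THRESHOLD:
            self._send_spliced()

    def _send_spliced(self):
        versions = [self.version_queue.popleft() for _ in range(SPLICE_THRESHOLD)]
        packets = [self.pending.pop(v) for v in versions]
        # the header carries the number of merged packets; each packet is
        # length-prefixed so the subscriber can split the spliced packet
        body = b"".join(struct.pack(">I", len(p)) + p for p in packets)
        self.send(struct.pack(">I", len(packets)) + body)
```

With this sketch, four calls to `put_packet` produce a single spliced message on the wire instead of four individual ones, which is the reduction in per-packet read overhead the text describes.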
Preferably, the version numbers of data packets that have been sent but not yet acknowledged by the subscriber fall into a pre-send window of the version number cache queue, and the method further comprises:
each time a packet acknowledgement indication is received from the subscriber, sliding the pre-send window and the packet sending window synchronously, so that the version number of the acknowledged packet moves out of the pre-send window; and
when a version number is supplemented from the packet sending window into the pre-send window, if the data packet corresponding to that version number is determined not to have been sent yet, sending it to the subscriber.
Since the subscriber sends an acknowledgement indication to the data source after processing a data packet, the pre-send window and the packet sending window in the version number cache queue must be slid whenever an acknowledgement indication is received, so that the subscriber's data-processing state is monitored in real time and its current consumption capability is accurately determined from the number of version numbers currently falling into the two windows.
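The window sliding described above can be modeled on a plain list, as in the sketch below. The window size, the method names, and the rule that a version landing inside the pre-send window is sent immediately are illustrative assumptions drawn from this description, not the patent's concrete design.

```python
class WindowedQueue:
    """Sketch of the version number cache queue: a pre-send window
    (sent-but-unacknowledged versions) at the head, followed by the
    packet sending window. Both windows slide together on each ack."""

    def __init__(self, pre_window=2):
        self.versions = []   # FIFO queue of version numbers
        self.sent = set()    # versions whose packets were sent individually
        self.pre_window = pre_window

    def push(self, version):
        """Returns the version to send now if it lands in the pre-send window."""
        self.versions.append(version)
        if len(self.versions) <= self.pre_window:
            self.sent.add(version)
            return version
        return None          # falls into the packet sending window instead

    def ack(self, version):
        """Slide both windows when the subscriber acknowledges a packet.
        Returns a version supplemented into the pre-send window whose
        packet still needs to be sent, or None."""
        assert self.versions and self.versions[0] == version
        self.versions.pop(0)          # both windows slide forward together
        self.sent.discard(version)
        if len(self.versions) >= self.pre_window:
            newcomer = self.versions[self.pre_window - 1]
            if newcomer not in self.sent:
                self.sent.add(newcomer)
                return newcomer
        return None
```

For example, with a pre-send window of 2, version 3 initially falls into the packet sending window; when version 1 is acknowledged, version 3 is supplemented into the pre-send window and its packet is sent.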
Optionally, the data source is configured to obtain the data packet to be sent from a first process, and the subscriber is configured to write the received data packet into a data cache of a second process.
The data source and the subscriber can be applied to data synchronization among multiple processes with data interdependent in the same server.
Preferably, the size of the pre-send window is:
y = max(0.1 * WPS, (RTT + t_delay) * WPS)
where y represents the size of the pre-send window;
WPS represents the number of data packets to be sent written per second;
RTT represents the round-trip delay;
t_delay represents the delay caused by the subscriber sending a single acknowledgement indication after processing at least two data packets.
When sending acknowledgement indications to the data source, the subscriber may send one after processing a single data packet or one after processing several data packets, so an acknowledgement delay exists. Configuring the pre-send window size as above prevents this acknowledgement-induced delay from causing the data source to switch to spliced sending prematurely.
Optionally, if the subscriber sends an acknowledgement indication after processing each single data packet, the size of the pre-send window is:
y = max(0.1 * WPS, RTT * WPS)
where y represents the size of the pre-send window;
WPS represents the number of data packets to be sent written per second;
RTT represents the round-trip delay.
Because an RTT elapses between the data source sending a data packet to the subscriber and receiving the subscriber's acknowledgement indication, the window is sized so that by the time the pre-send window is full, the data source can already receive the acknowledgement for the packet corresponding to the first version number in the window. The pre-send window can then slide normally, which keeps the data source from switching to spliced sending too early.
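The two window-size formulas above differ only in the t_delay term, so they can be written as one function; setting t_delay to zero recovers the per-packet-acknowledgement case. The function name and units (seconds for RTT and t_delay) are our own conventions.

```python
def pre_send_window_size(wps, rtt, t_delay=0.0):
    """Pre-send window size per the formulas above:
    y = max(0.1 * WPS, (RTT + t_delay) * WPS).
    With t_delay = 0 this reduces to y = max(0.1 * WPS, RTT * WPS),
    the case where the subscriber acknowledges every single packet.
    wps is packets written per second; rtt and t_delay are in seconds."""
    return max(0.1 * wps, (rtt + t_delay) * wps)
```

For instance, at 100 packets/s and a 50 ms RTT, the 0.1 * WPS floor dominates and the window holds 10 versions; adding a 100 ms batched-acknowledgement delay raises it to 15.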
In a second aspect, an embodiment of the present invention provides another data sending method, including:
a subscriber acquires, from a receiving queue, a spliced packet sent by a data source, where the spliced packet is generated as follows: after acquiring each data packet to be sent, the data source places its version number at the tail of a version number cache queue in first-in first-out order, and when the number of version numbers falling into the packet sending window of the queue reaches a packet-splicing threshold, the corresponding number of data packets to be sent are merged, with the header of the spliced packet carrying number information;
splitting the spliced packet into the corresponding number of data packets according to the number information; and
writing the split data packets into the data cache of the process in which the subscriber resides.
When the subscriber receives a spliced packet, it splits the packet into the corresponding number of data packets according to the number information carried in the packet header and writes the split packets into the data cache of its own process. Because the data source sends in spliced form, the subscriber has more resources to process packets and provide services, which speeds up packet processing and reduces the delay produced during data synchronization; in particular, when the subscriber's process is heavily loaded, the method prevents the subscriber from staying in a slow-consumption state for a long time.
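The subscriber-side split can be sketched as below. The wire layout (a 4-byte big-endian packet count in the header followed by length-prefixed packets) is an assumption for illustration; the document only specifies that the header carries the number information.

```python
import struct


def split_spliced_packet(data):
    """Split a spliced packet into its constituent data packets,
    assuming an illustrative layout: 4-byte big-endian count, then
    each packet prefixed with its 4-byte big-endian length."""
    (count,) = struct.unpack_from(">I", data, 0)
    packets, offset = [], 4
    for _ in range(count):
        (length,) = struct.unpack_from(">I", data, offset)
        offset += 4
        packets.append(data[offset:offset + length])
        offset += length
    return packets
```

Each returned payload would then be written into the data cache of the subscriber's process, as the text describes.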
Preferably, the method further comprises:
and sending an acknowledgement indication to the data source, wherein the acknowledgement indication carries the version number of the acknowledged data packet.
The acknowledgement indication sent to the data source carries the version number of the acknowledged data packet, so the data source slides the pre-send window and the packet sending window of the version number cache queue according to that version number, and the subscriber's current consumption capability is accurately determined from the number of version numbers currently falling into the two windows.
In a third aspect, an embodiment of the present invention provides a data transmission apparatus, including:
an obtaining unit, configured to obtain a data packet to be sent;
the first processing unit is used for placing the acquired version number of the data packet to be sent into the tail of the version number cache queue according to the first-in first-out sequence;
a sending unit, configured to merge, when the number of version numbers falling into the packet sending window of the version number cache queue is determined to reach the packet-splicing threshold, the corresponding number of data packets to be sent into one spliced packet and send it to a subscriber, where the header of the spliced packet carries number information so that the subscriber splits it into the corresponding number of data packets according to the number information and writes the split packets into the data cache of the process in which the subscriber resides.
Preferably, the version numbers of data packets that have been sent but not yet acknowledged by the subscriber fall into a pre-send window of the version number cache queue, and the apparatus further comprises:
a sliding unit, configured to slide the pre-send window and the packet sending window synchronously each time a packet acknowledgement indication is received from the subscriber, so that the version number of the acknowledged packet moves out of the pre-send window;
a second processing unit, configured to, when a version number is supplemented from the packet sending window into the pre-send window, send the corresponding data packet to the subscriber if that packet is determined not to have been sent yet.
Preferably, the data source is configured to obtain the data packet to be sent from a first process, and the subscriber is configured to write the received data packet into a data cache of a second process.
Preferably, the size of the pre-send window is:
y = max(0.1 * WPS, (RTT + t_delay) * WPS)
where y represents the size of the pre-send window;
WPS represents the number of data packets to be sent written per second;
RTT represents the round-trip delay;
t_delay represents the delay caused by the subscriber sending a single acknowledgement indication after processing at least two data packets.
Optionally, if the subscriber sends an acknowledgement indication after processing each single data packet, the size of the pre-send window is:
y = max(0.1 * WPS, RTT * WPS)
where y represents the size of the pre-send window;
WPS represents the number of data packets to be sent written per second;
RTT represents the round-trip delay.
In a fourth aspect, an embodiment of the present invention provides another data transmission apparatus, including:
an acquiring unit, configured to acquire, from a receiving queue, a spliced packet sent by a data source, where the spliced packet is obtained by the data source placing the version number of each acquired data packet to be sent at the tail of a version number cache queue in first-in first-out order, and merging the corresponding number of data packets to be sent when the number of version numbers falling into the packet sending window of the queue reaches the packet-splicing threshold, with the header of the spliced packet carrying number information;
the splitting unit is used for splitting the packet splicing data packet into a corresponding number of data packets according to the number information;
and the writing unit is used for writing the split data packet into a data cache of the process where the subscriber is located.
Preferably, the apparatus further comprises:
a sending unit, configured to send an acknowledgement indication to the data source, where the acknowledgement indication carries a version number of an acknowledged data packet.
In a fifth aspect, an embodiment of the present invention provides a computer-readable medium storing a computer program executable by a computing apparatus; when the program runs on the computing apparatus, it causes the apparatus to perform the steps of the data sending method on the data source side or the steps of the data sending method on the subscriber side.
In a sixth aspect, an embodiment of the present invention provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a data transmission method provided by a data source side or to perform a data transmission method provided by a subscriber side.
The invention has the beneficial effects that:
according to the data transmission method, the data transmission device and the readable medium provided by the embodiment of the invention, after a data source obtains a data packet to be transmitted, the obtained version number of the data packet to be transmitted is firstly put at the tail of a version number cache queue, when the number of the version numbers falling into a packet-combining transmission window in the version number cache queue reaches a packet-combining threshold value, the corresponding number of the data packets to be transmitted are combined into one packet-combining data packet and then transmitted to a subscriber, and compared with the case that the data packets to be transmitted are transmitted to the subscriber one by one, the subscriber can read a plurality of data packets from the receiving queue at one time, so that the time for reading the data packets is reduced, and resources occupied by reading the data packets are saved; in addition, when a subscriber receives a spliced packet data packet, splitting the spliced packet data packet into a corresponding number of data packets according to the number information carried in the packet header of the spliced packet data packet; the split data packet is written into the data cache of the process where the subscriber is located, and the subscriber has more resources to process the data packet and provide service, so that the processing speed of the data packet is increased, the delay generated during data synchronization is reduced, and particularly when the process where the subscriber is located has a high load, the method provided by the invention can prevent the subscriber from being in a slow consumption state for a long time.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a diagram illustrating data synchronization between a data source and a subscriber in the prior art;
FIG. 2a is an application architecture diagram of a network game server implementing the data transmission method provided by the present invention;
FIG. 2b is a diagram illustrating inter-process synchronization in a network game server according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a data transmission structure between processes in a server according to the present invention;
fig. 4 is a schematic flowchart of a data transmission method according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a version number cache queue according to an embodiment of the present invention;
fig. 6a is a schematic diagram of a version number falling into a parallel packet sending window according to an embodiment of the present invention;
fig. 6b is a schematic diagram of a sending queue after a version number falls into a parallel packet sending window according to an embodiment of the present invention;
fig. 6c is a schematic diagram illustrating a comparison between a packet sending method and a real-time sending method according to an embodiment of the present invention;
fig. 7a is a schematic diagram of a version number falling into a pre-send window according to an embodiment of the present invention;
fig. 7b is a schematic diagram of a sending queue after a version number falls into a pre-sending window according to an embodiment of the present invention;
fig. 8a is a schematic diagram of the pre-send window and the packet sending window in the version number cache queue before and after sliding when the acknowledgement indication provided by the embodiment of the present invention carries one version number;
fig. 8b is a schematic diagram of the pre-send window and the packet sending window in the version number cache queue before and after sliding when the acknowledgement indication provided by the embodiment of the present invention carries 5 version numbers;
FIG. 9a is a schematic diagram illustrating a variation of the consumer consumption capability of the subscriber in different pre-send windows according to an embodiment of the present invention;
fig. 9b is a schematic diagram of delays caused under different pre-sending windows when the subscriber load changes according to an embodiment of the present invention;
fig. 10 is a schematic diagram of a version number cache queue including a non-routable window according to an embodiment of the present invention;
fig. 11a is a schematic diagram of data transmission when a subscriber is in a normal consumption state according to an embodiment of the present invention;
FIG. 11b is a schematic diagram of data transmission when a subscriber is in a slow consumption state according to an embodiment of the present invention;
fig. 12a is a schematic diagram illustrating a data processing effect of a subscriber after the data transmission method provided by the present invention is adopted;
fig. 12b is a second schematic diagram illustrating a data processing effect of the subscriber after the data transmission method provided by the present invention is adopted;
fig. 13 is a schematic structural diagram of a data sending device on a data source side according to an embodiment of the present invention;
fig. 14 is a schematic structural diagram of a data sending apparatus on a subscriber side according to an embodiment of the present invention;
fig. 15 is a schematic structural diagram of a computing apparatus for implementing a data transmission method according to an embodiment of the present invention.
Detailed Description
The data transmission method, the data transmission device and the readable medium provided by the embodiment of the invention are used for solving the problem that the data synchronization delay is increased due to the data transmission method adopted in the prior art.
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings of the specification, it being understood that the preferred embodiments described herein are merely for illustrating and explaining the present invention, and are not intended to limit the present invention, and that the embodiments and features of the embodiments in the present invention may be combined with each other without conflict.
In order to facilitate understanding of the present invention, some terms related to the embodiments of the present invention are explained:
1. Data source: the party that provides data to components needing it; that is, the data source provides the data synchronization service for a process whose data must be synchronized to other processes.
2. Subscriber: a data-dependent party that is concerned with data changes in the data source and needs to receive notifications of those changes; that is, the subscriber receives the data packets sent by the data source and writes them into the data cache of the process in which it resides, so that when the process needs the data it can query its own cache directly.
3. Network games, also known as online games, generally refer to electronic games in which a plurality of players play interactive entertainment via a network.
4. Network game client: the program corresponding to the network game server that provides local services for the player; it is generally installed on the player's mobile phone and operates in cooperation with the server.
5. Network game server: a software program corresponding to the network game client, installed in an Internet Data Center (IDC), that provides data forwarding and logic-processing services for the client. In a network game, complex and critical logic must be computed on the server, since clients installed on player devices are easily hacked to cheat in the game.
6. Consumption capability refers to the amount of data a subscriber can receive and process per unit of time.
7. Data synchronization delay, refers to the time difference between the sending of data from a data source to the completion of data reception by a subscriber.
8. The problem of slow consumption of data synchronization refers to the problem that when the data sending speed of a data source exceeds the data receiving speed of a subscriber, data can be accumulated in a receiving queue of the subscriber, so that the synchronization delay is increased.
9. Tail of the version number cache queue: the position adjacent to the most recently placed version number in the queue that has not yet been filled with a version number.
To address the problem of high data synchronization delay in the prior art, an embodiment of the present invention provides a data sending method that can be applied to a server, for example a network game server. An application architecture diagram of such a server is shown in fig. 2a. When the network game server provides services for players, a corresponding scene process is set up for each function the game implements. The most common services in a game are its various gameplay modes; a player can choose among them, and for each one the server sets up a scene process to serve it. For example, the services in fig. 2a include gameplay, ranking list, auction, and cache services, where the gameplay services comprise group service, battle team service, match service, team-forming service, and the like. Fig. 2a shows these services being provided by corresponding scene processes; if a dedicated scene process serves the group gameplay, players can play in groups. To guarantee real-time player interaction during the game, a cache is set up in the scene process corresponding to each gameplay mode so that data in other processes can be synchronized into it, avoiding the extra request-processing delay of cross-process data queries.
When data in any process of the online game is synchronized to other processes, the data sending method provided by the present invention can be used to reduce the synchronization delay between processes. A data source is set up in each process whose data must be synchronized, and a subscriber in each process that needs data generated by other processes. The data source acquires the data packets to be sent that are generated in its process and places their version numbers at the tail of a version number cache queue; when the number of version numbers falling into the packet sending window of the queue is determined to reach the packet-splicing threshold, the corresponding number of data packets to be sent are merged into one spliced packet and sent to the subscriber. With this method, an acquired data packet is not sent to the subscriber immediately; only when the number of version numbers in the packet sending window reaches the splicing threshold are the corresponding packets spliced into one packet and sent. This reduces the subscriber's processing delay and greatly increases its data consumption capability, so the subscriber no longer suffers the slow-consumption problem. Inter-process data synchronization may follow the structure shown in fig. 3: a data source in an upper-layer process synchronizes the data of that process to the subscriber in scene process 1 and/or 2 using the data sending method provided by the present invention, and subscriber 1 and/or 2 writes the acquired data packets into the data cache of its scene process.
In the following, the data transmission method provided according to exemplary embodiments of the present invention is described with reference to figs. 4-15, in connection with the application scenarios of figs. 2a, 2b and 3. It should be noted that the above application scenarios are presented merely for the convenience of understanding the spirit and principles of the present invention, and the embodiments of the present invention are not limited in this respect. Rather, embodiments of the present invention may be applied to any applicable scenario.
As shown in the flow diagram of fig. 4, the data transmission method provided in an embodiment of the present invention may include the following steps:
S11: the data source acquires a data packet to be sent.
The data source is configured to obtain the data packet to be sent from a first process, where the first process is any process in a server comprising multiple processes whose data needs to be synchronized. For example, when the server is an online game server, the first process may be, but is not limited to, the process of a group play method.
As shown in fig. 3, an upper-layer process in the server is the first process, and the service module in the upper-layer process has a snapshot generation/loading interface and a command generation/loading interface. A snapshot refers to the full data of the service module at a certain moment, and a command refers to the incremental data generated by the service module within a certain time period. The data source may obtain data from the service module in the upper-layer process through the snapshot generation interface or the command generation interface; for convenience of description, the embodiments of the present invention simply refer to the data obtained from the service module in one read as a data packet to be sent.
Specifically, taking a group module, a service module commonly found in a network game server, as an example: the group module provides snapshot and command generation and loading interfaces, and a subscriber and a data source are initialized in the scene process and the group play process (the upper-layer process), respectively. When the network game server is started, the subscriber in the scene process sends a data synchronization request to the data source of the group module (the service module) in the group play process, and after receiving the data synchronization request, the data source calls the snapshot generation interface of the group module to acquire the group data that needs to be synchronized. For example, when the network game server starts to provide game services, the subscribers in the scene processes of each play method are triggered to send data synchronization requests to the data sources in their respective upper-layer processes, so that the data sources synchronize initialization data to the subscribers by calling the snapshot generation interface, and the subscribers then write the initialization data into the data caches of the scene processes through the snapshot loading interface.
On the other hand, when new data appears in the service module of the upper-layer process, an instruction for acquiring the new data is sent to the data source; after receiving the instruction, the data source acquires the new data through the command generation interface, i.e. the data packet to be sent in the present invention. For example, after detecting a data modification instruction, the group play process generates the modified data, i.e. the new data, and then sends an instruction for obtaining it to the data source in the group play process; the data source may then obtain the data packet to be sent (the new data) through the command generation interface. The new data may be, but is not limited to, ranking list data of a group.
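As a non-limiting illustration only, the snapshot/command interfaces described above may be sketched as follows in Python. All names (`ServiceModule`, `DataSource`, the `state`/`pending` fields) are hypothetical and are not part of the disclosed embodiments:

```python
class ServiceModule:
    """Illustrative service module (e.g. the group module) holding the data."""

    def __init__(self):
        self.state = {"rank": []}   # full data of the module
        self.pending = []           # incremental data produced since the last read

    def generate_snapshot(self):
        """Snapshot generation interface: full data at this moment."""
        return dict(self.state)

    def generate_command(self):
        """Command generation interface: return and clear the incremental data."""
        commands, self.pending = self.pending, []
        return commands


class DataSource:
    """Illustrative data source living in the upper-layer process."""

    def __init__(self, module):
        self.module = module

    def on_sync_request(self):
        # Subscriber start-up: synchronize initialization data via the snapshot interface.
        return self.module.generate_snapshot()

    def on_new_data(self):
        # New-data notification: obtain the data packet to be sent via the command interface.
        return self.module.generate_command()
```

Under these assumptions, a subscriber's start-up request would hit `on_sync_request`, while each subsequent data modification would be drained through `on_new_data`.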
S12: the data source puts the version number of the acquired data packet to be sent at the tail of the version number cache queue in first-in first-out order.
Specifically, as shown in fig. 5, the version number cache queue in the present invention is used for storing the version numbers of the acquired data packets to be sent. The version number cache queue maintains three sending windows: a confirmed window, a pre-sending window and a parallel packet sending window. The version numbers in the confirmed window belong to data packets that have been sent and acknowledged by the subscriber; the version numbers in the pre-sending window belong to data packets that have been sent but not yet acknowledged by the subscriber; the version numbers in the parallel packet sending window belong either to data packets that have been sent but not yet acknowledged, or to data packets waiting to be sent. When the subscriber is determined to be in the slow consumption state, a newly acquired version number falls only into the parallel packet sending window; otherwise it falls only into the pre-sending window. The version number at the tail of the version number cache queue may therefore fall into either the pre-sending window or the parallel packet sending window; fig. 5 shows the queue tail falling into each of these two windows. Of course, if all version numbers in the version number cache queue have been received and processed by the subscriber, no version number exists in either the pre-sending window or the parallel packet sending window, and in this case the version number at the tail of the queue falls into the confirmed window.
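The three-window partitioning described above may be sketched, under assumed names and a deliberately simplified state model, as:

```python
class VersionQueue:
    """Illustrative version number cache queue with confirmed, pre-sending and
    parallel packet sending windows. Names and structure are assumptions, not
    the patented implementation."""

    def __init__(self, presend_size):
        self.confirmed = []   # sent and acknowledged by the subscriber
        self.presend = []     # sent, awaiting acknowledgement
        self.parallel = []    # waiting to be merged and sent (slow consumption)
        self.presend_size = presend_size

    def push(self, version):
        # A new version number falls into the pre-sending window until it is
        # full; once the parallel packet sending window is in use (subscriber
        # in the slow consumption state), new versions fall there instead.
        if len(self.presend) < self.presend_size and not self.parallel:
            self.presend.append(version)
            return "presend"
        self.parallel.append(version)
        return "parallel"
```

For instance, with a pre-sending window of size 50, the 51st unacknowledged version number would be the first to land in the parallel packet sending window.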
Preferably, the version number is generated by the data source when it acquires the data packet to be sent, and is used for representing the state of that data packet. After receiving data packets, the subscriber can send the version numbers of the acknowledged data packets to the data source in real time or periodically; after receiving these version numbers, the data source knows which data packets have been acknowledged by the subscriber and which have not. Specifically, each time the data source acquires a data packet to be sent, it configures a version number for it that is adjacent to the version number configured for the previously acquired data packet to be sent: if the previous version number was 10, the version number configured this time is 11, and so on. After configuring the version number, the data source carries it in the data packet to be sent and sends it to the subscriber according to the method provided by the invention. After reading a data packet from the receiving queue and processing it, the subscriber sends an acknowledgement indication to the data source, carrying the version number of the acknowledged data packet, so that upon receiving the acknowledgement indication, the data source can determine the processing state of the sent data packet on the subscriber side according to the version number carried in the indication.
Moreover, besides the version number, the data packet to be sent that the data source sends to the subscriber includes a data processing instruction and the required data. For example, for a ranking list in a group, the data processing instruction contained in the corresponding data packet to be sent is an instruction for generating the ranking list, and the corresponding data is the data required to generate it, such as the player identifiers and the parameter values used to determine ranking. For another example, when a player joins a group, the data processing instruction contained in the corresponding data packet to be sent is a group joining instruction, and the required data is the identifier of the joining player and the identifier of the group to be joined.
S13: when the data source determines that the number of version numbers falling into the parallel packet sending window of the version number cache queue reaches the packet splicing threshold, the corresponding number of data packets to be sent are merged into one spliced packet data packet and sent to the subscriber.
Specifically, the pre-sending window is adjacent to the parallel packet sending window and precedes it, as shown in fig. 5. When it is determined that the number of version numbers in the pre-sending window has reached the size of the pre-sending window, the version number of the next data packet to be sent acquired by the data source from the service module falls into the parallel packet sending window. For example, referring to fig. 5, if the size of the pre-sending window in the version number cache queue is 50, then when the data source determines that the number of version numbers 11-60 falling into the pre-sending window has reached 50, i.e. the current subscriber is in the slow consumption state, then in order to avoid increasing the delay of data synchronization, the version number 61 of the next acquired data packet to be sent falls into the parallel packet sending window, as shown in fig. 6a. When the number of version numbers in the parallel packet sending window reaches the packet splicing threshold, the data packets to be sent corresponding to those version numbers are merged, in ascending order of version number, into one spliced packet data packet, which is then sent to the subscriber. That is, the data transmission pointer in the sending queue of the data source points to the data packet to be sent corresponding to version number 61, and, as shown in fig. 6b, when the number of version numbers falling into the parallel packet sending window reaches the packet splicing threshold, the method provided by the invention performs packet splicing and sends the resulting spliced packet data packet to the subscriber.
For example, when the packet splicing threshold is 10 and the version numbers falling into the parallel packet sending window are 61-70, i.e. the number of version numbers in that window has reached 10, the data packets to be sent corresponding to version numbers 61-70 are merged into one spliced packet data packet, which is then sent to the subscriber. At this point, version numbers 61-70 in the parallel packet sending window are the version numbers of data packets that have been sent but not yet acknowledged by the subscriber, and version number 71 is the version number of a data packet waiting to be sent. Thus, when the spliced packet data packet is sent to the subscriber in the packet splicing mode, the subscriber can obtain a number of data packets equal to the packet splicing threshold by reading the spliced packet data packet from the receiving queue only once. Compared with the prior art, in which no packet splicing is used and only one data packet is read from the receiving queue at a time, this reduces the reading time and the resources occupied by reading data packets multiple times; the saved time and resources can be used to process data packets, so the speed at which the subscriber processes data packets can be increased.
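The threshold-triggered merge described above may be sketched as follows; the function name and packet representation are illustrative assumptions:

```python
def maybe_splice(pending, splice_threshold):
    """If enough data packets are waiting in the parallel packet sending
    window, merge the oldest `splice_threshold` of them, in ascending
    version-number order, into one spliced packet data packet.
    Returns None while the threshold has not been reached."""
    if len(pending) < splice_threshold:
        return None
    batch = sorted(pending[:splice_threshold], key=lambda p: p["version"])
    # The header of the spliced packet carries the count of contained packets.
    return {"count": splice_threshold, "packets": batch}
```

With a threshold of 10, packets with version numbers 61-70 would be merged into one spliced packet, while version 71 would remain waiting.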
For example, taking a packet splicing threshold of 10: after the data source sends the spliced packet data packet obtained by merging 10 data packets to be sent to the receiving queue of the subscriber, the subscriber can read the spliced packet data packet from the receiving queue in a single read, i.e. read 10 data packets at once, rather than performing 10 separate reads. The time and resources that would be occupied by the 9 extra reads are saved and can be used to process the data packets, which increases the processing speed of the subscriber and allows it to exit the slow consumption state quickly.
In addition, when the packet splicing threshold is reached, the data source merges the data packets to be sent corresponding to the version numbers satisfying the threshold into one spliced packet data packet, so that the subscriber can receive a batch of data packets at a time and process them together. Tests show that sending spliced packet data packets in this way does greatly increase the data packet consumption capacity of the subscriber: when the CPU utilization of the process where the subscriber is located reaches 80%, merging 10 data packets to be sent into one spliced packet data packet increases the consumption speed of the subscriber by about 5 times compared with sending without packet splicing, as shown in fig. 6c. As can be seen from fig. 6c, sending data packets to the subscriber in the packet splicing mode gives the subscriber more time and resources to process data packets, thereby increasing the data processing speed, ending the slow consumption state earlier, and reducing the delay caused by slow synchronous processing.
Preferably, the packet splicing threshold may be adjusted according to the consumption state of the subscriber.
Preferably, after the data source acquires a data packet to be sent, if it determines that the number of version numbers falling into the pre-sending window does not exceed the size of the pre-sending window, i.e. the pre-sending window is not full, then in the present invention placing the version number of the data packet at the tail of the version number cache queue means that the version number falls into the pre-sending window, adjacent to the version number of the previously acquired data packet, and the data packet to be sent is sent to the subscriber directly. For example, when the version number of the currently acquired data packet to be sent is 57 and the pre-sending window is determined not to be full, version number 57 falls into the queue tail within the pre-sending window, i.e. into the position adjacent to version number 56, as shown in fig. 7a; at the same time, the data transmission pointer in the sending queue of the data source is positioned at the data packet to be sent corresponding to version number 57, and that data packet is sent directly to the subscriber, as shown in fig. 7b.
Specifically, when the data source sends the spliced packet data packet or a single data packet to be sent to the subscriber, it is sent into the receiving queue of the subscriber. When the data source sends a spliced packet data packet, the packet header of the spliced packet data packet carries quantity information indicating the number of data packets it contains. The spliced packet data packet is assembled by the data source according to a protocol agreed with the subscriber, so that after reading it, the subscriber unpacks it according to the agreed protocol and correctly reads out each data packet. The subscriber then reads the required data from each data packet according to the format agreed with the data source.
It should be noted that when there is more than one subscriber, the data source may maintain a separate version number cache queue for each subscriber. Since each subscriber executes a different service, a correspondence between the service identifier of a subscriber and its version number cache queue may be set to distinguish the queues the data source maintains, and the manner of sending data packets to each subscriber is then selected according to the number of version numbers in each window of the corresponding queue.
S14: the subscriber acquires the spliced packet data packet sent by the data source from the receiving queue.
The data consumption of the subscriber can be divided into two processes: one is receiving the data, i.e. obtaining data packets from the receiving queue; the other is processing the data, i.e. executing the data modification logic.
When this step is executed, a dedicated module in the scene process is responsible for notifying the subscriber that a new data packet has arrived in the receiving queue, so that the subscriber can read the data packet sent by the data source from the receiving queue; however, the subscriber may still need to process previously read data packets. If the CPU load of the subscriber is high, then after receiving the notification of the new data packet, the subscriber may currently be processing other data packets or providing other services and cannot fetch the new data packet from the receiving queue; only after it finishes processing the current data packets, or the load of the services it provides is no longer too high, does it read data packets from the receiving queue in first-in first-out order. In this case, if packet splicing is not used, the speed at which the data source sends data packets is far higher than the speed at which the subscriber processes them, leaving the subscriber in a slow consumption state. If the data source determines that the subscriber is in the slow consumption state, sending spliced packet data packets in the packet splicing mode can effectively relieve the slow consumption state and accelerate its end.
S15: the subscriber splits the spliced packet data packet into the corresponding number of data packets according to the quantity information carried in its packet header.
In this step, after the subscriber reads the spliced packet data packet from the receiving queue, it splits the spliced packet data packet into the corresponding number of data packets according to the quantity information carried in the packet header; if the quantity information is 10, the subscriber splits the spliced packet data packet into 10 data packets.
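The count-prefixed framing described above (quantity information in the header, followed by the contained packets) may be sketched with an assumed length-prefixed wire format; the exact protocol agreed between data source and subscriber is not specified in the disclosure, so this layout is an assumption:

```python
import struct

def splice(packets):
    """Data source side: serialize packets into one spliced packet whose
    header carries the count (assumed format: 4-byte big-endian count,
    then each packet as a 4-byte length followed by its bytes)."""
    body = b"".join(struct.pack("!I", len(p)) + p for p in packets)
    return struct.pack("!I", len(packets)) + body

def split(spliced):
    """Subscriber side: read the count from the header, then read out
    each contained data packet in order."""
    count = struct.unpack_from("!I", spliced)[0]
    offset, packets = 4, []
    for _ in range(count):
        size = struct.unpack_from("!I", spliced, offset)[0]
        offset += 4
        packets.append(spliced[offset:offset + size])
        offset += size
    return packets
```

Under this assumed format, `split` recovers exactly the packets that `splice` merged, so a spliced packet with quantity information 10 yields 10 data packets.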
S16: the subscriber writes the split data packets into the data cache of its own process.
The subscriber is used for writing the received data packets into the data cache of a second process. The second process may be a process in the multi-process server that depends on data in other processes and resides in the same server as the first process; when the server is a network game server and the first process is the group play process, the second process may be the scene process providing the group play service.
In this step, after the subscriber performs the corresponding processing according to the data processing instruction and the required data in each split data packet, the data to be loaded into the data cache of the scene process is loaded by calling the snapshot/command loading interface shown in fig. 3, where the scene process in fig. 3 is the second process. If the data initializes the scene process, it is loaded into the data cache by calling the snapshot loading interface; if the data in a data packet relates to generating a ranking list, the data is loaded into the data cache through the command loading interface on the one hand, and on the other hand the ranking list is generated from the data and displayed to the player.
If the subscriber reads a single data packet from the receiving queue, the corresponding operations are performed directly according to the data processing instruction and the required data in that data packet; reference may be made to the above processing procedure for multiple data packets.
S17: the subscriber sends a data packet acknowledgement indication to the data source.
In this step, after the subscriber reads a data packet from the receiving queue and processes it, it sends an acknowledgement indication to the data source. Specifically, the subscriber may send an acknowledgement indication to the data source immediately after processing each data packet, the indication carrying the version number of that data packet; alternatively, after processing multiple data packets, the subscriber may send their acknowledgement to the data source in one indication, i.e. the acknowledgement indication carries the version numbers of all the processed data packets.
S18: when the data source receives the data packet acknowledgement indication from the subscriber, it synchronously slides the pre-sending window and the parallel packet sending window so that the version numbers of the acknowledged data packets move out of the pre-sending window.
In this step, when the acknowledgement indication received by the data source carries the version number of one data packet, the pre-sending window and the parallel packet sending window are slid, and the version number carried in the acknowledgement indication is shifted out of the pre-sending window. For example, if the version number carried in the currently received acknowledgement indication is 11, the pre-sending window and the parallel packet sending window in the version number cache queue before and after sliding may be as shown in fig. 8a: after the sliding, version number 11 falls into the confirmed window and version number 61 falls into the pre-sending window.
When the acknowledgement indication carries multiple version numbers, those version numbers are shifted out of the pre-sending window by sliding the pre-sending window and the parallel packet sending window, and if there are version numbers in the parallel packet sending window, no more than the corresponding number of them are shifted out of the parallel packet sending window and supplemented into the pre-sending window. That is, if the acknowledgement indication carries N version numbers, N version numbers are shifted out of the pre-sending window; if the parallel packet sending window holds more than N version numbers, N version numbers are taken, starting from the version number adjacent to the pre-sending window, and supplemented into the pre-sending window; if the parallel packet sending window holds fewer than N version numbers, all of them are supplemented into the pre-sending window. For example, if the acknowledgement indication carries the 5 version numbers 11-15, the pre-sending window and the parallel packet sending window are slid and the 5 version numbers are shifted out of the pre-sending window, and if there are version numbers in the parallel packet sending window, no more than 5 of them are supplemented into the pre-sending window, as shown in fig. 8b; since the number of version numbers in the parallel packet sending window in fig. 8b exceeds 5, the 5 version numbers 61-65 are supplemented into the pre-sending window when the windows are slid.
Of course, if the number of version numbers in the parallel packet sending window is not more than 5, all of them are supplemented into the pre-sending window.
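The window sliding on acknowledgement described above may be sketched as follows, with lists standing in for the two windows (an illustrative simplification, not the patented data structure):

```python
def slide_windows(presend, parallel, acked):
    """On receiving an acknowledgement indication carrying the version
    numbers in `acked`: shift those versions out of the pre-sending window,
    then refill the pre-sending window with up to that many version numbers
    taken from the head of the parallel packet sending window."""
    n = sum(1 for v in presend if v in acked)       # N acknowledged versions
    presend = [v for v in presend if v not in acked]
    refill = min(n, len(parallel))                  # no more than N, or all if fewer
    presend += parallel[:refill]
    parallel = parallel[refill:]
    return presend, parallel
```

For example, acknowledging versions 11-12 out of a pre-sending window [11, 12, 13] pulls the first two versions of the parallel packet sending window forward.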
In particular, a version number in the pre-sending window indicates that its corresponding data packet has been sent but not yet acknowledged by the subscriber. The data packet processing status on the subscriber side can thus be known from the number of version numbers in the pre-sending window, and the size of the pre-sending window determines the sensitivity of slow consumption detection. When the size of the pre-sending window is 0, the data source always sends spliced packet data packets to the subscriber in the packet splicing mode; when the size of the pre-sending window is set to infinity, the data source always sends each data packet to be sent in real time. The setting of the pre-sending window size therefore directly affects the detection result of the subscriber's slow consumption state. Fig. 9a shows how the consumption capacity of the subscriber changes under different pre-sending windows as the subscriber load changes, when the data source writes data packets to be sent at a fixed writing speed; specifically, it shows the subscriber CPU utilization and consumption capacity for pre-sending window sizes of 0, 0.1 times the writing speed, 0.2 times the writing speed, and 10 times the writing speed.
However, when spliced packet data packets are sent in the packet splicing mode, the data source side waits until the number of version numbers in the parallel packet sending window reaches the packet splicing threshold, then takes the version numbers from the starting position of the window up to the starting version number plus the packet splicing threshold minus 1, merges the corresponding data packets to be sent into one spliced packet data packet, and sends it to the subscriber; there is therefore a sending delay. Fig. 9b shows the delay produced by different pre-sending windows as the subscriber load changes, when the data source writes data packets to be sent at a fixed writing speed. It can be seen that when the CPU utilization of the subscriber is lower than that of the region marked 1, i.e. when the subscriber load is relatively low and consumption is normal, the consumption delay without packet splicing is the smallest: in fig. 9b, at low CPU load, the delay of sending data packets with a pre-sending window of 10 times the writing speed is smaller than the delay caused by the packet splicing mode with a pre-sending window size of 0, because splicing waits for packets to accumulate and therefore produces a large extra delay regardless of the consumption capability of the subscriber. When the subscriber load is high, the CPU utilization is high and slow consumption occurs, spliced packet data packets are sent in the packet splicing mode, and it can be seen that the smaller the pre-sending window, the smaller the delay: in fig. 9b, when the CPU utilization is greater than that of the region marked 1, the consumption delay with a pre-sending window size of 0 is smaller than the delay with a pre-sending window of 10 times the writing speed. In summary, as can be seen from figs. 9a and 9b, when the subscriber is in the normal consumption state, delay can be reduced by setting a suitable pre-sending window and sending each data packet to be sent in real time; when the subscriber is in the slow consumption state, sending in the packet splicing mode both reduces delay and accelerates the consumption capability of the subscriber.
Specifically, since the subscriber sends an acknowledgement indication to the data source only after processing one data packet or several data packets, there is an acknowledgement delay. In order to prevent this delay from causing the data source to switch prematurely to sending spliced packet data packets in the packet splicing mode, the size of the pre-sending window is configured by combining the two factors of subscriber consumption capability and sending delay, as shown in formula (1):
y = max(0.1 * WPS, (RTT + t_delay) * WPS)    (1)
wherein y represents the size of the pre-send window;
WPS represents the speed of writing a data packet to be sent per second;
RTT represents round trip delay;
t_delay represents the delay caused by the subscriber sending a single acknowledgement indication for at least two processed data packets.
In formula (1), RTT represents the delay of normally sending a data packet to be sent to the subscriber and receiving the acknowledgement indication it returns. Formula (1) thus takes into account both the delay of normally sending a data packet to be sent and the delay caused by the subscriber sending a single acknowledgement indication for at least two processed data packets, which prevents the data source from prematurely sending spliced packet data packets to the subscriber in the packet splicing mode because of the delay in transmitting the acknowledgement indication.
When the subscriber sends an acknowledgement indication immediately after receiving and processing each data packet, the size of the pre-sending window, combining the two factors of subscriber consumption capability and delay, is configured as shown in formula (2):
y = max(0.1 * WPS, RTT * WPS)    (2)
wherein y represents the size of the pre-send window;
WPS represents the speed of writing a data packet to be sent per second;
RTT represents the round trip delay.
Because an RTT delay elapses between the data source sending a data packet to be sent and the subscriber returning the acknowledgement indication, this configuration ensures that by the time the pre-sending window fills up, the data source can already receive the acknowledgement indication for the data packet corresponding to the first version number in the pre-sending window. The pre-sending window can therefore slide normally, and the sending delay caused by the data source switching to the packet splicing mode too early is avoided.
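Formulas (1) and (2) can be computed together, since formula (2) is formula (1) with t_delay = 0 (immediate per-packet acknowledgement). A minimal sketch, with the parameter names taken from the formulas above:

```python
def presend_window_size(wps, rtt, t_delay=0.0):
    """Formula (1): y = max(0.1 * WPS, (RTT + t_delay) * WPS).
    With t_delay = 0 this reduces to formula (2): y = max(0.1 * WPS, RTT * WPS).
    wps: data packets to be sent written per second; rtt and t_delay in seconds."""
    return max(0.1 * wps, (rtt + t_delay) * wps)
```

For example, at WPS = 100 packets/s and RTT = 50 ms, the floor term 0.1 * WPS dominates (window size 10); at RTT = 200 ms the round-trip term dominates (window size 20).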
Preferably, when the subscriber remains in the slow consumption state for a long time, continuing to send spliced packet data packets indefinitely would never release that state, so to avoid this problem the size of the parallel packet send window is limited: once the number of version numbers falling into the parallel packet send window exceeds its size, the version numbers of subsequently acquired data packets to be sent fall into the non-send window, as shown in fig. 10. The version numbers in fig. 10 are only an example; they do not mean that the pre-send window must be 50 or the parallel packet send window must be 950. The size of the pre-send window may be determined from the number of version numbers carried in the acknowledgement indications returned by the subscriber, and the size of the parallel packet send window may be, but is not limited to, 5 * WPS.
Similarly, when the data source receives an acknowledgement indication from the subscriber, it slides the pre-send window, the parallel packet send window and the non-send window according to the version numbers carried in the indication: the acknowledged version numbers move out of the pre-send window, a corresponding number of version numbers from the parallel packet send window are supplemented into the pre-send window, and a corresponding number of version numbers from the non-send window fall into the parallel packet send window. If a data packet whose version number was supplemented into the pre-send window has not yet been sent, it is sent to the subscriber directly. If the number of version numbers newly added to the parallel packet send window plus those remaining in it reaches the packet splicing threshold, the corresponding data packets to be sent are merged, in left-to-right order of the parallel packet send window, into a spliced packet data packet and sent to the subscriber.
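A minimal sketch of this three-window sliding scheme follows. The class and method names, the default window sizes, and the return values are illustrative assumptions; in particular, acknowledgement indications are assumed to arrive in version-number order:

```python
from collections import deque

SPLICE_THRESHOLD = 10  # illustrative packet-splicing threshold

class VersionWindows:
    """Version number cache queue split into a pre-send window,
    a parallel packet send window and a non-send window."""

    def __init__(self, pre_size=50, parallel_size=950):
        self.pre_size = pre_size
        self.parallel_size = parallel_size
        self.pre = deque()       # sent (or sendable) but unacknowledged
        self.parallel = deque()  # held back for packet splicing
        self.non_send = deque()  # overflow beyond the parallel window

    def add(self, version):
        """Place a newly acquired version number into a window."""
        if len(self.pre) < self.pre_size:
            self.pre.append(version)
            return "send_now"    # in pre-send window: send immediately
        if len(self.parallel) < self.parallel_size:
            self.parallel.append(version)
            return "splice" if len(self.parallel) >= SPLICE_THRESHOLD else "hold"
        self.non_send.append(version)  # subscriber is badly behind
        return "hold"

    def acknowledge(self, versions):
        """Slide all three windows on an acknowledgement indication.
        Returns version numbers supplemented into the pre-send window,
        whose packets must be sent now if they were not sent before."""
        for v in versions:                     # acks assumed in order
            if self.pre and self.pre[0] == v:
                self.pre.popleft()             # into the confirmed window
        supplemented = []
        while self.parallel and len(self.pre) < self.pre_size:
            v = self.parallel.popleft()
            self.pre.append(v)
            supplemented.append(v)
        while self.non_send and len(self.parallel) < self.parallel_size:
            self.parallel.append(self.non_send.popleft())
        return supplemented
```

The `supplemented` return value corresponds to step S19 below: packets promoted from the parallel packet send window that were never spliced and sent must now be sent individually.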
It should be noted that the number of version numbers the subscriber carries in an acknowledgement indication is variable: one indication may acknowledge 10 data packets, another 15, and so on, adapting to the actual situation. For a spliced packet data packet, however, all of its version numbers must be carried in a single acknowledgement indication. For example, if the previous indication carried 10 version numbers and the subscriber then receives a spliced packet data packet merged from 15 data packets, the subscriber may carry those 15 version numbers in one acknowledgement indication and send it to the data source only after it has processed every data packet in the spliced packet.
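This acknowledgement batching rule can be sketched like so; the class name, the default batch size, and the return conventions are assumptions made for illustration:

```python
class AckBatcher:
    """Subscriber-side sketch: a variable number of version numbers
    per acknowledgement indication, except that all version numbers
    of one spliced packet data packet must share a single indication."""

    def __init__(self, batch_size=10):
        self.batch_size = batch_size
        self.pending = []  # processed but not yet acknowledged versions

    def packet_processed(self, version):
        """Record an ordinary packet; returns an acknowledgement
        indication (a list of versions) once a batch is full."""
        self.pending.append(version)
        if len(self.pending) >= self.batch_size:
            return self.flush()
        return None

    def spliced_packet_processed(self, versions):
        """All packets of a spliced packet have been processed:
        emit any earlier batch, then one indication for the group."""
        indications = []
        if self.pending:
            indications.append(self.flush())
        indications.append(list(versions))  # single indication, whole group
        return indications

    def flush(self):
        indication, self.pending = self.pending, []
        return indication
```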
Preferably, the sizes of the pre-send window and the parallel packet send window may also be determined using the average speed at which the data source sends data packets to be sent, instead of the write speed.
In addition, fig. 11a and 11b show schematic diagrams of data transmission according to the method provided by the present invention when the subscriber is in the normal consumption state and in the slow consumption state, respectively. Both figures take a pre-send window of size 50 as an example. The window comprises two parts, sent and remaining-to-send: the sent version numbers denote data packets already sent to the subscriber, and remaining-to-send denotes version numbers that can still fall into the pre-send window. In fig. 11a, the data source has sent the data packets with version numbers 1-11 to the subscriber, so version numbers 1-11 in the pre-send window are sent and 12-50 remain to be sent. After receiving an acknowledgement indication carrying version numbers 1-10, the data source slides the confirmed window and the pre-send window so that 1-10 fall into the confirmed window; in the slid pre-send window, 11 is sent and 12-60 remain to be sent. When the data source subsequently acquires data packets to be sent, their version numbers fall into the pre-send window and the packets can be sent to the subscriber directly; the subscriber then continues batch acknowledgement, e.g. the next acknowledgement indication carries the version numbers 11-20.
In fig. 11b, the data source has sent the data packets corresponding to version numbers 1-12 to the subscriber, and version numbers 1-10 have all been acknowledged, so 1-10 fall into the confirmed window, 11-12 fall into the pre-send window as sent, and 13-60 in the pre-send window remain to be sent. Once the data source sends the data packets corresponding to version numbers 13-60, the entire range 11-60 in the pre-send window is sent and the remaining-to-send part is empty, meaning the pre-send window is filled. If at this time the subscriber's load is high (for example, due to other gameplay logic) and it neither fetches data packets from the receive queue in time nor sends acknowledgement indications to the data source, the data source can infer from the filled pre-send window that the subscriber is in the slow consumption state. When the data source subsequently acquires a data packet to be sent, its version number 61 falls into the parallel packet send window; the packet is not sent in real time, and sending waits until the number of version numbers in the parallel packet send window reaches the packet splicing threshold. If no acknowledgement indication arrives, the subsequent version numbers 62 and so on also fall into the parallel packet send window; when their number reaches the packet splicing threshold, say 10, the data source merges the data packets corresponding to version numbers 61-70 into one spliced packet data packet and sends it to the subscriber.
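The merging step can be sketched as below. The wire layout (a 4-byte big-endian count in the header, followed by length-prefixed payloads) is purely an assumption for illustration; the method only requires that the header of the spliced packet data packet carry the quantity information:

```python
import struct

def splice_packets(packets):
    """Merge data packets to be sent into one spliced packet data
    packet: header = packet count (the quantity information),
    body = length-prefixed payloads."""
    body = b"".join(struct.pack(">I", len(p)) + p for p in packets)
    return struct.pack(">I", len(packets)) + body
```

In the fig. 11b scenario, once version numbers 61-70 reach the splicing threshold, their ten payloads would be passed to this function and the result delivered to the subscriber's receive queue in one send.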
When the subscriber's load gradually drops, it starts fetching data packets from the receive queue again and sends acknowledgement indications to the data source. On receiving an acknowledgement indication, the data source slides the pre-send window and the parallel packet send window; if the remaining-to-send part of the pre-send window is no longer empty, the data source determines that the subscriber is not in the slow consumption state, and when it subsequently acquires a data packet to be sent whose version number falls into the pre-send window, it sends the packet to the subscriber directly.
S19, after the data source slides the pre-send window and the parallel packet send window, when version numbers of data packets are supplemented from the parallel packet send window into the pre-send window, if the data packets corresponding to the supplemented version numbers are determined not to have been sent, the data packets to be sent corresponding to those version numbers are sent to the subscriber.
In this step, before sliding, every version number in the pre-send window corresponds to a data packet that has been sent but not acknowledged, whereas data packets whose version numbers sit in the parallel packet send window are merged into a spliced packet data packet and sent only once the packet splicing threshold is reached; a version number in the parallel packet send window may therefore correspond to a packet that has or has not been sent. Hence, after version numbers from the parallel packet send window are supplemented into the pre-send window, the data source must determine whether the corresponding data packets have been sent to the subscriber and, if not, send them to the subscriber in sequence.
Preferably, fig. 12a and 12b show the subscriber-side data processing effect after the data sending method provided by the present invention is adopted. When the load of the subscriber's process is low, the packet splicing rate in fig. 12a is 1, i.e. the version numbers of all data packets fall into the pre-send window and none fall into the parallel packet send window. When the load of the subscriber's process increases, the packet splicing rate also increases, indicating that the subscriber is in the slow consumption state and the data source sends spliced packet data packets in the spliced-packet sending mode. The delay comparison in fig. 12b shows that, as the load of the subscriber's process changes over time, the consumption delay fluctuates sharply when data packets to be sent are always sent in real time without splicing, whereas the consumption delay is lower and more stable overall when the sending mode is adjusted with the load (real-time sending when the subscriber's load is low, spliced-packet sending when it is high). Adjusting the sending mode according to the subscriber's data processing state thus accelerates the subscriber's exit from the slow consumption state and reduces delay to a certain extent.
According to the data sending method provided by the present invention, after the data source acquires a data packet to be sent, it places the packet's version number at the tail of the version number cache queue in first-in first-out order. When the number of version numbers falling into the parallel packet send window reaches the packet splicing threshold, the corresponding number of data packets to be sent are merged into one spliced packet data packet and sent to the subscriber. Compared with sending the data packets one by one, this lets the subscriber read several data packets from the receive queue at once, reducing the time spent reading and the resources occupied by reading. Furthermore, on receiving a spliced packet data packet, the subscriber splits it into the corresponding number of data packets according to the quantity information carried in its header and writes the split packets into the data cache of its process, leaving the subscriber more resources to process packets and provide service. This speeds up packet processing and reduces the delay incurred during data synchronization; in particular, when the subscriber's process is under high load, the method provided by the present invention prevents the subscriber from remaining in the slow consumption state for a long time.
Based on the same inventive concept, an embodiment of the present invention further provides a data sending apparatus. Since the principle by which the apparatus solves the problem is similar to that of the data sending method, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
As shown in fig. 13, a schematic structural diagram of a data sending apparatus on a data source side according to an embodiment of the present invention includes:
an obtaining unit 21, configured to obtain a data packet to be sent;
the first processing unit 22 is configured to put the acquired version number of the to-be-sent data packet into the tail of the version number cache queue according to a first-in first-out sequence;
a sending unit 23, configured to, when it is determined that the number of version numbers falling into a packet sending window in the version number cache queue reaches a packet splicing threshold, merge a corresponding number of data packets to be sent into one packet splicing data packet and send the packet splicing data packet to a subscriber, where a packet header of the packet splicing data packet carries quantity information, so that the subscriber splits the packet splicing data packet into a corresponding number of data packets according to the quantity information; and writing the split data packet into a data cache of the process of the subscriber.
Preferably, the version number of the data packet which is sent but not acknowledged by the subscriber falls into a pre-sending window in the version number cache queue; and the apparatus, further comprising:
the sliding unit is used for synchronously sliding the pre-sending window and the parallel packet sending window when receiving a data packet acknowledgement indication of a subscriber each time so as to enable the version number of the acknowledged data packet to move out of the pre-sending window;
and the second processing unit is used for sending the data packet to be sent corresponding to the version number of the supplemented pre-sending window to the subscriber if the data packet corresponding to the version number of the supplemented pre-sending window is determined not to be sent when the version number of the data packet is supplemented from the parallel packet sending window to the pre-sending window.
Preferably, the data source is configured to obtain the data packet to be sent from a first process, and the subscriber is configured to write the received data packet into a data cache of a second process.
Preferably, the size of the pre-transmission window is:
y=max(0.1*WPS,(RTT+t_delay)*WPS)
wherein y represents the size of the pre-send window;
WPS represents the speed of writing a data packet to be sent per second;
RTT represents round trip delay;
t_delay represents the delay caused by the subscriber sending a single acknowledgement indication for at least two processed data packets.
Optionally, if the subscriber sends an acknowledgement indication each time it has processed one data packet, the size of the pre-send window is:
y=max(0.1*WPS,RTT*WPS)
wherein y represents the size of the pre-send window;
WPS represents the speed of writing a data packet to be sent per second;
RTT represents the round trip delay.
For convenience of description, the above parts are separately described as modules (or units) according to functional division. Of course, the functionality of the various modules (or units) may be implemented in the same or in multiple pieces of software or hardware in practicing the invention.
As shown in fig. 14, a schematic structural diagram of a data sending apparatus on a subscriber side is further provided for an embodiment of the present invention, including:
an obtaining unit 31, configured to obtain, from a receive queue, a spliced packet data packet sent by a data source, where the spliced packet data packet is obtained by the data source by placing, in first-in first-out order, the version numbers of acquired data packets to be sent at the tail of a version number cache queue and, when determining that the number of version numbers falling into the packet send window in the version number cache queue reaches the packet splicing threshold, merging the corresponding number of data packets to be sent, and where a header of the spliced packet data packet carries quantity information;
a splitting unit 32, configured to split the packet data packet into a corresponding number of data packets according to the number information;
and the writing unit 33 is configured to write the split data packet into a data cache of a process in which the subscriber is located.
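The splitting performed by units 31-33 can be sketched as the counterpart of an assumed spliced-packet layout (a 4-byte big-endian count followed by length-prefixed payloads — an illustrative format, since the method only requires that the header carry quantity information):

```python
import struct

def split_spliced_packet(data):
    """Split a spliced packet data packet back into its constituent
    data packets using the quantity information in the header."""
    count = struct.unpack_from(">I", data)[0]  # quantity information
    offset = 4
    packets = []
    for _ in range(count):
        length = struct.unpack_from(">I", data, offset)[0]
        offset += 4
        packets.append(data[offset:offset + length])
        offset += length
    return packets  # each entry is then written to the process data cache
```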
Preferably, the apparatus further comprises:
a sending unit, configured to send an acknowledgement indication to the data source, where the acknowledgement indication carries a version number of an acknowledged data packet.
For convenience of description, the above parts are separately described as modules (or units) according to functional division. Of course, the functionality of the various modules (or units) may be implemented in the same or in multiple pieces of software or hardware in practicing the invention.
Having described the data transmission method and apparatus according to an exemplary embodiment of the present invention, a computing apparatus according to another exemplary embodiment of the present invention is described next.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit", "module" or "system".
In some possible embodiments, a computing device according to the present invention may comprise at least one processing unit and at least one storage unit, wherein the storage unit stores program code which, when executed by the processing unit, causes the processing unit to perform the steps of the data sending method according to the various exemplary embodiments of the present invention described above in this specification. For example, the processing unit may perform steps S11 to S19 shown in fig. 4.
The computing means 41 according to this embodiment of the invention is described below with reference to fig. 15. The computing device 41 shown in fig. 15 is only an example, and should not bring any limitation to the function and the scope of use of the embodiment of the present invention.
As shown in fig. 15, the computing apparatus 41 is embodied in the form of a general purpose computing device. Components of computing device 41 may include, but are not limited to: the at least one processing unit 411, the at least one storage unit 412, and a bus 413 connecting various system components (including the storage unit 412 and the processing unit 411).
Bus 413 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
The storage unit 412 may include readable media in the form of volatile memory, such as Random Access Memory (RAM)4121 and/or cache memory 4122, and may further include Read Only Memory (ROM) 4123.
The memory unit 412 may also include a program/utility 4125 having a set (at least one) of program modules 4124, such program modules 4124 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Computing device 41 may also communicate with one or more external devices 414 (e.g., a keyboard, a pointing device, etc.), with one or more devices that enable a user to interact with computing device 41, and/or with any devices (e.g., a router, a modem, etc.) that enable computing device 41 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 415. Moreover, computing device 41 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the internet) through network adapter 416. As shown, network adapter 416 communicates with the other modules of computing device 41 over bus 413. It should be understood that, although not shown in the figures, other hardware and/or software modules may be used in conjunction with computing device 41, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
In some possible embodiments, various aspects of the data transmission method provided by the present invention may also be implemented in the form of a program product, which includes program code for causing a computer device to perform the steps in the data transmission method according to various exemplary embodiments of the present invention described above in this specification when the program product runs on the computer device, for example, the computer device may perform the steps S11 to S19 shown in fig. 4.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for a data transmission method of an embodiment of the present invention may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a computing device. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java or C++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device over any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., over the internet using an internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, the features and functions of two or more of the units described above may be embodied in one unit, according to embodiments of the invention. Conversely, the features and functions of one unit described above may be further divided into embodiments by a plurality of units.
Moreover, while the operations of the method of the invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
The data sending apparatus provided by the embodiments of the present application can be implemented by a computer program. It should be understood by those skilled in the art that the above division into modules is only one of many possible divisions; as long as the apparatus has the functions described above, a division into other modules, or no division into modules at all, still falls within the protection scope of the present application.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (12)

1. A data transmission method, comprising:
a data source acquires a data packet to be sent which needs to be synchronized; and are
According to the first-in first-out sequence, the obtained version number of the data packet to be sent is put into the tail of a version number cache queue, and the version number is used for representing the state of the data packet to be sent;
when the number of the version numbers falling into the packet sending window in the version number cache queue reaches a packet splicing threshold value, merging the corresponding number of data packets to be sent into a packet splicing data packet and sending the packet splicing data packet to a subscriber, wherein the packet head of the packet splicing data packet carries number information, so that the subscriber splits the packet splicing data packet into the corresponding number of data packets according to the number information; and writing the split data packet into a data cache of the process of the subscriber.
2. The method of claim 1, wherein the version number of the data packet sent but not acknowledged by the subscriber falls into a pre-sending window in the version number cache queue; and the method, further comprising:
when receiving a data packet acknowledgement indication of a subscriber, synchronously sliding the pre-sending window and the parallel packet sending window to enable the version number of the acknowledged data packet to be shifted out of the pre-sending window; and are
And when the version number of the data packet is supplemented from the parallel packet sending window to the pre-sending window, if the data packet corresponding to the version number of the supplemented pre-sending window is determined not to be sent, sending the data packet to be sent corresponding to the version number of the supplemented pre-sending window to a subscriber.
3. The method of claim 1 or 2, wherein the data source obtains the data packet to be sent from a first process, and the subscriber writes the received data packet into a data cache of a second process.
4. The method of claim 2, wherein the size of the pre-send window is:
y = max(0.1 * WPS, (RTT + t_delay) * WPS)
wherein y represents the size of the pre-send window;
WPS represents the number of data packets to be sent written per second;
RTT represents the round-trip delay; and
t_delay represents the delay introduced when the subscriber sends a single acknowledgement indication after processing at least two data packets.
5. The method of claim 2, wherein, if the subscriber sends an acknowledgement indication each time it processes a data packet, the size of the pre-send window is:
y = max(0.1 * WPS, RTT * WPS)
wherein y represents the size of the pre-send window;
WPS represents the number of data packets to be sent written per second; and
RTT represents the round-trip delay.
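The window-size formulas of claims 4 and 5 reduce to a single helper, since claim 5 is the t_delay = 0 case of claim 4. A minimal sketch with hypothetical parameter names:

```python
def pre_send_window_size(wps, rtt, t_delay=0.0):
    """Pre-send window size y = max(0.1 * WPS, (RTT + t_delay) * WPS).

    wps     -- data packets to be sent written per second (WPS)
    rtt     -- round-trip delay in seconds (RTT)
    t_delay -- extra delay when one acknowledgement covers several packets;
               leave at 0.0 for the per-packet-acknowledgement case of claim 5
    """
    return max(0.1 * wps, (rtt + t_delay) * wps)
```

The 0.1 * WPS term acts as a floor: even on a near-zero-latency link the window holds at least a tenth of a second's worth of packets.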
6. A data transmission apparatus, comprising:
an acquisition unit configured to acquire a data packet to be sent that needs to be synchronized;
a first processing unit configured to place the version number of the acquired data packet to be sent at the tail of a version number cache queue in first-in-first-out order, the version number representing the state of the data packet to be sent; and
a sending unit configured to, when it is determined that the number of version numbers falling within a packet sending window in the version number cache queue reaches a packet splicing threshold, merge the corresponding number of data packets to be sent into one spliced data packet and send the spliced data packet to a subscriber, wherein the header of the spliced data packet carries count information, so that the subscriber splits the spliced data packet into the corresponding number of data packets according to the count information and writes the split data packets into a data cache of the subscriber's process.
7. The apparatus of claim 6, wherein the version numbers of data packets that have been sent but not yet acknowledged by the subscriber fall within a pre-send window in the version number cache queue, and the apparatus further comprises:
a sliding unit configured to, each time a data packet acknowledgement indication is received from the subscriber, slide the pre-send window and the packet sending window synchronously, so that the version number of the acknowledged data packet moves out of the pre-send window; and
a second processing unit configured to, when a version number is promoted from the packet sending window into the pre-send window, send the corresponding data packet to the subscriber if it is determined that the data packet has not been sent.
8. The apparatus of claim 6 or 7, wherein a data source obtains the data packet to be sent from a first process, and the subscriber writes the received data packet into a data cache of a second process.
9. The apparatus of claim 7, wherein the size of the pre-send window is:
y = max(0.1 * WPS, (RTT + t_delay) * WPS)
wherein y represents the size of the pre-send window;
WPS represents the number of data packets to be sent written per second;
RTT represents the round-trip delay; and
t_delay represents the delay introduced when the subscriber sends a single acknowledgement indication after processing at least two data packets.
10. The apparatus of claim 7, wherein, if the subscriber sends an acknowledgement indication each time it processes a data packet, the size of the pre-send window is:
y = max(0.1 * WPS, RTT * WPS)
wherein y represents the size of the pre-send window;
WPS represents the number of data packets to be sent written per second; and
RTT represents the round-trip delay.
11. A computer-readable medium storing a computer program executable by a computing device, wherein the program, when run on the computing device, causes the computing device to perform the steps of the method of any one of claims 1 to 5.
12. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 5.
CN201810194815.6A 2018-03-09 2018-03-09 Data sending method, device and readable medium Active CN110247942B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810194815.6A CN110247942B (en) 2018-03-09 2018-03-09 Data sending method, device and readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810194815.6A CN110247942B (en) 2018-03-09 2018-03-09 Data sending method, device and readable medium

Publications (2)

Publication Number Publication Date
CN110247942A CN110247942A (en) 2019-09-17
CN110247942B true CN110247942B (en) 2021-09-07

Family

ID=67882242

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810194815.6A Active CN110247942B (en) 2018-03-09 2018-03-09 Data sending method, device and readable medium

Country Status (1)

Country Link
CN (1) CN110247942B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110851455A (en) * 2019-11-06 2020-02-28 尚娱软件(深圳)有限公司 Data landing method and device, mobile terminal and computer readable storage medium
CN111569435B (en) * 2020-06-16 2022-11-25 腾讯科技(深圳)有限公司 Ranking list generation method, system, server and storage medium
CN112418667A (en) * 2020-11-23 2021-02-26 南京星邺汇捷网络科技有限公司 Automatic task order dispatching method and system
CN113663338B (en) * 2021-08-31 2023-09-26 腾讯科技(深圳)有限公司 Subscription method and device for virtual service and electronic equipment
CN113824651B (en) * 2021-11-25 2022-02-22 上海金仕达软件科技有限公司 Market data caching method and device, storage medium and electronic equipment
CN116170385A (en) * 2023-04-21 2023-05-26 四川汉科计算机信息技术有限公司 Gateway information forwarding system, method, equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101827033A (en) * 2010-04-30 2010-09-08 Beijing Sogou Technology Development Co Ltd Method and device for controlling network traffic and local area network system
CN101841545A (en) * 2010-05-14 2010-09-22 Institute of Computing Technology, Chinese Academy of Sciences TCP stream restructuring and/or packetizing method and device
CN102456069A (en) * 2011-08-03 2012-05-16 National University of Defense Technology Incremental aggregate counting and query methods and query system for data streams
CN102461324A (en) * 2009-06-29 2012-05-16 Nokia Corp Resource allocation
CN101330472B (en) * 2008-07-28 2013-01-16 ZTE Corp Method for caching and processing streaming media data
CN107528789A (en) * 2016-06-22 2017-12-29 New H3C Technologies Co Ltd Message scheduling method and device
CN107645455A (en) * 2017-09-12 2018-01-30 Tianjin Jinhang Institute of Computing Technology Message transmission scheduling method for a CAN bus

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110299597A1 (en) * 2010-06-07 2011-12-08 Sony Corporation Image processing method using motion estimation and image processing apparatus
US20150112853A1 (en) * 2013-10-18 2015-04-23 Wonga Technology Limited Online loan application using image capture at a client device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101330472B (en) * 2008-07-28 2013-01-16 ZTE Corp Method for caching and processing streaming media data
CN102461324A (en) * 2009-06-29 2012-05-16 Nokia Corp Resource allocation
CN101827033A (en) * 2010-04-30 2010-09-08 Beijing Sogou Technology Development Co Ltd Method and device for controlling network traffic and local area network system
CN101841545A (en) * 2010-05-14 2010-09-22 Institute of Computing Technology, Chinese Academy of Sciences TCP stream restructuring and/or packetizing method and device
CN102456069A (en) * 2011-08-03 2012-05-16 National University of Defense Technology Incremental aggregate counting and query methods and query system for data streams
CN107528789A (en) * 2016-06-22 2017-12-29 New H3C Technologies Co Ltd Message scheduling method and device
CN107645455A (en) * 2017-09-12 2018-01-30 Tianjin Jinhang Institute of Computing Technology Message transmission scheduling method for a CAN bus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Flow control and congestion control technology; Wu Lifa; "Network Principles and Technology Tutorial" (《网络原理与技术教程》); 2002-09-30; see Chapter 8 *

Also Published As

Publication number Publication date
CN110247942A (en) 2019-09-17

Similar Documents

Publication Publication Date Title
CN110247942B (en) Data sending method, device and readable medium
US7953915B2 (en) Interrupt dispatching method in multi-core environment and multi-core processor
US10754686B2 (en) Method and electronic device for application migration
EP2902914B1 (en) Data transmission method and device
CN110049361B (en) Display control method and device, screen projection equipment and computer readable medium
US10381047B2 (en) Method, device, and system of synchronously playing media file
GB2496681A (en) A publish/subscribe system with time-sensitive message delivery to subscribers
CN113179327B (en) High concurrency protocol stack unloading method, equipment and medium based on large-capacity memory
CN111818632B (en) Method, device, equipment and storage medium for equipment synchronization
US8448172B2 (en) Controlling parallel execution of plural simulation programs
CN111405336B (en) Multi-device synchronous playing method and system, electronic device and storage medium
CN111490947A (en) Data packet transmitting method, data packet receiving method, system, device and medium
US9330033B2 (en) System, method, and computer program product for inserting a gap in information sent from a drive to a host device
CN112861091B (en) Login method, login device, electronic equipment and storage medium
CN111404842B (en) Data transmission method, device and computer storage medium
US20240205463A1 (en) Recording and push-based streaming method and apparatus, device, and medium
CN113806035B (en) Distributed scheduling method and service server
US10216672B2 (en) System and method for preventing time out in input/output systems
KR20220148490A (en) Remote terminal and monitoring apparatus for monitoring of automatic driving and opperating method of thereof
CN112667359A (en) Data transparent transmission method, electronic equipment and storage medium
US12063287B1 (en) Methods, systems, and computer readable media for determining an internal time of a time-sensitive networking (TSN) network card
US20150071070A1 (en) Injecting congestion in a link between adaptors in a network
CN114124754B (en) Method for processing media data packets in a multimedia network and related products
US9330036B2 (en) Interrupt reduction by dynamic application buffering
CN115567459B (en) Flow control system and method based on buffer area

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant