CN118041863A - Data active pulling and transmitting method and system based on cache matching - Google Patents


Info

Publication number
CN118041863A
Authority
CN
China
Prior art keywords
data
receiving end
transmitting
fragments
data receiving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311838614.2A
Other languages
Chinese (zh)
Inventor
栾明君
曹孝元
刘旭
栾凯
李志�
刘学
李云峰
苏梓豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 15 Research Institute
Northwest Institute of Nuclear Technology
Original Assignee
CETC 15 Research Institute
Northwest Institute of Nuclear Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 15 Research Institute, Northwest Institute of Nuclear Technology filed Critical CETC 15 Research Institute
Priority to CN202311838614.2A
Publication of CN118041863A


Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application relates to a cache-matching-based method and system for actively pulling and transmitting data, belonging to the technical field of computer communication. The method comprises the following steps: the data receiving end searches the whole network for data transmitting ends that store the required data; it matches the number of data fragments to be transmitted and the transmission paths, and transmits a data request; the data receiving end actively pulls the data fragments and re-sends the data request based on its pulling capacity and the current path state, whereupon the data transmitting ends send further data fragments to the data receiving end; these steps are repeated, and all received data fragments are collected and integrated. The method and system provided by the application can effectively reduce the excessive transmission of data fragments caused by network congestion and limited processing capacity, improving the effectiveness and reliability of information transmission; at the same time, they effectively avoid the problem of the receiving end being unable to process data in time when a plurality of data transmitting ends send data fragments to it simultaneously.

Description

Data active pulling and transmitting method and system based on cache matching
Technical Field
The present invention relates to the field of computer communications technologies, and in particular to a method and system for actively pulling and transmitting data based on cache matching.
Background
With the rapid development of information technology, the focus of network applications has gradually shifted toward content acquisition and information services. The Internet has evolved from the end-to-end communication network of its original host-centric design into an infrastructure that primarily provides content sharing and retrieval. Traditional network protocols were designed around an end-to-end communication model for information interaction between terminals; with the rapid development of network technology and the continuous emergence of novel applications, the dominant use of the network has shifted from traditional end-to-end information interaction to content distribution and acquisition.
In the prior art, information transmission is mainly initiated by the transmitting end and is typically realized over TCP/IP networks or with the UDP protocol. However, traditional TCP/IP networking focuses on end-to-end host communication; in application scenarios such as file transmission over an unstable network, the same data may be transmitted multiple times on a physical link, wasting link bandwidth. Moreover, when congestion or transient disconnection occurs on the end-to-end link, data can be blocked at the last hop, triggering end-to-end retransmission, which increases end-to-end delay and wastes link bandwidth resources. When information transmission is realized with the UDP protocol, the transmitting end sends data at a fixed rate without considering congestion conditions; when the transmitted data flow exceeds the available network bandwidth, heavy packet loss results.
Disclosure of Invention
The invention aims to provide a method and system for actively pulling and transmitting data based on cache matching that overcome the defects of the prior art.
The invention provides a data active pulling and transmitting method based on cache matching, which comprises the following steps:
The data receiving end searches a data transmitting end storing the required data in the whole network;
The data receiving end, according to the number of data fragments that its local caching device can receive at one time and the corresponding path states, matches the number of data fragments each data sending end is to send to the data receiving end and the transmission path between each data sending end and the data receiving end, and sends a data request to one or more data sending ends;
After receiving the data request, the data sending end sends the data fragments to the data receiving end through the transmission path; the data receiving end pulls the data fragments and, based on its current pulling capacity and the current path state, sends the data request again to one or more data sending ends, which then send data fragments to the data receiving end again;
The above steps are repeated; when the sum of the number of data fragments pulled by the data receiving end equals the total amount of data it can pull, the data receiving end stops sending data requests to the data sending ends, and collects and integrates all the received data fragments.
In the above scheme, the path state includes available bandwidth, packet loss rate and transmission delay of the path between the data receiving end and the data transmitting end.
In the above scheme, the data request includes the number of data fragments sent by each data sending end to the data receiving end.
In the above scheme, when the data receiving end sends data requests to a plurality of data sending ends in the information-centric network, the data requests are sent by sequential polling or according to the priority order of the data to be sent by the plurality of data sending ends.
In the above scheme, when a plurality of data sending ends store the data required by the data receiving end, the data receiving end synchronously pulls different data fragments from the plurality of sending ends, without exceeding the number of data fragments that its local cache device can receive at one time, and assembles the data fragments at the data receiving end.
In the above scheme, the number of data fragments that each data sending terminal sends to the data receiving terminal and the transmission path between each data sending terminal and the data receiving terminal are matched according to the number of data fragments that the local buffer device can receive at one time and the corresponding path state, and the following formula is adopted:
(P_1, P_2, …, P_n) = argmin max_{1≤i≤n} ( T_i + P_i / (B_i · (1 − L_i)) ), subject to P_1 + P_2 + … + P_n = P and P_i ≥ 0;
Wherein B_i is the available bandwidth from data transmitting end S_i to data receiving end D, L_i is the packet loss rate from S_i to D, T_i is the transmission delay from S_i to D, P_i is the number of data fragments to be transmitted from S_i to D, n is the total number of data transmitting ends, and P is the number of data fragments that the local buffer device corresponding to the data receiving end can receive at one time.
In the above scheme, retransmitting the data request to one or more data transmitting ends based on the current pulling capability and the current path state of the data receiving end includes:
And calculating the difference value between the number of the data fragments currently pulled by the data receiving end and the sum of the data amounts which can be pulled and correspond to the data receiving end, and retransmitting the data request to one or more data transmitting ends in the information center network according to the calculated difference value and the path state between the data receiving end and the data transmitting end.
In the above scheme, when the calculated difference is greater than the number of data fragments that the local buffer device corresponding to the data receiving end can receive at one time, and the path state between the data receiving end and the data transmitting end is better than when the data request was last transmitted, the number of data fragments to be transmitted to the data receiving end by one or more data transmitting ends is increased in the retransmitted data request.
In the above scheme, when the calculated difference is smaller than or equal to the number of data fragments that the local buffer device corresponding to the data receiving end can receive at one time, or the path state between the data receiving end and the data transmitting end is worse than when the data request was last transmitted, the number of data fragments to be transmitted to the data receiving end by one or more data transmitting ends is reduced in the retransmitted data request.
The invention provides a data active pulling and transmitting system based on cache matching, which performs information transmission using the above data active pulling and transmitting method based on cache matching, and the system comprises:
The data request sending module, used for the data receiving end to search the whole network for data sending ends that store the required data, to match, according to the number of data fragments that its local caching device can receive at one time and the corresponding path states, the number of data fragments each data sending end is to send to the data receiving end and the transmission path between each data sending end and the data receiving end, and to send a data request to one or more data sending ends;
The data pulling module, used for the data sending end to send the data fragments to the data receiving end through a transmission path after receiving the data request; for the data receiving end to pull the data fragments and, based on its current pulling capacity and the current path state, send the data request again to one or more data sending ends; and for the data sending ends to send data fragments to the data receiving end again;
And the integration processing module, used for the data receiving end to pull the data fragments a plurality of times; when the sum of the number of data fragments pulled by the data receiving end equals the total amount of data it can pull, the data receiving end stops sending data requests to the data sending ends and collects and integrates all the received data fragments.
The embodiment of the invention has the following advantages:
According to the cache-matching-based data active pulling and transmission method and system of the present invention, the data receiving end actively pulls data from the data transmitting ends as needed. Compared with the traditional mode in which the transmitting end actively pushes data, this can effectively reduce excessive data transmission caused by network congestion and limited processing capacity, and improve the effectiveness and reliability of information transmission. At the same time, for data content stored in a distributed manner across a plurality of data transmitting ends, the active pulling mode of the data receiving end effectively avoids the problem of the receiving end being unable to process data in time when a plurality of data transmitting ends send data to it simultaneously.
Drawings
FIG. 1 is a flow chart of an active data pulling and transmitting method based on cache matching according to the present invention;
FIG. 2 is a flow chart of an active data pulling and transmission method based on cache matching in an embodiment of the invention;
FIG. 3 is a schematic diagram illustrating an active data pulling and transmitting system based on cache matching according to the present invention.
Detailed Description
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
As shown in fig. 1, the method for actively pulling and transmitting data based on cache matching of the present invention includes the following steps:
Step S1: the data receiving end first searches the whole network for one or more node servers that store the required data, i.e., the data transmitting ends. It acquires the number of data fragments that its local caching device in the information-centric network can receive at one time, as well as the path states between the data receiving end and the data transmitting ends in the information-centric network. According to the acquired number of data fragments and path states, it matches the number of data fragments each data transmitting end is to send to it and the transmission path between each data transmitting end and itself, and then sends a data request to one or more data transmitting ends in the information-centric network.
Specifically, the path state includes an available bandwidth, a packet loss rate, and a transmission delay of a path between the data receiving end and the data transmitting end.
Specifically, the number of data fragments sent by each data sending end to the data receiving end and the transmission path between each data sending end and the data receiving end are matched, according to the acquired number of data fragments and the path states, using the following formula:
(P_1, P_2, …, P_n) = argmin max_{1≤i≤n} ( T_i + P_i / (B_i · (1 − L_i)) ), subject to P_1 + P_2 + … + P_n = P and P_i ≥ 0;
wherein B_i is the available bandwidth from data sending end S_i to data receiving end D, L_i is the packet loss rate from S_i to D, T_i is the transmission delay from S_i to D, P_i is the number of data fragments to be sent from S_i to D, n is the total number of data sending ends, and P is the number of data fragments that the local caching device corresponding to the data receiving end can receive at one time.
By solving this formula, the number of data fragments each data sending end sends to the data receiving end and the transmission path between each data sending end and the data receiving end are obtained, ensuring that the resulting transmission paths and fragment allocation minimize the time required for one transmission.
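As a minimal illustrative sketch, not part of the patent, the stated goal of minimizing the time of one transmission round subject to the cache limit P can be approximated greedily, repeatedly assigning the next fragment to whichever path would finish earliest. Fragment size is taken as one bandwidth unit, and all names are hypothetical:

```python
import heapq

def allocate_fragments(paths, P):
    """paths: list of (B_i, L_i, T_i) tuples per sending end.
    Returns the number of fragments P_i assigned to each sending end."""
    alloc = [0] * len(paths)
    # effective per-fragment transfer time on path i, inflated by loss rate
    cost = [1.0 / (B * (1.0 - L)) for B, L, T in paths]
    # heap of (finish time if one more fragment is assigned, sender index)
    heap = [(T + cost[i], i) for i, (B, L, T) in enumerate(paths)]
    heapq.heapify(heap)
    for _ in range(P):
        # place the next fragment on the path that would finish earliest
        t, i = heapq.heappop(heap)
        alloc[i] += 1
        heapq.heappush(heap, (t + cost[i], i))
    return alloc
```

For example, with two loss-free paths of bandwidth 10 and 5 and equal delay, three fragments split 2:1 toward the faster path.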
Specifically, the data request includes the number of data fragments sent by each data sending end to the data receiving end.
Specifically, when a plurality of data sending ends store the data required by the data receiving end, the data receiving end synchronously pulls different data fragments from the plurality of sending ends, without exceeding the number of data fragments that its local buffer device can receive at one time, and assembles them at the data receiving end, thereby saving total data transmission time.
Specifically, when the data receiving end sends data requests to a plurality of data sending ends in the information-centric network, the data requests are sent by sequential polling or according to the priority order of the data to be sent by the plurality of data sending ends.
Specifically, when data requests are transmitted by sequential polling, they are transmitted to each data transmitting end in turn, in the order in which the data transmitting ends were discovered.
Specifically, when data requests are sent according to the priority order of the data to be sent by the plurality of data transmitting ends, the order is determined by conditions such as whether the current data transmitting end is idle, whether the fragments to be sent contain high-priority information such as the file header and check information, the bandwidth of the transmission path corresponding to the data transmitting end, whether the sending order of the data fragments is order-sensitive, and explicit markers set by the application.
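The priority conditions above can be illustrated as a compound sort key. This is a hypothetical sketch; the field names (`idle`, `has_header`, `bandwidth`) are illustrative stand-ins for the cues named in the text, not identifiers from the patent:

```python
def request_order(senders):
    """Order sending ends for data requests: idle senders first, then those
    holding high-priority fragments (file header / check information),
    then wider paths first. `senders` is a list of dicts."""
    return sorted(
        senders,
        key=lambda s: (not s["idle"], not s["has_header"], -s["bandwidth"]),
    )
```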
Step S2: after receiving the data request, the data transmitting end sends the data fragments to the data receiving end through the selected transmission path. The data receiving end pulls the data fragments, calculates the difference between the number of data fragments already pulled and the total amount of data it can pull, and, according to the calculated difference and the path state between the data receiving end and the data transmitting ends, sends the data request again to one or more data transmitting ends in the information-centric network; the data transmitting ends then send data fragments to the data receiving end again.
Specifically, when the calculated difference is greater than the number of data fragments that the local caching device corresponding to the data receiving end can receive at one time, and the path state between the data receiving end and the data transmitting end is better than the path state acquired in step S1, the number of data fragments to be sent to the data receiving end by one or more data transmitting ends is increased when the data request is sent again, so that the number of data fragments pulled by the data receiving end in the next round increases. When the calculated difference is smaller than or equal to the number of data fragments that the local caching device can receive at one time, or the path state between the data receiving end and the data transmitting end is worse than the path state acquired in step S1, the number of data fragments to be sent to the data receiving end is reduced when the data request is sent again, so that the number of data fragments pulled in the next round decreases, avoiding the packet loss caused by the data transmitting ends sending too many data fragments to the data receiving end.
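The increase/decrease rule can be sketched as a small function. The additive step size of one fragment and the bounds are assumptions for illustration; the patent does not fix the adjustment amounts:

```python
def next_request_size(remaining, cache_capacity, prev_size, path_improved):
    """remaining: fragments still to pull; cache_capacity: fragments the
    local cache can receive at one time; prev_size: fragments requested
    last round; path_improved: whether the path state got better."""
    if remaining > cache_capacity and path_improved:
        # plenty of data left and a better path: request more, capped by cache
        return min(prev_size + 1, cache_capacity)
    # little data left, or a worse path: back off to avoid receiver overflow
    return max(prev_size - 1, 1)
```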
Step S3: the above steps are repeated. During the repeated pulls, the data receiving end accumulates the number of data fragments pulled; when this sum equals the total amount of data it needs to pull, the data receiving end stops sending data requests to the data transmitting ends and collects and integrates all the received data fragments.
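The repeat-until-complete behaviour of steps S1 to S3 can be condensed into a short loop. This is an illustrative sketch only; `pull_round` is a hypothetical callback standing in for one request/response round:

```python
def pull_all(total_fragments, cache_capacity, pull_round):
    """pull_round(n) performs one request round and returns up to n
    fragments; repeat until all fragments have arrived, then return
    the collected fragments for integration."""
    received = []
    while len(received) < total_fragments:
        # never request more than the local cache can receive at one time
        want = min(cache_capacity, total_fragments - len(received))
        received.extend(pull_round(want))
    return received
```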
As shown in fig. 2, in an embodiment of the present invention, an information-centric network of four nodes comprises three data sending ends (data sending end 1, data sending end 2 and data sending end 3) and one data receiving end. The data receiving end needs to obtain data stored in the network in a distributed manner from data sending ends 1, 2 and 3 respectively, and the specific steps include:
Step S101: acquiring the number of data fragments that the local caching device corresponding to the data receiving end in the information-centric network can receive at one time, and the path states between the data receiving end and data sending ends 1, 2 and 3 in the information-centric network; according to the acquired number of data fragments and path states, sending data requests to data sending ends 1, 2 and 3, either by sequential polling or according to the priority order of the data to be sent by the three sending ends; selecting the transmission paths between data sending ends 1, 2, 3 and the data receiving end and establishing the respective transmission connections; and generating corresponding data receiving queues at the data receiving end;
specifically, the data receiving end forms three logically independent receiving queues according to the number of matched data sending ends. On the one hand, this prevents an abnormality in one queue, such as congestion, packet loss or overflow, from affecting the normal reception of the other queues; on the other hand, when the processing capacity of the data receiving end is suddenly limited by an abnormal condition and the data fragments were pulled in priority mode, the data receiving end preferentially processes the receiving queue that caches high-priority data fragments.
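The logically independent per-sender receiving queues can be sketched as below; the class and method names are illustrative assumptions, not identifiers from the patent:

```python
from collections import deque

class ReceiveQueues:
    """One logically independent queue per matched sending end, so that
    congestion or overflow on one path does not block the others."""
    def __init__(self, sender_ids):
        self.queues = {sid: deque() for sid in sender_ids}

    def enqueue(self, sender_id, fragment):
        self.queues[sender_id].append(fragment)

    def drain_priority_first(self, priority_senders):
        # under sudden load, serve the queues caching high-priority
        # fragments first, then the remaining queues in discovery order
        order = list(priority_senders) + [
            s for s in self.queues if s not in priority_senders
        ]
        for sid in order:
            while self.queues[sid]:
                yield self.queues[sid].popleft()
```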
Step S102: the data receiving end pulls the data fragments and calculates the difference between the number of data fragments already pulled and the total amount of data it can pull. When the calculated difference is greater than the number of data fragments that the local caching device corresponding to the data receiving end can receive at one time, and the path state between the data receiving end and the data sending ends is better than the path state acquired in step S101, the number of data fragments requested from data sending ends 1, 2 and 3 is increased when the data request is sent again; when the calculated difference is smaller than or equal to the number of data fragments that the local caching device can receive at one time, or the path state is worse than the path state acquired in step S101, the number of data fragments requested from data sending ends 1, 2 and 3 is reduced when the data request is sent again;
Step S103: the above steps are repeated. During the repeated pulls, the data receiving end accumulates the number of data fragments pulled; when this sum equals the total amount of data it needs to pull, the data receiving end stops sending data requests to data sending ends 1, 2 and 3 and collects and integrates all the received data fragments.
As shown in fig. 3, the cache-matching-based data active pulling and transmission system of the present invention performs information transmission using the cache-matching-based data active pulling and transmission method described above, and the system comprises:
The data request sending module, used for the data receiving end to search the whole network for one or more node servers that store the required data, i.e., the data sending ends; to acquire the number of data fragments that the local caching device corresponding to the data receiving end in the information-centric network can receive at one time and the path states between the data receiving end and the data sending ends; to match, according to the acquired number of data fragments and path states, the number of data fragments each data sending end is to send to the data receiving end and the transmission path between each data sending end and the data receiving end; and for the data receiving end to send a data request to one or more data sending ends in the information-centric network;
The data pulling module, used for the data sending end to send the data fragments to the data receiving end through the selected transmission path after receiving the data request; for the data receiving end to pull the data fragments, calculate the difference between the number of data fragments already pulled and the total amount of data it can pull, and send the data request again to one or more data sending ends in the information-centric network according to the calculated difference and the path state between the data receiving end and the data sending ends; and for the data sending ends to send data fragments to the data receiving end again;
and the integration processing module, used for the data receiving end to pull the data fragments a plurality of times, accumulating the number of data fragments pulled during the repeated pulls; when this sum equals the total amount of data the data receiving end can pull, the data receiving end stops sending data requests to the data sending ends and collects and integrates all the received data fragments.
It should be noted that the foregoing detailed description is exemplary and is intended to provide further explanation of the application. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the present application. As used herein, the singular is intended to include the plural unless the context clearly indicates otherwise. Furthermore, it will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, steps, operations, devices, components, and/or groups thereof.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or otherwise described herein.
Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Spatially relative terms, such as "above", "over", "on the upper surface of", "on top of", and the like, may be used herein for ease of description to describe the spatial position of one device or feature relative to another device or feature as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as "above" or "over" other devices or structures would then be oriented "below" or "beneath" the other devices or structures. Thus, the exemplary term "above" may include both the orientations "above" and "below". The device may also be positioned in other ways, such as rotated 90 degrees or at other orientations, and the spatially relative descriptors used herein are interpreted accordingly.
In the above detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, like numerals typically identify like components unless context indicates otherwise. The illustrated embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented herein.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. The data active pulling and transmitting method based on cache matching is characterized by comprising the following steps of:
The data receiving end searches a data transmitting end storing the required data in the whole network;
The data receiving end matches the number of the data fragments sent by each data sending end to the data receiving end and the transmission path between each data sending end and the data receiving end according to the number of the data fragments which can be received by the local caching device at one time and the corresponding path state, and sends a data request to one or more data sending ends;
The data sending end sends the data fragments to the data receiving end through a transmission path after receiving the data request, the data receiving end pulls the data fragments, and sends the data request to one or more data sending ends again based on the current pulling capacity and the current path state of the data receiving end, and the data sending end sends the data fragments to the data receiving end again;
And repeating the steps, and when the sum of the number of the data fragments pulled by the data receiving end is equal to the sum of the data amounts which can be pulled and correspond to the data receiving end, stopping sending the data request to the data sending end by the data receiving end, and collecting and integrating all the received data fragments.
2. The cache-matching-based active data pulling and transmission method according to claim 1, wherein the path state includes the available bandwidth, packet loss rate, and transmission delay of the path between the data receiving end and a data transmitting end.
3. The cache-matching-based active data pulling and transmission method according to claim 1, wherein the data request includes the number of data fragments to be transmitted from each data transmitting end to the data receiving end.
4. The cache-matching-based active data pulling and transmission method according to claim 1, wherein, when the data receiving end sends data requests to a plurality of data transmitting ends in the information-centric network, the requests are sent either in a sequential polling manner or in order of the priority of the data to be transmitted by the respective data transmitting ends.
5. The cache-matching-based active data pulling and transmission method according to claim 1, wherein, when a plurality of data transmitting ends store the data required by the data receiving end, the data receiving end pulls different data fragments from the plurality of data transmitting ends simultaneously, provided the number of data fragments the local cache device corresponding to the data receiving end can receive at one time is not exceeded, and assembles the fragments at the data receiving end.
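The parallel multi-source pull of claim 5 can be sketched as below: the receiver fetches disjoint fragment ranges from several senders concurrently, caps the total at the cache's one-time capacity, and reassembles by fragment index. The `stores`/`assignment` dictionaries and the use of `ThreadPoolExecutor` are illustrative assumptions, not part of the patent.

```python
# Sketch of claim 5: pull different fragments from several senders in
# parallel and reassemble them at the receiving end (names hypothetical).
from concurrent.futures import ThreadPoolExecutor

def parallel_pull(stores, assignment, cache_capacity):
    """stores: sender -> {index: fragment bytes};
    assignment: sender -> list of fragment indices to pull from it.
    The total assigned must not exceed cache_capacity (claim 5's premise)."""
    assert sum(len(idx) for idx in assignment.values()) <= cache_capacity

    def fetch(sender, indices):
        # Stand-in for a network pull from one sender.
        return [(i, stores[sender][i]) for i in indices]

    with ThreadPoolExecutor() as ex:
        futures = [ex.submit(fetch, s, idx) for s, idx in assignment.items()]
        pieces = [p for f in futures for p in f.result()]
    # Assemble at the receiving end in fragment order.
    return b"".join(frag for _, frag in sorted(pieces))
```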
6. The cache-matching-based active data pulling and transmission method according to claim 1, wherein the number of data fragments each data transmitting end sends to the data receiving end and the transmission path between each data transmitting end and the data receiving end are matched, according to the number of data fragments the local cache device can receive at one time and the corresponding path states, using the following formula:
;
wherein B_i is the available bandwidth from data transmitting end S_i to data receiving end D, L_i is the packet loss rate from S_i to D, T_i is the transmission delay from S_i to D, P_i is the number of data fragments to be transmitted from S_i to D, n is the total number of data transmitting ends, and P is the number of data fragments the local cache device corresponding to the data receiving end can receive at one time.
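The formula image in claim 6 is not reproduced in this text. The sketch below therefore assumes one plausible reading of the variable definitions — each sender's share P_i is proportional to a path-quality score B_i·(1−L_i)/T_i, with the shares summing to P — and should be read as an illustrative assumption, not the patented formula.

```python
# Assumed interpretation of claim 6's allocation (the exact formula is not
# reproduced in the source text): shares proportional to path quality.
def allocate_fragments(paths, P):
    """paths: list of (B_i, L_i, T_i) per sender; returns P_i per sender,
    with sum(P_i) == P (the cache's one-time capacity)."""
    scores = [B * (1 - L) / T for B, L, T in paths]
    total = sum(scores)
    shares = [int(P * s / total) for s in scores]
    # Distribute any remainder from integer truncation to the best paths.
    remainder = P - sum(shares)
    for i in sorted(range(len(shares)), key=lambda i: scores[i], reverse=True):
        if remainder == 0:
            break
        shares[i] += 1
        remainder -= 1
    return shares
```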
7. The cache-matching-based active data pulling and transmission method according to claim 1, wherein sending the data request again to one or more data transmitting ends based on the current pulling capability and the current path state of the data receiving end comprises:
calculating the difference between the number of data fragments the data receiving end has currently pulled and the total amount of data it can pull, and sending the data request again to one or more data transmitting ends in the information-centric network according to the calculated difference and the path states between the data receiving end and the data transmitting ends.
8. The cache-matching-based active data pulling and transmission method according to claim 7, wherein, when the calculated difference is greater than the number of data fragments the local cache device corresponding to the data receiving end can receive at one time and the path state between the data receiving end and the data transmitting end is better than the path state at the time of the previous data request, the number of data fragments to be transmitted from one or more data transmitting ends to the data receiving end specified in the data request is increased when the data request is sent again.
9. The cache-matching-based active data pulling and transmission method according to claim 7, wherein, when the calculated difference is less than or equal to the number of data fragments the local cache device corresponding to the data receiving end can receive at one time, or the path state between the data receiving end and the data transmitting end is worse than the path state at the time of the previous data request, the number of data fragments to be transmitted from one or more data transmitting ends to the data receiving end specified in the data request is reduced when the data request is sent again.
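The grow/shrink decision of claims 8 and 9 reduces to a comparison of the remaining deficit against the cache's one-time capacity, together with a path-state comparison against the previous request. In this sketch, `path_improved` is a hypothetical boolean summarizing the claim 2 metrics (bandwidth up, loss and delay down); the function name and `step` parameter are likewise illustrative.

```python
# Sketch of the re-request adjustment in claims 8-9 (names hypothetical).
def next_request_size(prev_size, deficit, cache_capacity, path_improved, step=1):
    """deficit: fragments still needed; cache_capacity: fragments the local
    cache can receive at one time; path_improved: path better than at the
    previous request."""
    if deficit > cache_capacity and path_improved:
        return prev_size + step          # claim 8: request more fragments
    return max(1, prev_size - step)      # claim 9: request fewer fragments
```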
10. A cache-matching-based active data pulling and transmission system that transmits information using the cache-matching-based active data pulling and transmission method according to any one of claims 1-9, characterized in that the system comprises:
a data request sending module, used for the data receiving end to search the whole network for data transmitting ends that store the required data, and for the data receiving end, according to the number of data fragments its local cache device can receive at one time and the corresponding path states, to match the number of data fragments each data transmitting end is to send to the data receiving end with the transmission path between that data transmitting end and the data receiving end, and to send a data request to one or more data transmitting ends;
a data pulling module, used for the data transmitting end, after receiving the data request, to send data fragments to the data receiving end over the transmission path, and for the data receiving end to pull the data fragments and, based on its current pulling capability and the current path state, to send the data request again to one or more data transmitting ends, which again send data fragments to the data receiving end;
an integration processing module, used for the data receiving end, after pulling data fragments a plurality of times, to stop sending data requests to the data transmitting ends when the total number of data fragments it has pulled equals the total amount of data it can pull, and to collect and integrate all received data fragments.
CN202311838614.2A 2023-12-28 2023-12-28 Data active pulling and transmitting method and system based on cache matching Pending CN118041863A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311838614.2A CN118041863A (en) 2023-12-28 2023-12-28 Data active pulling and transmitting method and system based on cache matching


Publications (1)

Publication Number Publication Date
CN118041863A true CN118041863A (en) 2024-05-14

Family

ID=90997746




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination