CN104539727A - Cache method and system based on AP platform
- Publication number: CN104539727A (application CN201510020636.7A)
- Authority
- CN
- China
- Prior art keywords
- file
- target file
- matched
- read
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/06—Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/18—File system types
- G06F16/182—Distributed file systems
- G06F16/1834—Distributed file systems implemented based on peer-to-peer networks, e.g. gnutella
- G06F16/1837—Management specially adapted to peer-to-peer storage networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/104—Peer-to-peer [P2P] networks
Abstract
The invention discloses a caching method based on an AP platform. According to a preset service mode, a preset service level, and the user's requirements, a target file is cached in the request server by means of a preset bitmap caching technique. The method comprises the following steps: searching a data store for a data file that matches the target file; when the search result shows that no matching data file exists, accessing the source server that matches the target file by means of a preset infohash algorithm, and finding and determining the peer address and port that match the target file; when a peer address and port matching the target file exist, downloading the target file from the peer address and caching it in the data store; and returning the downloaded target file to the user. The method effectively shortens the response time to user requests and greatly improves the user experience.
Description
Technical field
The present invention relates to mobile communication technology, and in particular to a caching method and system based on an AP platform.
Background technology
BitTorrent: BitTorrent is a content-distribution protocol developed independently by Bram Cohen. It uses an efficient software-distribution system and peer-to-peer (P2P) technology to share large files (such as a movie or television program), and lets every user act as a redistribution node that provides upload service. An ordinary download server provides download service to every user that sends a request; BitTorrent works differently: the distributor or holder of a file sends it to one user, who in turn forwards it to other users, and users forward the file pieces they already hold to one another until every user's download is complete. This approach lets a download server handle download requests for multiple large files simultaneously without consuming massive bandwidth.
AP: a wireless access point (Access Point) is the access point of a wireless network. APs come mainly in two forms: integrated routing-and-switching access devices and pure access-point devices. An integrated device performs both access and routing, while a pure access device is responsible only for wireless client access. Pure access devices are usually used to extend a wireless network, connecting to other APs or to a main AP to expand wireless coverage, whereas an integrated device is generally the core of the wireless network.
With the rapid growth of mobile devices and mobile users, mobile content has become richer and richer: online shopping, live streaming and online video, social networking, and social software have all become popular, and mobile traffic has surged. Hardware, however, has not kept pace, and mobile data remains expensive. Because wireless signals are strongly affected by the environment, they suffer from intermittent connectivity and slow response. Wired networks had similar problems when they were first being popularized, so the solutions found for wired networks inspire a corresponding wireless speed-up. P2P technology on wired networks, exemplified by the appearance of BitTorrent, raised wired transmission to a new level: people no longer complain that the network is slow, resources are shared rapidly, and the value of the Internet has been further upgraded.
The prior art already includes wired-network applications such as the well-known download tools BT, Thunder (Xunlei), eDonkey, and uTorrent, which open an express channel between users and resources and provide services such as pushing popular resources. The schemes vary, but all rest on the basic TCP/IP protocol. For example, companies that cooperate with e-commerce businesses, including video services, take traffic growth as their main technical goal and use traffic as the standard for measuring service quality: the earlier a user's requested content is obtained and cached, the more stable the data the user receives.
However, the prior art also has the following technical problems. Physically, boxes vary in size, and because they are mobile devices they are limited by hardware performance, so platform portability is the first problem to consider. In terms of performance, QoS parameters such as speed and definition must be considered; in technical terms this requires both a high amount of file data transmitted per unit time and low packet loss in message transmission, which together can be judged comprehensively by video definition, and existing boxes fall far short. The various Wi-Fi products currently promoted on the market sacrifice speed for quality, sacrifice quality for speed, or sacrifice size for performance, and none come close to satisfying user demand. Wired networking, developed earlier, is a comparatively mature technology in which issues such as packet loss, message size, memory size, and hard-disk capacity largely no longer need special consideration.
No effective solution has yet been proposed for these problems in the related art.
Summary of the invention
The object of the present invention is to provide a caching method and system based on an AP platform that overcomes the above shortcomings of the prior art.
The object of the invention is achieved through the following technical solutions:
According to one aspect of the present invention, a caching method based on an AP platform is provided. According to a preset service mode and service level, and according to the user's requirements, the method caches a target file in the request server by means of a preset bitmap caching technique, and comprises:
searching a preset data store for a data file that matches the target file;
when the search result shows that no matching data file exists, accessing the source server that matches the target file by means of a preset infohash algorithm, and finding and determining the peer address and port that match the target file;
when a peer address and port matching the target file exist, downloading the target file from the peer address and caching it in the data store;
returning the downloaded target file to the user.
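The four steps above can be sketched as follows. This is a minimal illustration, not the patented implementation; the names `DataStore`, `lookup_peer`, and `download_from_peer` are hypothetical stand-ins for the preset data store, the infohash-based peer lookup, and the peer download.

```python
# Hypothetical sketch of the four-step caching flow described above.
# DataStore, lookup_peer, and download_from_peer are illustrative stand-ins.

class DataStore:
    """A preset data store mapping an infohash to cached file bytes."""
    def __init__(self):
        self._files = {}

    def find(self, infohash):
        return self._files.get(infohash)

    def cache(self, infohash, data):
        self._files[infohash] = data

def serve_request(store, infohash, lookup_peer, download_from_peer):
    # Step 1: search the data store for a matching data file.
    data = store.find(infohash)
    if data is not None:
        return data                      # cache hit: return immediately
    # Step 2: use the infohash to find a matching peer address and port.
    peer = lookup_peer(infohash)
    if peer is None:
        return None                      # no peer: caller reports an error code
    # Step 3: download from the peer and cache the result.
    data = download_from_peer(peer, infohash)
    store.cache(infohash, data)
    # Step 4: return the downloaded target file to the user.
    return data
```

A second request for the same infohash is then served directly from the data store, which is the source of the near-zero response time claimed later in the text.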
Further, caching the target file in the request server also comprises:
receiving and/or sending the target file by a preset thread, and using the thread to check whether the preconfigured message linked list that matches the target file contains a user read/write request;
when a read/write request exists, adding it, according to a preconfigured function, to the read/write structure queue that matches the target file;
checking the preset global variable and adding the target file to the preconfigured variable queue;
adding the read/write structure to the global variable;
classifying the read/write structure according to its type, including reading file content, writing file content, and reading the file directory, and executing the pre-existing operation instruction that matches the read/write structure.
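The queueing and classification steps above can be sketched as follows. The `RWRequest` structure, the handler table, and the function names are hypothetical; the patent does not disclose concrete data structures.

```python
from collections import deque
from dataclasses import dataclass

# Illustrative sketch of moving user read/write requests from the message
# list into a read/write structure queue and classifying each one.

@dataclass
class RWRequest:
    op: str       # "read", "write", or "readdir"
    path: str
    data: bytes = b""

def drain_requests(message_list, rw_queue):
    # Check the message list for pending user read/write requests
    # and move each one into the read/write structure queue.
    while message_list:
        rw_queue.append(message_list.popleft())

def classify_and_execute(rw_queue, handlers):
    # Classify each read/write structure and run the matching operation:
    # reading file content, writing file content, or reading the directory.
    results = []
    while rw_queue:
        req = rw_queue.popleft()
        results.append(handlers[req.op](req))
    return results
```

The handler table plays the role of the "pre-existing operation instruction that matches the read/write structure" in the text.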
Further, caching the target file in the data store also comprises:
when the target file is a video file, segment-caching the video file by means of the bitmap caching technique according to preset video-playback characteristics, and relocating the video message header that matches the video file.
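Segment caching with a bitmap can be sketched as follows: one bit per video segment records whether that segment is already cached, so a byte-range request can be answered from cache segment by segment. The `BitmapCache` name and the segment size are assumptions made for illustration.

```python
class BitmapCache:
    """Sketch of bitmap-based segment caching: one bit per video segment."""
    def __init__(self, file_size, segment_size):
        self.segment_size = segment_size
        n = (file_size + segment_size - 1) // segment_size  # ceiling division
        self.bitmap = [False] * n          # bit i: is segment i cached?
        self.segments = {}                 # segment index -> cached bytes

    def store(self, index, data):
        self.segments[index] = data
        self.bitmap[index] = True

    def has_range(self, offset, length):
        # A byte range is servable only if every segment it touches is cached.
        first = offset // self.segment_size
        last = (offset + length - 1) // self.segment_size
        return all(self.bitmap[first:last + 1])
```

Because each segment is tracked independently, a viewer who drags to an arbitrary time point only needs the segments covering that range, not the whole file.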
Further, searching the preset data store for a data file matching the target file also comprises:
when the search result shows that a matching data file exists, returning the data file to the user.
Further, finding and determining the peer address and port matching the target file also comprises:
when no peer address and port matching the target file exist, sending a preset error code to the preconfigured nginx system.
According to a further aspect of the present invention, a caching system based on an AP platform is provided. According to a preset service mode and service level, and according to the user's requirements, the system caches a target file in the request server by means of a preset bitmap caching technique, and comprises:
a data search module, for searching a preset data store for a data file that matches the target file;
an address search module, for accessing, when the search result shows that no matching data file exists, the source server that matches the target file by means of a preset infohash algorithm, and finding and determining the peer address and port that match the target file;
a download-and-cache module, for downloading the target file from the peer address when a matching peer address and port exist, and caching the target file in the data store;
a data transmission module, for returning the downloaded target file to the user.
Further, caching the target file in the request server also comprises:
a message-request judging submodule, for receiving and/or sending the target file by a preset thread, and using the thread to check whether the preconfigured message linked list that matches the target file contains a user read/write request;
a message-request editing submodule, for adding the read/write request, when one exists, to the read/write structure queue that matches the target file according to a preconfigured function;
a target-file editing submodule, for checking the preset global variable and adding the target file to the preconfigured variable queue;
a global-variable editing submodule, for adding the read/write structure to the global variable;
a classification execution submodule, for classifying the read/write structure according to its type, including reading file content, writing file content, and reading the file directory, and executing the pre-existing operation instruction that matches the read/write structure.
Further, caching the target file in the data store also comprises:
a video segment caching module, for segment-caching the video file, when the target file is a video file, by means of the bitmap caching technique according to preset video-playback characteristics, and relocating the video message header that matches the video file.
Further, searching the preset data store for a data file matching the target file also comprises:
a data sending submodule, for returning the data file to the user when the search result shows that a matching data file exists.
Further, finding and determining the peer address and port matching the target file also comprises:
an error code display module, for sending a preset error code to the preconfigured nginx system when no peer address and port matching the target file exist.
The beneficial effects of the present invention are:
1. The response time to user requests is effectively shortened and the user experience is greatly improved; the time from cache to user can be shortened to almost zero.
2. Multiple threads process the disk read/write task chain: single tasks are integrated into multitasking, so a large number of disk read/write tasks can be handled centrally. Disk I/O operations are separated from socket transmission operations, so disk read/write time and network wait time run in parallel: if the disk read/write time is m and the network wait time is n, the original total m+n is reduced to max(m, n).
3. A timeout-check mechanism eliminates idle tasks, and three kinds of deletion operations save memory space.
4. Neighbor nodes are managed as a doubly linked list, giving twice the search efficiency of a singly linked list at only O(1) additional memory.
5. The prediction function and task-grade classification borrow from the CPU task-processing model; valid messages of types 1/4 are transmitted preferentially and can account for more than 62% of the message total.
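The timing claim in point 2 (overlapping disk reads with socket sends so the total approaches max(m, n) rather than m+n) can be illustrated with a toy producer/consumer pipeline. This is a sketch under assumed timings, not the patented implementation.

```python
import threading
import queue
import time

def pipeline(pieces, disk_time, net_time):
    """Overlap simulated disk reads with simulated network sends.

    With one reader thread feeding a queue, total time approaches
    max(total disk time, total network time) instead of their sum.
    """
    q = queue.Queue()

    def disk_reader():
        for p in pieces:
            time.sleep(disk_time)      # simulated disk read (m per piece)
            q.put(p)
        q.put(None)                    # sentinel: no more pieces

    sent = []
    t = threading.Thread(target=disk_reader)
    t.start()
    while (p := q.get()) is not None:
        time.sleep(net_time)           # simulated socket send (n per piece)
        sent.append(p)
    t.join()
    return sent
```

With five pieces at 10 ms disk and 20 ms network each, the serial bound is 150 ms while the pipelined run finishes in roughly 110 ms, since only the first disk read is not hidden behind a send.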
Brief description of the drawings
To illustrate the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a caching method based on an AP platform according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of target-file classification performed by a caching method based on an AP platform according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
As shown in Figs. 1-2, according to one aspect of the present invention, a caching method based on an AP platform is provided. According to a preset service mode and service level, and according to the user's requirements, the method caches a target file in the request server by means of a preset bitmap caching technique, and comprises:
searching a preset data store for a data file that matches the target file;
when the search result shows that no matching data file exists, accessing the source server that matches the target file by means of a preset infohash algorithm, and finding and determining the peer address and port that match the target file;
when a peer address and port matching the target file exist, downloading the target file from the peer address and caching it in the data store;
returning the downloaded target file to the user.
Here, caching the target file in the request server also comprises:
receiving and/or sending the target file by a preset thread, and using the thread to check whether the preconfigured message linked list that matches the target file contains a user read/write request;
when a read/write request exists, adding it, according to a preconfigured function, to the read/write structure queue that matches the target file;
checking the preset global variable and adding the target file to the preconfigured variable queue;
adding the read/write structure to the global variable;
classifying the read/write structure according to its type, including reading file content, writing file content, and reading the file directory, and executing the pre-existing operation instruction that matches the read/write structure.
In addition, caching the target file in the data store also comprises:
when the target file is a video file, segment-caching the video file by means of the bitmap caching technique according to preset video-playback characteristics, and relocating the video message header that matches the video file.
In addition, searching the preset data store for a data file matching the target file also comprises:
when the search result shows that a matching data file exists, returning the data file to the user.
Finally, finding and determining the peer address and port matching the target file also comprises:
when no peer address and port matching the target file exist, sending a preset error code to the preconfigured nginx system.
According to a further aspect of the present invention, a caching system based on an AP platform is provided. According to a preset service mode and service level, and according to the user's requirements, the system caches a target file in the request server by means of a preset bitmap caching technique, and comprises:
a data search module, for searching a preset data store for a data file that matches the target file;
an address search module, for accessing, when the search result shows that no matching data file exists, the source server that matches the target file by means of a preset infohash algorithm, and finding and determining the peer address and port that match the target file;
a download-and-cache module, for downloading the target file from the peer address when a matching peer address and port exist, and caching the target file in the data store;
a data transmission module, for returning the downloaded target file to the user.
Here, caching the target file in the request server also comprises:
a message-request judging submodule, for receiving and/or sending the target file by a preset thread, and using the thread to check whether the preconfigured message linked list that matches the target file contains a user read/write request;
a message-request editing submodule, for adding the read/write request, when one exists, to the read/write structure queue that matches the target file according to a preconfigured function;
a target-file editing submodule, for checking the preset global variable and adding the target file to the preconfigured variable queue;
a global-variable editing submodule, for adding the read/write structure to the global variable;
a classification execution submodule, for classifying the read/write structure according to its type, including reading file content, writing file content, and reading the file directory, and executing the pre-existing operation instruction that matches the read/write structure.
In addition, caching the target file in the data store also comprises:
a video segment caching module, for segment-caching the video file, when the target file is a video file, by means of the bitmap caching technique according to preset video-playback characteristics, and relocating the video message header that matches the video file.
In addition, searching the preset data store for a data file matching the target file also comprises:
a data sending submodule, for returning the data file to the user when the search result shows that a matching data file exists.
Finally, finding and determining the peer address and port matching the target file also comprises:
an error code display module, for sending a preset error code to the preconfigured nginx system when no peer address and port matching the target file exist.
In a specific application, user tasks are first divided into four service modes, TW_TASK, UP_TASK, LIVE_TASK, and USER_TASK, each provided with its own TASK service level;
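The four service modes just named can be represented, for example, as an enumeration mapped to service grades. The concrete grade values below are invented for illustration; the text names the four modes but does not disclose their levels.

```python
from enum import Enum

class TaskMode(Enum):
    TW_TASK = "tw"
    UP_TASK = "up"
    LIVE_TASK = "live"
    USER_TASK = "user"

# Hypothetical grade table: each mode gets its own TASK service level,
# but the concrete ordering here is an assumption (lower = higher priority).
TASK_GRADE = {
    TaskMode.LIVE_TASK: 0,   # e.g. live streams served first
    TaskMode.USER_TASK: 1,
    TaskMode.TW_TASK: 2,
    TaskMode.UP_TASK: 3,
}

def schedule(tasks):
    """Order tasks by their service grade (lower value = higher priority)."""
    return sorted(tasks, key=lambda t: TASK_GRADE[t])
```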
In addition, the two caching approaches, http and p2p, are combined to cache files of particular types;
Moreover, the file caching technique of the present invention adopts the bitmap technique;
Further, a thread is used to receive and send messages, separating sending and receiving from disk reads. The implementation uses a global variable to hold a message linked list; the thread's inner loop checks whether this list contains file read/write requests, while outside the thread, events drive the sending and receiving of messages.
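The thread-plus-global-variable pattern just described can be sketched as follows: a shared message list protected by a lock, a worker thread whose inner loop drains file read/write requests, and event-driven producers running outside the thread. All names are illustrative.

```python
import threading
from collections import deque

# Sketch of the described pattern: a global message list, a worker thread
# whose inner loop checks the list for read/write requests, and producers
# outside the thread that add requests on events.

message_list = deque()
list_lock = threading.Lock()
stop_event = threading.Event()
handled = []

def worker():
    # Inner loop: check whether the message list has read/write requests.
    while not stop_event.is_set() or message_list:
        with list_lock:
            req = message_list.popleft() if message_list else None
        if req is not None:
            handled.append(("done", req))   # simulated disk read/write

def submit(req):
    # Event-driven producer: runs outside the worker thread.
    with list_lock:
        message_list.append(req)
```

Because only the worker touches the disk, socket-facing code never blocks on I/O, which is the separation of disk reads from sending and receiving described above.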
Most importantly, according to video-playback characteristics, video isolation and mapping are added: the video file is cached in segments, and by re-constructing the video message header, the number of video cache segments is reduced to ten; the larger the file, the better the effect.
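A minimal sketch of this segmentation idea: the file is split into a fixed number of segments and the metadata header is rewritten ("relocated") so playback can resume from any cached segment. The ten-way split follows the text; the header format is invented for illustration.

```python
def segment_video(data, parts=10):
    """Split a video payload into a fixed number of cacheable segments,
    matching the text's reduction of a video to ten cache segments."""
    size = len(data)
    step = -(-size // parts)          # ceiling division: bytes per segment
    return [data[i:i + step] for i in range(0, size, step)]

def rebuild_header(header, segment_index, segment_offset):
    """Illustrative 'relocated' message header: it points the player at
    the segment being served instead of the start of the whole file."""
    new = dict(header)
    new["segment"] = segment_index
    new["offset"] = segment_offset
    return new
```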
First, the following operations are performed according to the storage state of the video in the box:
Drag test steps:
1) open the test video in Safari; 2) wait for the video to start playing; 3) drag the video to an arbitrary unwatched time point and observe the loading state; 4) check the log to determine whether the video is loaded from the box; 5) drag to an already-watched part and check the loading state (response speed/video quality); 6) drag intensively to random time points and check the loading state.
Compared with the prior art, the test results obtained from the above operations have the following characteristics:
1) cached files load extremely fast;
2) tests in a wireless environment may be interrupted by network problems;
3) intensive dragging responds quickly, with no stuttering;
4) according to the log feedback, cached files respond almost within one second;
5) when dragging to an already-watched part, no loading spinner appears when cookies are enabled;
6) intensive dragging response status is good, with instant response.
In summary, by means of the above technical solution, the present invention effectively shortens the response time to user requests and greatly improves the user experience; the time from cache to user can be shortened to almost zero;
Moreover, multiple threads process the disk read/write task chain: single tasks are integrated into multitasking, so a large number of disk read/write tasks can be handled centrally; disk I/O operations are separated from socket transmission operations, so disk read/write time and network wait time run in parallel: if the disk read/write time is m and the network wait time is n, the original total m+n is reduced to max(m, n);
A timeout-check mechanism eliminates idle tasks, and three kinds of deletion operations save memory space;
Neighbor nodes are managed as a doubly linked list, giving twice the search efficiency of a singly linked list at only O(1) additional memory;
The prediction function and task-grade classification borrow from the CPU task-processing model; valid messages of types 1/4 are transmitted preferentially and can account for more than 62% of the message total, effectively improving the quality and efficiency of data transmission and favoring market promotion and application.
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within its scope of protection.
Claims (10)
1. A caching method based on an AP platform, characterized by comprising the following steps:
searching a preset data store for a data file that matches the target file;
when the search result shows that no matching data file exists, accessing the source server that matches the target file by means of a preset infohash algorithm, and finding and determining the peer address and port that match the target file;
when a peer address and port matching the target file exist, downloading the target file from the peer address and caching it in the data store;
returning the downloaded target file to the user.
2. The caching method based on an AP platform according to claim 1, characterized in that caching the target file in the request server further comprises:
receiving and/or sending the target file by a preset thread, and using the thread to check whether the preconfigured message linked list that matches the target file contains a user read/write request;
when a read/write request exists, adding it, according to a preconfigured function, to the read/write structure queue that matches the target file;
checking the preset global variable and adding the target file to the preconfigured variable queue;
adding the read/write structure to the global variable;
classifying the read/write structure according to its type, including reading file content, writing file content, and reading the file directory, and executing the pre-existing operation instruction that matches the read/write structure.
3. The caching method based on an AP platform according to claim 2, characterized in that caching the target file in the data store further comprises:
when the target file is a video file, segment-caching the video file by means of the bitmap caching technique according to preset video-playback characteristics, and relocating the video message header that matches the video file.
4. The caching method based on an AP platform according to claim 3, characterized in that searching the preset data store for a data file matching the target file further comprises:
when the search result shows that a matching data file exists, returning the data file to the user.
5. The caching method based on an AP platform according to claim 4, characterized in that finding and determining the peer address and port matching the target file further comprises:
when no peer address and port matching the target file exist, sending a preset error code to the preconfigured nginx system.
6. A caching system based on an AP platform, characterized by comprising:
a data search module, for searching a preset data store for a data file that matches the target file;
an address search module, for accessing, when the search result shows that no matching data file exists, the source server that matches the target file by means of a preset infohash algorithm, and finding and determining the peer address and port that match the target file;
a download-and-cache module, for downloading the target file from the peer address when a matching peer address and port exist, and caching the target file in the data store;
a data transmission module, for returning the downloaded target file to the user.
7. The caching system based on an AP platform according to claim 6, characterized in that caching the target file in the request server further comprises:
a message-request judging submodule, for receiving and/or sending the target file by a preset thread, and using the thread to check whether the preconfigured message linked list that matches the target file contains a user read/write request;
a message-request editing submodule, for adding the read/write request, when one exists, to the read/write structure queue that matches the target file according to a preconfigured function;
a target-file editing submodule, for checking the preset global variable and adding the target file to the preconfigured variable queue;
a global-variable editing submodule, for adding the read/write structure to the global variable;
a classification execution submodule, for classifying the read/write structure according to its type, including reading file content, writing file content, and reading the file directory, and executing the pre-existing operation instruction that matches the read/write structure.
8. The caching system based on an AP platform according to claim 7, characterized in that caching the target file in the data store further comprises:
a video segment caching module, for segment-caching the video file, when the target file is a video file, by means of the bitmap caching technique according to preset video-playback characteristics, and relocating the video message header that matches the video file.
9. The caching system based on an AP platform according to claim 8, characterized in that searching the preset data store for a data file matching the target file further comprises:
a data sending submodule, for returning the data file to the user when the search result shows that a matching data file exists.
10. The caching system based on the AP platform according to claim 9, characterized in that searching for and determining the peer address and port matched with the target file further comprises:
Error code display module, configured to send the preset error code to the preconfigured nginx system when no peer address and port matched with the target file exist.
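The miss path of claim 10 can be sketched as follows. The error code value and the shape of the peer table are assumptions; the patent specifies neither, and the tuple return merely stands in for whatever the fronting nginx system actually receives.

```python
PEER_NOT_FOUND = 404  # hypothetical preset error code; the patent leaves the value unspecified


def resolve_peer(peer_table: dict, target: str):
    """Look up the peer address and port matched with the target file;
    on a miss, surface the preset error code for the nginx system."""
    entry = peer_table.get(target)
    if entry is None:
        return ("error", PEER_NOT_FOUND)
    host, port = entry
    return ("peer", host, port)
```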
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510020636.7A CN104539727A (en) | 2015-01-15 | 2015-01-15 | Cache method and system based on AP platform |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104539727A true CN104539727A (en) | 2015-04-22 |
Family
ID=52855194
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510020636.7A Pending CN104539727A (en) | 2015-01-15 | 2015-01-15 | Cache method and system based on AP platform |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104539727A (en) |
2015-01-15: CN application CN201510020636.7A filed (published as CN104539727A), status: Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101026543A (en) * | 2007-03-28 | 2007-08-29 | 华为技术有限公司 | Point-to-point P2P content sharing method and system
CN101106503A (en) * | 2007-08-31 | 2008-01-16 | 华为技术有限公司 | Autonomous method for peer-to-peer network, node device and system |
EP2216958A1 (en) * | 2009-02-10 | 2010-08-11 | Alcatel Lucent | Method and device for reconstructing torrent content metadata |
CN102655512A (en) * | 2011-03-01 | 2012-09-05 | 腾讯科技(深圳)有限公司 | Mobile equipment-based downloading method and system |
CN102664938A (en) * | 2012-04-12 | 2012-09-12 | 北京蓝汛通信技术有限责任公司 | Method and device for controlling downloading of resources |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105262680A (en) * | 2015-10-21 | 2016-01-20 | 浪潮(北京)电子信息产业有限公司 | Multi-threaded NAS Gateway applied to cloud storage system |
CN106331125A (en) * | 2016-08-29 | 2017-01-11 | 迈普通信技术股份有限公司 | File downloading method and apparatus |
WO2018127013A1 (en) * | 2017-01-03 | 2018-07-12 | 北京奇虎科技有限公司 | Method and device for concurrent transmission of stream data |
WO2018153202A1 (en) * | 2017-02-21 | 2018-08-30 | 中兴通讯股份有限公司 | Data caching method and apparatus |
US11226898B2 (en) | 2017-02-21 | 2022-01-18 | Zte Corporation | Data caching method and apparatus |
CN113542373A (en) * | 2021-06-30 | 2021-10-22 | 深圳市云网万店电子商务有限公司 | Routing service discovery device and method for PAAS platform |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10083177B2 (en) | Data caching among interconnected devices | |
US10999215B2 (en) | Software-defined network-based method and system for implementing content distribution network | |
US8756296B2 (en) | Method, device and system for distributing file data | |
US8068512B2 (en) | Efficient utilization of cache servers in mobile communication system | |
US9660922B2 (en) | Network assisted rate shifting for adaptive bit rate streaming | |
US7684396B2 (en) | Transmission apparatus having a plurality of network interfaces and transmission method using the same | |
CN104539727A (en) | Cache method and system based on AP platform | |
JP2018517341A (en) | System for improved mobile internet speed and security | |
CN103024593A (en) | Online VOD (video on demand) acceleration system and online VOD playing method | |
US9774651B2 (en) | Method and apparatus for rapid data distribution | |
US10063893B2 (en) | Controlling the transmission of a video data stream over a network to a network user device | |
US20130326133A1 (en) | Local caching device, system and method for providing content caching service | |
CN109600388A (en) | Data transmission method, device, computer-readable medium and electronic equipment | |
CN108932277B (en) | Webpage loading method, webpage loading system and server | |
US20170311209A1 (en) | Hypertext transfer protocol support over hybrid access | |
CN103001964A (en) | Cache acceleration method under local area network environment | |
US8000720B2 (en) | Reducing bandwidth when transmitting content to a cellular device | |
CN103125108A (en) | System and method of establishing transmission control protocol connections | |
CN106330994A (en) | User message publishing method and system | |
CN105338654A (en) | Network sharing method, apparatus and system | |
WO2023246488A1 (en) | Content providing method and apparatus | |
KR20170040739A (en) | Requesting and receiving a media stream within a networked system | |
US20040107293A1 (en) | Program obtainment method and packet transmission apparatus | |
KR20130134911A (en) | Method for providing content caching service in adapted streaming service and local caching device thereof | |
KR101632068B1 (en) | Method, system and computer-readable recording medium for transmitting contents by using unique indentifier of contents |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20150422 |