CN105791366A - Large file HTTP-Range downloading method, cache server and system - Google Patents

Large file HTTP-Range downloading method, cache server and system

Info

Publication number
CN105791366A
Authority
CN
China
Prior art keywords
entity
range
request
source station
scope
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410827530.3A
Other languages
Chinese (zh)
Other versions
CN105791366B (en)
Inventor
汤新
侯光华
李建军
广小明
汪鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi Cloud Technology Co Ltd
Original Assignee
China Telecom Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Telecom Corp Ltd
Priority to CN201410827530.3A
Publication of CN105791366A
Application granted
Publication of CN105791366B
Legal status: Active (current)
Anticipated expiration

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a large file HTTP-Range downloading method, a cache server and a system. When a Range request is received from a client, the cache server first checks whether the whole entity is stored locally and, if so, responds directly. If the whole entity is not stored, it checks whether a locally stored entity covers a range greater than or equal to the requested range and, if so, responds from it. Otherwise it judges whether the locally stored partial entities satisfy a threshold condition: if they do, it requests only part of the range from the origin server and responds with the help of the local content; if not, it requests the whole range from the origin server. The cache server's response to Range requests is thereby optimized, and the pressure on the origin server is reduced.

Description

Large file HTTP-Range downloading method, cache server and system
Technical field
The present invention relates to the field of large-file downloading, and in particular to a large file HTTP-Range downloading method, cache server and system.
Background
With the rapid development of the Internet, the amount of information transmitted over the Internet grows explosively, following Moore's law; and as high-definition, ultra-high-definition and Blu-ray content becomes more common, the average size of a single file also keeps increasing.
While files grow larger, users' tolerance for delay keeps falling, so the vast majority of current players and download tools use multi-threading: they open multiple threads, each sending an HTTP request for only a part of the resource entity, i.e. a range (Range) request, and finally merge the parts downloaded by the individual threads, greatly reducing download time.
But while multi-threading brings speed, it also puts enormous pressure on the server, especially for burst hot-spot content: because the responses to the many "fragment" requests are incomplete entities, current caching techniques do not cache such "fragments".
For example, as shown in Fig. 1, when many clients issue multi-threaded Range requests to a cache server and the cache server does not hold the requested entity, it forwards every Range request unchanged back to the origin server, fetches the partial entity for each request, and caches none of the Range responses. While forwarding the requests, the cache server also issues a full (non-Range) request to the origin server, and the response to that request is cached.
In this process, a burst hot-spot severely impacts the origin server, and the content delivery network (CDN) cannot help; it may even make matters worse. Each Range request goes all the way back to the origin, an extra whole-file request is added on top, the cache server retries failed requests for reliability, and cache servers usually sit in Internet data center (IDC) networks with very fast links; so as the number of requests grows, the back-to-origin bandwidth is amplified, which amplifies the pressure on the origin server.
Summary of the invention
The technical problem to be solved by the present invention is that the current HTTP protocol and caching techniques cannot relieve the pressure that a large burst of Range requests places on the origin server.
According to one aspect of the present invention, a large file HTTP-Range downloading method is proposed, including:
receiving a Range request from a client;
the cache server checking whether the whole entity is stored locally and, if so, responding directly;
if the whole entity is not stored, checking whether an entity covering a range greater than or equal to the requested range is stored locally; if so, responding; otherwise, initiating a request to the origin server.
Further, checking whether an entity within the requested Range is stored locally; if not, initiating a Range request to the origin server to obtain the Range entity, and initiating a request to the origin server for the whole entity to obtain the whole entity.
Further, checking whether an entity within the requested Range is stored locally; if so, checking whether the requested range is greater than a threshold, whether the existing ranged entities are composed of fewer than N pieces, and/or whether the existing ranged entities exceed half of the requested range; if so, the cache server initiating Range requests to the origin server for only the ranges it does not yet have, and merging the obtained out-of-range entities with the entities in the existing ranges;
otherwise, the cache server initiating a Range request to the origin server to obtain the Range entity, and initiating a request to the origin server for the whole entity to obtain the whole entity.
Further, when multiple Range requests are received from clients, if the requested ranges are adjacent or overlapping, merging the multiple Range requests, initiating a Range request for the maximum range to the origin server, and answering each request separately after the maximum-range entity has been fetched.
Further, dividing the whole entity into multiple logical data blocks, and recording each Range entity as a sub-block of a logical data block;
when the cache server caches multiple entities whose regions are adjacent or overlapping, merging the adjacent or overlapping parts;
when an entity's range spans logical data blocks, raising the upper bound of the preceding logical data block, lowering the lower bound of the following logical data block, and merging the overlapping parts of the logical data blocks;
when the whole entity is obtained or multiple entities are merged into the whole entity, replacing the saved individual entities.
Further, doubly linked lists are used to link the logical data blocks and/or the sub-blocks.
According to another aspect of the present invention, a cache server is also proposed, including:
a receiving module for receiving a Range request from a client;
a processing module for checking whether the whole entity is stored locally and, if so, responding directly; if the whole entity is not stored, checking whether an entity covering a range greater than or equal to the requested range is stored locally and, if so, responding; otherwise, initiating a request to the origin server.
Further, the processing module is configured to check whether an entity within the requested Range is stored locally; if not, to initiate a Range request to the origin server to obtain the Range entity, and to initiate a request to the origin server for the whole entity to obtain the whole entity.
Further, the processing module is configured to check whether an entity within the requested Range is stored locally; if so, to check whether the requested range is greater than a threshold, whether the existing ranged entities are composed of fewer than N pieces, and/or whether the existing ranged entities exceed half of the requested range; if so, to initiate Range requests to the origin server for only the ranges it does not yet have and to merge the obtained out-of-range entities with the entities in the existing ranges; otherwise, to initiate a Range request to the origin server to obtain the Range entity, and to initiate a request to the origin server for the whole entity to obtain the whole entity.
Further, the processing module is configured, when multiple Range requests are received from clients and the requested ranges are adjacent or overlapping, to merge the multiple Range requests, to initiate a Range request for the maximum range to the origin server, and to answer each request separately after the maximum-range entity has been fetched.
Further, a cache module is provided for dividing the file into multiple logical data blocks and recording each Range entity as a sub-block of a logical data block; when multiple cached entities have adjacent or overlapping regions, merging the adjacent or overlapping parts; when an entity's range spans logical data blocks, raising the upper bound of the preceding logical data block, lowering the lower bound of the following logical data block, and merging the overlapping parts of the logical data blocks; and when the whole entity is obtained or multiple entities are merged into the whole entity, replacing the saved individual entities.
Further, doubly linked lists are used to link the logical data blocks and/or the sub-blocks.
According to a further aspect of the present invention, a large file HTTP-Range downloading system is also proposed, including any of the cache servers described above and an origin server.
Compared with the prior art, the present invention can cache partial entities locally, and when the range of a cached partial entity is greater than or equal to the requested Range, the cache server can respond directly without initiating another request to the origin server. The cache server's response to Range requests is thus optimized and the pressure on the origin server is reduced.
Further features and advantages of the present invention will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief description of the drawings
The accompanying drawings, which form part of the description, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
The present invention can be understood more clearly from the following detailed description with reference to the accompanying drawings, in which:
Fig. 1 is a schematic diagram of a prior-art file download request/response process.
Fig. 2 is a schematic flowchart of an embodiment of the large file HTTP-Range downloading method of the present invention.
Fig. 3 is a schematic structural diagram of an embodiment of file storage according to the present invention.
Fig. 4 is a schematic structural diagram of an embodiment of the local storage index of a file according to the present invention.
Fig. 5 is a schematic structural diagram of an embodiment of the cache server of the present invention.
Fig. 6 is a schematic flowchart of a specific embodiment of the HTTP-Range downloading method of the present invention.
Fig. 7 is a schematic flowchart of a specific embodiment of the HTTP-Range downloading system of the present invention.
Detailed description of the invention
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that, unless otherwise stated, the relative arrangement of components and steps, the numerical expressions and the numerical values set forth in these embodiments do not limit the scope of the invention.
At the same time, it should be understood that, for ease of description, the sizes of the parts shown in the drawings are not drawn to actual scale.
The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the invention or its application or use.
Techniques, methods and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate they should be regarded as part of the description.
In all examples shown and discussed here, any specific value should be interpreted as merely illustrative rather than limiting; other examples of the exemplary embodiments may therefore have different values.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; once an item has been defined in one drawing, it need not be discussed further in subsequent drawings.
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with specific embodiments and with reference to the accompanying drawings.
Fig. 2 is a schematic flowchart of an embodiment of the large file HTTP-Range downloading method of the present invention. The method includes:
In step 210, a Range request from a client is received.
The Range request contains the requested entity range, i.e. the start and end byte positions, as well as the size of the whole entity, for example: Content-Range: bytes [first-byte-pos]-[last-byte-pos]/[entity-length].
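As an illustration only (not part of the patent text), the following minimal Python sketch parses a byte-range value of this form; the function name and error handling are assumptions:

```python
import re

def parse_content_range(value: str):
    """Parse a 'bytes first-last/length' value into (first, last, entity_length)."""
    m = re.fullmatch(r"\s*bytes\s+(\d+)-(\d+)/(\d+)\s*", value)
    if not m:
        raise ValueError("not a byte-range value: %r" % value)
    first, last, length = (int(g) for g in m.groups())
    if first > last or last >= length:
        raise ValueError("inconsistent byte-range value: %r" % value)
    return first, last, length

# Example from the description below: bytes 52396032-52418066 of a 52418067-byte entity
print(parse_content_range("bytes 52396032-52418066/52418067"))  # (52396032, 52418066, 52418067)
```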
In step 220, the cache server checks whether the whole entity is stored locally; if so, step 230 is performed; otherwise, step 240 is performed.
In step 230, the cache server responds directly.
In step 240, the cache server checks whether an entity covering a range greater than or equal to the requested range is stored locally; if so, step 250 is performed; otherwise, step 260 is performed.
In step 250, the cache server responds.
For example, the requested range is:
Content-Range: bytes 52396032-52418066/52418067, and the local cache holds an entity covering 50000000-52418066; the cache server then extracts the content at byte positions 2396032 to 2418066 within that entity and responds with it.
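A minimal sketch of this extraction step, assuming the cached partial entity is held as a byte buffer whose absolute start offset is known (the helper name is an assumption, not the patent's):

```python
def slice_from_cached(cached_start: int, cached_bytes: bytes,
                      req_first: int, req_last: int) -> bytes:
    """Extract the absolute byte range [req_first, req_last] from a cached partial entity.

    cached_start is the absolute offset of cached_bytes[0] within the whole file,
    e.g. 50000000; for a request starting at 52396032 the local offset is 2396032.
    """
    if req_first < cached_start or req_last >= cached_start + len(cached_bytes):
        raise ValueError("cached entity does not cover the requested range")
    lo = req_first - cached_start
    hi = req_last - cached_start + 1   # inclusive end position -> exclusive slice bound
    return cached_bytes[lo:hi]

# Small synthetic usage: cached bytes start at absolute offset 100
print(slice_from_cached(100, b"0123456789", 103, 105))  # b'345'
```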
In step 260, the cache server initiates a request to the origin server.
Here, the entity is the data content.
In this embodiment, partial entities can be cached locally, and when the range of a cached partial entity is greater than or equal to the requested Range, the cache server can respond directly without initiating another request to the origin server. The cache server's handling of Range requests is thus optimized and the pressure on the origin server is reduced.
The process of initiating a request to the origin server in step 260 is described in detail below.
In one embodiment of the present invention, the cache server checks whether an entity within the requested Range is stored locally; if not, it initiates a Range request to the origin server to obtain the Range entity, and also initiates a request to the origin server for the whole entity to obtain the whole entity.
The cache server sends the Range entity to the client and also caches it on the local hard disk.
In this embodiment, if no entity within the requested Range is stored, the cache server obtains the Range entity from the origin server to answer the client's request, and also obtains the whole entity.
In another embodiment of the present invention, the cache server checks whether an entity within the requested Range is stored locally; if so, it checks whether the requested range is greater than a threshold, whether the existing ranged entities are composed of fewer than N pieces, and/or whether the existing ranged entities exceed half of the requested range; if so, the cache server initiates Range requests to the origin server for only the ranges it does not yet have and merges the obtained out-of-range entities with the entities in the existing ranges; otherwise, the cache server initiates a Range request to the origin server to obtain the Range entity, and also initiates a request to the origin server for the whole entity to obtain the whole entity.
In this embodiment, Range requests can be issued to the origin server for only the missing ranges and the obtained out-of-range entities merged with the entities in the existing ranges, which reduces the number of bytes fetched from the origin server, reduces the pressure on the origin server and speeds up the response.
The following embodiments illustrate how to decide whether to go back to the origin server when an entity within the requested Range is stored locally.
First embodiment
Check whether the requested range is greater than a threshold, where the threshold is calculated as min(D/10^6, 20B),
where D is the total hard disk size and B is the file system block size, i.e. the size of the physical storage block set by the underlying system, generally in the range [512, 65535].
When the Range request is smaller than the threshold, the cache server goes directly back to the origin server. For example, if only a single byte is requested, fetching the Range entity directly from the origin server takes less time than locating the content in this system, issuing Range requests for the missing ranges and merging the obtained out-of-range entities with the entities in the existing ranges; therefore, when the Range request is smaller than the threshold, the cache server goes directly back to the origin server.
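A minimal sketch of this threshold under the formula stated above (an illustration; the parameter names are assumptions and the units follow the description):

```python
def back_to_source_threshold(total_disk_size: int, fs_block_size: int) -> float:
    """Threshold min(D/10^6, 20B); Range requests smaller than this go straight to the origin."""
    return min(total_disk_size / 10**6, 20 * fs_block_size)

# e.g. a 2 TB disk (D = 2*10^12) with 4096-byte file system blocks
print(back_to_source_threshold(2 * 10**12, 4096))  # min(2000000, 81920) = 81920.0
```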
Second embodiment
Check whether the existing ranged entities are composed of fewer than N pieces, where N is calculated as max(Mf/10^4, 3S/10^4),
where Mf is the free memory and S is the size of the requested Range.
For example, when N is calculated to be 8, the requested Range is 0-1000, and the locally stored fragmentary Range entities consist of the ten ranges 0-100, 101-200, ..., 901-1000, the cache server goes directly back to the origin server.
Because each entity may currently be in use, adjacent entities may not yet have been merged; if a client issues a Range request, the response has to be assembled piece by piece, which may take longer than going straight back to the origin server. Considering server pressure and cost-effectiveness, the rule is therefore that if the existing entities consist of more than N pieces, the cache server goes directly back to the origin server. Those skilled in the art will understand that this is only an example and should not be construed as limiting the present invention.
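A minimal sketch of this fragment-count check (an illustration; the function names are assumptions and the units of Mf and S follow the description):

```python
def fragment_limit(free_memory: int, request_size: int) -> int:
    """N = max(Mf/10^4, 3S/10^4); more than N cached pieces means going back to the origin."""
    return int(max(free_memory / 10**4, 3 * request_size / 10**4))

def too_fragmented(cached_pieces, free_memory: int, request_size: int) -> bool:
    """cached_pieces: inclusive (start, end) sub-ranges that fall inside the requested range."""
    return len(cached_pieces) > fragment_limit(free_memory, request_size)

# Example matching the description: N = 8, request 0-1000 covered by ten small fragments
pieces = [(0, 100)] + [(i * 100 + 1, (i + 1) * 100) for i in range(1, 10)]  # 0-100, 101-200, ..., 901-1000
print(fragment_limit(80000, 1000))            # 8
print(too_fragmented(pieces, 80000, 1000))    # True -> go back to the origin
```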
Third embodiment
Check whether the existing ranged entities exceed half of the requested range by summing the sizes of the existing ranges: when ∑_i R_i < S/2, the cache server goes directly back to the origin server,
where R_i denotes the size of the range of the i-th existing record and S is the size of the requested Range.
For example, if the request is for the range 0-1000 and the local cache only holds the entities 0-100, 201-300 and 401-500, the existing ranges amount to only 300 out of the requested 1000, so the existing ranges cover less than half of the requested range and the cache server goes directly back to the origin server. If it instead fetched the locally available 300, went back to the origin server for the remaining (at most 700) and then assembled the response, this might well be less efficient than simply fetching the 1000-byte entity from the origin server and responding directly.
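A minimal sketch of this coverage check (an illustration; range sizes are counted as end minus start, matching the arithmetic of the example above):

```python
def covers_more_than_half(cached_pieces, request_size: int) -> bool:
    """True when the summed size of the cached sub-ranges exceeds half of the requested size."""
    covered = sum(end - start for start, end in cached_pieces)
    return covered > request_size / 2

# Example from the description: request 0-1000, local cache holds 0-100, 201-300 and 401-500
print(covers_more_than_half([(0, 100), (201, 300), (401, 500)], 1000))  # False -> go back to the origin
```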
In these embodiments of the present invention, the above checks are used to decide whether to initiate a request to the origin server, further reducing the pressure on the origin server.
In another embodiment of the present invention, when multiple Range requests are received from clients and the requested ranges are adjacent or overlapping, the multiple Range requests are merged, a Range request for the maximum range is initiated to the origin server, and each request is answered separately after the maximum-range entity has been fetched.
For example, when multiple Range requests have the same start position but possibly different end positions, and the cache server has cached neither the whole file nor any part of it, the present invention optimizes the back-to-origin policy: only the Range request with the largest range among these requests is sent back to the origin server, rather than all of them, which greatly reduces the pressure on the origin server.
In this embodiment, multiple Range requests are merged, a Range request for the maximum range is initiated to the origin server, and each request is answered separately after the maximum-range entity has been fetched. The number of Range requests is therefore reduced, which further reduces the pressure on the origin server.
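A minimal sketch of coalescing adjacent or overlapping client ranges into the maximum ranges that are sent back to the origin server (plain interval merging; the function name is an assumption):

```python
def merge_ranges(ranges):
    """Coalesce adjacent or overlapping inclusive (start, end) ranges.

    The merged ranges are what the cache server would request from the origin server;
    each original client request is then answered from the fetched data.
    """
    merged = []
    for start, end in sorted(ranges):
        if merged and start <= merged[-1][1] + 1:     # overlapping or adjacent
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# Example ranges as in the specific embodiment described later: 100-1000, 100-300, 200-500, 500-1000, 200-1000
print(merge_ranges([(100, 1000), (100, 300), (200, 500), (500, 1000), (200, 1000)]))  # [(100, 1000)]
```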
Fig. 3 is a schematic structural diagram of an embodiment of file storage according to the present invention.
As shown in Fig. 3, in the caching system, in order to avoid frequently moving files when ranges grow, improve efficiency and reduce I/O, the whole entity is divided into multiple logical data blocks, each Range entity is recorded as a sub-block of a logical data block, and doubly linked lists are used to link the logical data blocks and/or the sub-blocks. The doubly linked lists make insertion, deletion, modification and lookup convenient.
When the cache server caches multiple entities whose regions are adjacent or overlapping, the adjacent or overlapping parts are merged, in order to reduce fragmentation and improve response efficiency.
For example, the value associated with the file's hash key points to the address of the first entity of the file; after multiple entities have been merged into the whole entity, the value associated with the file's hash key is changed to the start address of the whole file and marked as the address of the whole entity, and the linked-list structures of the individual entities and responses are deleted. For burst hot-spot content, the cache hierarchy can cache each Range entity within a short period of time and use it to answer later Range requests.
When an entity's range spans logical data blocks, the upper bound of the preceding logical data block is raised, the lower bound of the following logical data block is lowered, and the overlapping parts of the logical data blocks are merged, improving storage space utilization.
After the whole entity has been obtained, or after multiple entities have been merged into the whole entity, the saved individual entities are replaced, further improving storage space utilization.
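A minimal sketch of this logical-block and sub-block bookkeeping (an illustration using a plain Python list per block instead of explicit doubly linked lists; the class and field names are assumptions):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LogicalBlock:
    lower: int                                                        # block lower bound (inclusive)
    upper: int                                                        # block upper bound (inclusive)
    sub_blocks: List[Tuple[int, int]] = field(default_factory=list)   # cached Range entities

    def add_range(self, start: int, end: int) -> None:
        """Record a Range entity as a sub-block and merge adjacent or overlapping parts."""
        self.sub_blocks.append((start, end))
        merged = []
        for s, e in sorted(self.sub_blocks):
            if merged and s <= merged[-1][1] + 1:
                merged[-1] = (merged[-1][0], max(merged[-1][1], e))
            else:
                merged.append((s, e))
        self.sub_blocks = merged

    def holds_whole_block(self) -> bool:
        """True once the cached sub-blocks cover the whole logical block."""
        return self.sub_blocks == [(self.lower, self.upper)]

# Usage: two Range entities merge into one sub-block covering the whole block
block = LogicalBlock(lower=0, upper=999)
block.add_range(0, 499)
block.add_range(500, 999)
print(block.sub_blocks, block.holds_whole_block())  # [(0, 999)] True
```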
Fig. 4 is a schematic structural diagram of an embodiment of the local storage index of a file according to the present invention.
To build a cluster, node numbers are assigned;
different domain names are stored separately;
at the next level, a directory structure is built, and different directories are stored separately;
the file under a directory stores the start address of the file in the storage space.
In this embodiment, files are indexed by name and the start address in the storage space is recorded. The directory tree built in this way is easy to retrieve and manage, making it easy to find which content is stored under the path corresponding to a given uniform resource locator (URL) and which content is missing. When the cache server receives a Range request from a client, it looks up the file's start address by file name according to the directory tree and can respond to the client, further reducing the pressure on the origin server.
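A minimal sketch of such an index layout, mapping node number, domain name, directory and file name to a start address (an illustration; the names, URL and dictionary layout are assumptions):

```python
from typing import Dict, Optional, Tuple

# (node_id, domain, directory, file_name) -> start address of the file in the storage space
LocalIndex = Dict[Tuple[int, str, str, str], int]

def lookup_start_address(index: LocalIndex, node_id: int, url: str) -> Optional[int]:
    """Resolve a URL such as 'example.com/videos/movie.mp4' against the local index."""
    domain, _, path = url.partition("/")
    directory, _, file_name = path.rpartition("/")
    return index.get((node_id, domain, directory, file_name))

index: LocalIndex = {(1, "example.com", "videos", "movie.mp4"): 4096}
print(lookup_start_address(index, 1, "example.com/videos/movie.mp4"))  # 4096
```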
Fig. 5 is a schematic structural diagram of an embodiment of the cache server of the present invention. The cache server includes a receiving module 510 and a processing module 520.
The receiving module 510 is configured to receive a Range request from a client.
The Range request contains the requested entity range, i.e. the start and end byte positions, as well as the size of the whole entity, for example: Content-Range: bytes [first-byte-pos]-[last-byte-pos]/[entity-length].
The processing module 520 is configured to check whether the whole entity is stored locally and, if so, to respond directly; if the whole entity is not stored, to check whether an entity covering a range greater than or equal to the requested range is stored locally and, if so, to respond; otherwise, to initiate a request to the origin server.
Here, the entity is the data content.
For example, the requested range is:
Content-Range: bytes 52396032-52418066/52418067, and the local cache holds an entity covering 50000000-52418066; the cache server then extracts the content at byte positions 2396032 to 2418066 within that entity and responds with it.
In this embodiment, partial entities can be cached locally, and when the range of a cached partial entity is greater than or equal to the requested Range, the cache server can respond directly without initiating another request to the origin server. The cache server's handling of Range requests is thus optimized and the pressure on the origin server is reduced.
The process of initiating a request to the origin server is described in detail below.
In one embodiment of the present invention, the processing module 520 is configured to check whether an entity within the requested Range is stored locally; if not, to initiate a Range request to the origin server to obtain the Range entity, and also to initiate a request to the origin server for the whole entity to obtain the whole entity.
The processing module 520 sends the Range entity to the client and also caches it on the local hard disk.
In this embodiment, if no entity within the requested Range is stored, the cache server obtains the Range entity from the origin server to answer the client's request, and also obtains the whole entity.
In another embodiment of the present invention, the processing module 520 is configured to check whether an entity within the requested Range is stored locally; if so, to check whether the requested range is greater than a threshold, whether the existing ranged entities are composed of fewer than N pieces, and/or whether the existing ranged entities exceed half of the requested range; if so, to initiate Range requests to the origin server for only the ranges it does not yet have and to merge the obtained out-of-range entities with the entities in the existing ranges; otherwise, to initiate a Range request to the origin server to obtain the Range entity, and also to initiate a request to the origin server for the whole entity to obtain the whole entity.
In this embodiment, the processing module 520 can issue Range requests to the origin server for only the missing ranges and merge the obtained out-of-range entities with the entities in the existing ranges, which reduces the number of bytes fetched from the origin server, reduces the pressure on the origin server and speeds up the response.
The following embodiments illustrate how to decide whether to go back to the origin server when an entity within the requested Range is stored locally.
First embodiment
Check whether the requested range is greater than a threshold, where the threshold is calculated as min(D/10^6, 20B),
where D is the total hard disk size and B is the file system block size, i.e. the size of the physical storage block set by the underlying system, generally in the range [512, 65535].
When the Range request is smaller than the threshold, the cache server goes directly back to the origin server. For example, if only a single byte is requested, fetching the Range entity directly from the origin server takes less time than locating the content in this system, issuing Range requests for the missing ranges and merging the obtained out-of-range entities with the entities in the existing ranges; therefore, when the Range request is smaller than the threshold, the cache server goes directly back to the origin server.
Second embodiment
Check whether the existing ranged entities are composed of fewer than N pieces, where N is calculated as max(Mf/10^4, 3S/10^4),
where Mf is the free memory and S is the size of the requested Range.
For example, when N is calculated to be 8, the requested Range is 0-1000, and the locally stored fragmentary Range entities consist of the ten ranges 0-100, 101-200, ..., 901-1000, the cache server goes directly back to the origin server.
Because each entity may currently be in use, adjacent entities may not yet have been merged; if a client issues a Range request, the response has to be assembled piece by piece, which may take longer than going straight back to the origin server. Considering server pressure and cost-effectiveness, the rule is therefore that if the existing entities consist of more than N pieces, the cache server goes directly back to the origin server. Those skilled in the art will understand that this is only an example and should not be construed as limiting the present invention.
Third embodiment
Check whether the existing ranged entities exceed half of the requested range by summing the sizes of the existing ranges: when ∑_i R_i < S/2, the cache server goes directly back to the origin server,
where R_i denotes the size of the range of the i-th existing record and S is the size of the requested Range.
For example, if the request is for the range 0-1000 and the local cache only holds the entities 0-100, 201-300 and 401-500, the existing ranges amount to only 300 out of the requested 1000, so the existing ranges cover less than half of the requested range and the cache server goes directly back to the origin server. If it instead fetched the locally available 300, went back to the origin server for the remaining (at most 700) and then assembled the response, this might well be less efficient than simply fetching the 1000-byte entity from the origin server and responding directly.
In these embodiments of the present invention, the above checks are used to decide whether to initiate a request to the origin server, further reducing the pressure on the origin server.
In another embodiment of the present invention, when the processing module 520 receives multiple Range requests from clients and the requested ranges are adjacent or overlapping, it merges the multiple Range requests, initiates a Range request for the maximum range to the origin server, and answers each request separately after the maximum-range entity has been fetched.
For example, when multiple Range requests have the same start position but possibly different end positions, and the cache server has cached neither the whole file nor any part of it, the present invention optimizes the back-to-origin policy: only the Range request with the largest range among these requests is sent back to the origin server, rather than all of them, which greatly reduces the pressure on the origin server.
In this embodiment, multiple Range requests are merged, a Range request for the maximum range is initiated to the origin server, and each request is answered separately after the maximum-range entity has been fetched. The number of Range requests is therefore reduced, which further reduces the pressure on the origin server.
In another embodiment of the present invention, in order to avoid frequently moving files when ranges grow, improve efficiency and reduce I/O in the caching system, the cache server further includes a cache module 530.
The cache module 530 is configured to divide the whole entity into multiple logical data blocks, to record each Range entity as a sub-block of a logical data block, and to link the logical data blocks and/or the sub-blocks with doubly linked lists. The doubly linked lists make insertion, deletion, modification and lookup convenient.
When multiple cached entities have adjacent or overlapping regions, the adjacent or overlapping parts are merged, in order to reduce fragmentation and improve response efficiency.
For example, the value associated with the file's hash key points to the address of the first entity of the file; after multiple entities have been merged into the whole entity, the value associated with the file's hash key is changed to the start address of the whole file and marked as the address of the whole entity, and the linked-list structures of the individual entities and responses are deleted. For burst hot-spot content, the cache hierarchy can cache each Range entity within a short period of time and use it to answer later Range requests.
When an entity's range spans logical data blocks, the upper bound of the preceding logical data block is raised, the lower bound of the following logical data block is lowered, and the overlapping parts of the logical data blocks are merged, improving storage space utilization.
After the whole entity has been obtained, or after multiple entities have been merged into the whole entity, the saved individual entities are replaced, further improving storage space utilization.
Fig. 4 is a schematic structural diagram of an embodiment of the local storage index of a file according to the present invention.
To build a cluster, node numbers are assigned;
different domain names are stored separately;
at the next level, a directory structure is built, and different directories are stored separately;
the file under a directory stores the start address of the file in the storage space.
In this embodiment, files are indexed by name and the start address in the storage space is recorded. The directory tree built in this way is easy to retrieve and manage, making it easy to find which content is stored under the path corresponding to a given uniform resource locator (URL) and which content is missing. When the cache server receives a Range request from a client, it looks up the file's start address by file name according to the directory tree and can respond to the client, further reducing the pressure on the origin server.
The present invention is further described below with a specific embodiment in conjunction with Fig. 6 and Fig. 7.
Fig. 6 is a schematic flowchart of a specific embodiment of the HTTP-Range downloading method of the present invention.
The cache server receives a Range request from a client; the Range request contains the start and end byte positions. The cache server judges whether it holds the whole entity; if so, it responds directly.
If the lookup shows that the whole entity is not present, the doubly linked list is searched to determine whether there is a cached file covering a range greater than or equal to the requested Range. If so, the cache server responds directly.
If the search finds no file covering a range greater than or equal to the requested Range, the cache server checks whether an entity within the requested Range is stored locally; if not, the cache server initiates a Range request to the origin server and obtains the Range entity. Here the cache server may merge multiple Range requests, initiate a Range request for the maximum range to the origin server and, after fetching the maximum-range entity, answer each request separately. In addition, it initiates a request to the origin server for the whole entity and obtains the whole entity.
If an entity within the requested Range is stored locally, the cache server judges whether the Range size exceeds the threshold, whether the number of pieces that would have to be assembled is below the limit, and/or whether the existing ranges exceed half of the requested range; if so, the cache server initiates Range requests to the origin server for only the ranges it does not yet have. Here the cache server merges the back-to-origin requests for multiple Ranges, for example taking the largest request among those with the same start position, and sends sub-request 1 (start position 1 to end position 1) and sub-request 2 (start position 2 to end position 2).
The obtained out-of-range entities are merged with the entities in the existing ranges, and each request is answered after the maximum-range entity has been fetched, continuing until the whole Range is complete.
After each entity has been fetched, each Range entity is recorded as a sub-block of a logical data block, and a doubly linked list is created.
When the cache server caches multiple entities whose regions are adjacent or overlapping, the adjacent or overlapping parts are merged; if multiple entities merge into the whole entity, the saved individual entities are replaced.
In this embodiment of the present invention, multiple Range requests are merged (merged by range size, going back to the origin server with the larger ranges), so that many fragmentary Range requests become a few larger Range requests, reducing the number of Range requests. By caching Range entities, the pressure on the origin server is reduced when a burst hot-spot occurs.
Fig. 7 is a schematic flowchart of a specific embodiment of the HTTP-Range downloading system of the present invention. The system includes a cache server, clients and an origin server.
The cache server receives a Range request from a client and judges whether it holds the whole entity; if so, it responds directly. If not, it forwards the Range request to the origin server and also initiates a full request to the origin server. For example, the Range request is 100-1000. Before receiving the client's Range request, the cache server has cached nothing of the file; after receiving the Range request, the cache server receives the Range response from the origin server and saves its content, and it also receives, piece by piece, the entire content of the file returned by the origin server and caches it.
Afterwards, the cache server receives further Range requests from multiple clients, for example requests for 100-1000, 100-300, 200-500, 500-1000 and 200-1000. If the cache server has not previously cached the content between 100 and 1000, it merges these requests for back-to-origin fetching, i.e. the merged Range request is 100-1000. If the cache server has previously cached the content between 100 and 1200, it can directly extract the content between 100 and 1200 and send the Range responses.
In this embodiment, when a request for the range 100-1100 is received and the content between 100 and 1000 has already been cached, the cache server requests only the content outside the cached range from the origin server, i.e. it initiates a request for 1000-1100 and obtains the Range response for 1000-1100. Since the ranges 100-1000 and 1000-1100 are adjacent, the cache server merges the obtained out-of-range entity with the entity in the existing range, so that the cached range becomes 100-1100; any Range request within 100-1100 can then be answered by the cache server directly. This reduces the pressure on the origin server and improves response speed.
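A minimal sketch of this extend-and-merge behaviour: compute which sub-ranges of a request are not yet cached, so that only those are fetched from the origin server before being coalesced with the cached range (an illustration; the function name is an assumption):

```python
def missing_subranges(cached, req_start: int, req_end: int):
    """Return the inclusive sub-ranges of [req_start, req_end] not covered by 'cached'."""
    gaps, cursor = [], req_start
    for start, end in sorted(cached):
        if end < req_start or start > req_end:
            continue                                  # cached piece outside the request
        if start > cursor:
            gaps.append((cursor, start - 1))          # uncovered gap before this piece
        cursor = max(cursor, end + 1)
    if cursor <= req_end:
        gaps.append((cursor, req_end))                # uncovered tail
    return gaps

# Example from the description: 100-1000 is cached, a client asks for 100-1100
print(missing_subranges([(100, 1000)], 100, 1100))    # [(1001, 1100)]; byte 1000 is already cached
```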
The present invention has thus been described in detail. To avoid obscuring the concept of the present invention, some details that are well known in the art have not been described. From the above description, those skilled in the art can fully understand how to implement the technical solution disclosed herein.
The method and apparatus of the present invention may be implemented in many ways, for example by software, hardware, firmware, or any combination of software, hardware and firmware. The above order of the steps of the method is merely illustrative, and the steps of the method of the present invention are not limited to the order specifically described above unless otherwise stated. Furthermore, in some embodiments, the present invention may also be embodied as programs recorded on a recording medium, the programs comprising machine-readable instructions for implementing the method according to the present invention; thus, the present invention also covers a recording medium storing programs for executing the method according to the present invention.
Although some specific embodiments of the present invention have been described in detail by way of example, those skilled in the art should understand that the above examples are for illustration only and are not intended to limit the scope of the present invention. Those skilled in the art should understand that the above embodiments may be modified without departing from the scope and spirit of the present invention. The scope of the present invention is defined by the appended claims.

Claims (13)

1. A large file HTTP-Range downloading method, comprising:
receiving a Range request from a client;
a cache server checking whether the whole entity is stored locally and, if so, responding directly;
if the whole entity is not stored, checking whether an entity covering a range greater than or equal to the requested range is stored locally; if so, responding; otherwise, initiating a request to an origin server.
2. The large file HTTP-Range downloading method according to claim 1, further comprising:
checking whether an entity within the requested Range is stored locally; if not, initiating a Range request to the origin server to obtain the Range entity, and initiating a request to the origin server for the whole entity to obtain the whole entity.
3. The large file HTTP-Range downloading method according to claim 1, further comprising:
checking whether an entity within the requested Range is stored locally; if so, checking whether the requested range is greater than a threshold, whether the existing ranged entities are composed of fewer than N pieces, and/or whether the existing ranged entities exceed half of the requested range; if so, the cache server initiating Range requests to the origin server for the ranges it does not yet have, and merging the obtained out-of-range entities with the entities in the existing ranges;
otherwise, the cache server initiating a Range request to the origin server to obtain the Range entity, and initiating a request to the origin server for the whole entity to obtain the whole entity.
4. The large file HTTP-Range downloading method according to any one of claims 1 to 3, further comprising:
when multiple Range requests are received from clients, if the requested ranges are adjacent or overlapping, merging the multiple Range requests, initiating a Range request for the maximum range to the origin server, and answering each request separately after the maximum-range entity has been fetched.
5. The large file HTTP-Range downloading method according to any one of claims 1 to 4, further comprising:
dividing the whole entity into multiple logical data blocks, and recording each Range entity as a sub-block of a logical data block;
when the cache server caches multiple entities whose regions are adjacent or overlapping, merging the adjacent or overlapping parts;
when an entity's range spans logical data blocks, raising the upper bound of the preceding logical data block, lowering the lower bound of the following logical data block, and merging the overlapping parts of the logical data blocks;
when the whole entity is obtained or multiple entities are merged into the whole entity, replacing the saved individual entities.
6. The large file HTTP-Range downloading method according to claim 5, wherein:
doubly linked lists are used to link the logical data blocks and/or the sub-blocks.
7. A cache server, comprising:
a receiving module for receiving a Range request from a client;
a processing module for checking whether the whole entity is stored locally and, if so, responding directly; if the whole entity is not stored, checking whether an entity covering a range greater than or equal to the requested range is stored locally and, if so, responding; otherwise, initiating a request to an origin server.
8. The cache server according to claim 7, wherein:
the processing module is configured to check whether an entity within the requested Range is stored locally; if not, to initiate a Range request to the origin server to obtain the Range entity, and to initiate a request to the origin server for the whole entity to obtain the whole entity.
9. The cache server according to claim 7, wherein:
the processing module is configured to check whether an entity within the requested Range is stored locally; if so, to check whether the requested range is greater than a threshold, whether the existing ranged entities are composed of fewer than N pieces, and/or whether the existing ranged entities exceed half of the requested range; if so, to initiate Range requests to the origin server for the ranges it does not yet have and to merge the obtained out-of-range entities with the entities in the existing ranges; otherwise, to initiate a Range request to the origin server to obtain the Range entity, and to initiate a request to the origin server for the whole entity to obtain the whole entity.
10. The cache server according to any one of claims 7 to 9, wherein:
the processing module is configured, when multiple Range requests are received from clients and the requested ranges are adjacent or overlapping, to merge the multiple Range requests, to initiate a Range request for the maximum range to the origin server, and to answer each request separately after the maximum-range entity has been fetched.
11. The cache server according to any one of claims 7 to 10, further comprising:
a cache module for dividing the file into multiple logical data blocks and recording each Range entity as a sub-block of a logical data block; when multiple cached entities have adjacent or overlapping regions, merging the adjacent or overlapping parts; when an entity's range spans logical data blocks, raising the upper bound of the preceding logical data block, lowering the lower bound of the following logical data block, and merging the overlapping parts of the logical data blocks; and when the whole entity is obtained or multiple entities are merged into the whole entity, replacing the saved individual entities.
12. The cache server according to claim 11, wherein:
doubly linked lists are used to link the logical data blocks and/or the sub-blocks.
13. A large file HTTP-Range downloading system, comprising the cache server according to any one of claims 7 to 12 and an origin server.
CN201410827530.3A 2014-12-26 2014-12-26 Large file HTTP-Range downloading method, cache server and system Active CN105791366B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410827530.3A CN105791366B (en) 2014-12-26 2014-12-26 Large file HTTP-Range downloading method, cache server and system

Publications (2)

Publication Number Publication Date
CN105791366A true CN105791366A (en) 2016-07-20
CN105791366B CN105791366B (en) 2019-01-18

Family

ID=56388528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410827530.3A Active CN105791366B (en) 2014-12-26 2014-12-26 Large file HTTP-Range downloading method, cache server and system

Country Status (1)

Country Link
CN (1) CN105791366B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101005371A (en) * 2006-01-19 2007-07-25 思华科技(上海)有限公司 Caching method and system for stream medium
US20130058480A1 (en) * 2011-09-01 2013-03-07 Rovi Corp. Systems and methods for saving encoded media streamed using adaptive bitrate streaming
CN102547478A (en) * 2012-02-20 2012-07-04 北京蓝汛通信技术有限责任公司 Triggered slice on-demand system and method of streaming media based on CDN (Content Distribution Network)
CN104185036A (en) * 2014-09-10 2014-12-03 北京奇艺世纪科技有限公司 Video file source returning method and device

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107483546A (en) * 2017-07-21 2017-12-15 北京供销科技有限公司 A kind of file memory method and file storage device
CN107566452A (en) * 2017-08-02 2018-01-09 广州阿里巴巴文学信息技术有限公司 Storage method and device, method for down loading and device and data handling system
CN107566452B (en) * 2017-08-02 2020-09-18 阿里巴巴(中国)有限公司 Storage method and device, downloading method and device, and data processing system
CN107967183A (en) * 2017-11-29 2018-04-27 努比亚技术有限公司 A kind of application interface merges operation method, mobile terminal and computer-readable recording medium
CN110502696A (en) * 2019-08-05 2019-11-26 上海掌门科技有限公司 A kind of method and apparatus of information stream distribution
WO2021169298A1 (en) * 2020-02-29 2021-09-02 平安科技(深圳)有限公司 Method and apparatus for reducing back-to-source requests, and computer readable storage medium
CN113590915A (en) * 2021-06-30 2021-11-02 影石创新科技股份有限公司 Partitioned data caching method, partitioned data accessing method, partitioned data caching device, partitioned data accessing device, terminal and storage medium

Also Published As

Publication number Publication date
CN105791366B (en) 2019-01-18

Similar Documents

Publication Publication Date Title
US11194719B2 (en) Cache optimization
US10798203B2 (en) Method and apparatus for reducing network resource transmission size using delta compression
CN105791366A (en) Large file HTTP-Range downloading method, cache server and system
US11044335B2 (en) Method and apparatus for reducing network resource transmission size using delta compression
US9514243B2 (en) Intelligent caching for requests with query strings
AU737742B2 (en) A method and system for distributed caching, prefetching and replication
US9253278B2 (en) Using entity tags (ETags) in a hierarchical HTTP proxy cache to reduce network traffic
US20160006645A1 (en) Increased data transfer rate method and system for regular internet user
US20150222725A1 (en) Caching proxy method and apparatus
JP2013522736A (en) Method and system for providing a message including a universal resource locator
CN107633102A (en) A kind of method, apparatus, system and equipment for reading metadata
US20180302489A1 (en) Architecture for proactively providing bundled content items to client devices
US11089100B2 (en) Link-server caching
US10015012B2 (en) Precalculating hashes to support data distribution
US9288153B2 (en) Processing encoded content
CN116796099A (en) Short link generation method and device, electronic equipment and storage medium
WO2012010214A1 (en) Provision of cached data

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220216

Address after: 100007 room 205-32, floor 2, building 2, No. 1 and No. 3, qinglonghutong a, Dongcheng District, Beijing

Patentee after: Tianyiyun Technology Co.,Ltd.

Address before: No.31, Financial Street, Xicheng District, Beijing, 100033

Patentee before: CHINA TELECOM Corp.,Ltd.

TR01 Transfer of patent right