CN1286774A - Internet caching system and method and arrangement in such system - Google Patents

Internet caching system and method and arrangement in such system

Info

Publication number
CN1286774A
Authority
CN
China
Prior art keywords
file
server
hub
feeder
inquiry
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN99801667A
Other languages
Chinese (zh)
Inventor
Sverker Lindbo (斯维克尔·林德波)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MIRROR IMAGE INTERNET AB
Mirror Image Internet Inc
Original Assignee
MIRROR IMAGE INTERNET AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MIRROR IMAGE INTERNET AB
Publication of CN1286774A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/957: Browsing optimisation, e.g. caching or content distillation
    • G06F16/9574: Browsing optimisation of access to content, e.g. by caching
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval of structured data, e.g. relational data
    • G06F16/24: Querying
    • G06F16/242: Query formulation
    • G06F16/2433: Query languages

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present invention relates to an Internet caching system, and to an arrangement and a method for serving requests for Internet information files in an Internet caching system. The system is built as a two-tier caching system. In order to decrease the load on a central cache server (130), an intermediate arrangement (110) interconnects the local servers (100) of the system with the central cache server (130). This arrangement communicates with the local cache servers in accordance with a protocol used for communicating between cache servers. When requesting an Internet information file from the central cache server, the arrangement uses the Structured Query Language. Thus, the central cache server (130) is primarily devoted to answering plain SQL queries.

Description

Internet caching system, and method and arrangement in such a system
The present invention relates to an Internet caching system, and to an arrangement and a method for serving requests for Internet information files in an Internet caching system.
In recent years the Internet and its currently most popular feature, the World Wide Web (WWW), have developed into an enormous information source. Anyone can provide any kind of information, for example text, images, audio and video, on the WWW, and as long as users have access to the Internet they can retrieve this information very easily.
The main problem currently facing the Internet is the growing demand for communication capacity, caused by users accessing information from all over the world. It has been estimated that web traffic on most international links now exceeds the combined volume of all voice telephony and fax traffic. Transmission and switching capacity is continuously being added, but this is a slow and expensive process, and demand constantly outpaces the capacity that can be delivered.
The content of the WWW is growing without bound and may comprise hundreds of terabytes (as of summer 1998). However, a relatively small subset of all this information accounts for most of the content actually being accessed. Therefore, different caching techniques are currently used to limit the amount of information that needs to be transmitted over the Internet, and the distance over which it has to travel, so that accessing Internet information consumes a minimum of bandwidth and incurs a minimum of delay.
In the field of caching WWW objects, or Internet information files, there are two basic approaches: client caching and server caching. A simple form of client caching is the technique used by virtually every WWW browser today. The browser keeps recently accessed Internet information files in a buffer on the user's computer. When the user wishes to access a particular information file a second time, the browser retrieves it from its buffer rather than issuing a request over the Internet.
To benefit a community of neighbouring users, another form of client caching, proxy server caching, can be used. In this approach a buffer is placed on a WWW proxy node to which a number of users are connected; such a proxy node may, for example, be a server located at a company. When a WWW client wishes to access a WWW server on the Internet, the client sends its request, for example an HTTP request, to this proxy node rather than directly to a server on the global Internet. The proxy server instead forwards the request to a WWW server on the global Internet, caches the response, and returns the response to the client. Thus, when an information file is requested for the first time, it is transferred over the Internet and stored in the buffer of the WWW proxy server. Subsequent requests for the same information file, from any client connected to that WWW proxy server, can then be served locally, without sending an HTTP request over the global Internet to a remote server. Proxy server caching can also be used in the front end of a company or some other organisation, by implementing the method described above on a regional Internet caching server to which a number of clients are connected, directly or indirectly.
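By way of illustration only, the proxy caching behaviour described above can be sketched in a few lines. This is a minimal sketch, not the implementation of any product mentioned in this document; `fetch_from_origin` is an assumed stand-in for an HTTP request to an origin server.

```python
# Minimal sketch of proxy-server caching: the first request for a URL is
# fetched from the origin server and stored; later requests for the same
# URL are served locally from the buffer.
class CachingProxy:
    def __init__(self, fetch_from_origin):
        self._fetch = fetch_from_origin   # callable: url -> file contents
        self._cache = {}                  # url -> cached contents

    def get(self, url):
        if url in self._cache:            # cache hit: serve locally
            return self._cache[url]
        body = self._fetch(url)           # cache miss: go to the origin
        self._cache[url] = body
        return body
```

A second request for the same URL never reaches the origin, which is exactly the traffic reduction the text describes.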
Depending on the size and homogeneity of the user community sharing a buffer on the same server, a cache capacity of roughly 20-40 gigabytes (as of spring 1998) reduces the Internet traffic generated by that community by 30-50%. As the information provided on the Internet and the WWW grows, the cache size needed to maintain this hit rate, i.e. the proportion of information file requests served from the caching server, will most likely continue to increase over time. Furthermore, if the hit rate could be raised to 75% or more, the performance and utilisation of the Internet would be greatly improved. For typical end-user behaviour this requires a very large cache, at present on the order of 200-400 gigabytes, but also a very large number of members in the end-user community, at present hundreds of thousands. The reason is that the larger the end-user community, the greater the probability that someone else within the community has previously accessed a requested file, especially if the users share some common interests.
A large cache can easily be installed by acquiring a suitable computer with a suitable disk size. However, the cache must also be able to handle the requests from all participating end users. With current technology, a single-processor computer cannot serve requests from hundreds of thousands of end users. Several systems have therefore been launched to address this problem; only the main proponents are mentioned here.
Cisco Systems proposes connecting the end users to an IP router that is programmed to transparently redirect all WWW requests to a group, or "farm", of dedicated caching devices, or "cache engines". Each cache engine handles a subset of the active WWW servers, based on groups of IP (Internet Protocol) addresses. This approach scales to 32 parallel cache engines, which can probably serve about 500,000 end users.
Inktomi has proposed using a switch, a so-called layer 4 switch, to redirect all requests for WWW pages to an "Inktomi Traffic Server". A group of powerful computers is used, sharing the same disk storage system. This approach scales to 16 parallel workstations, which can also serve about 500,000 end users. However, having several computers access the same disk storage system adds complexity and requires management, meaning that some of the capacity of each computer cannot be used for handling requests.
Network Appliance has proposed a two-tier caching approach. In this system a number of local caches are placed close to the end users. When a cache miss occurs at the local level, the local caches communicate with a central cache using the Internet Cache Protocol. If the requested file resides in the central cache, it is transferred to the local cache and then forwarded to the end user. If the requested file is not in the central cache either, the central cache issues a request to the origin server and forwards the file to the local cache, which in turn forwards it to the end user. Thus the central cache handles ICP requests from the local caches, and communicates with an origin server only when the file is not present in the central cache. To allow scaling, there may be several parallel central caches, each handling a subset of the origin servers. This implies that the local caches must be able to send each request to the correct central cache server. Since this protocol has not been standardised, it means that all local caches must be Network Appliance devices.
All these approaches share a drawback: a central cache server has to handle, in one way or another, a very wide range of communication. This results in poor utilisation of server capacity and makes it very difficult to serve hundreds of thousands of users, yet hundreds of thousands of users are needed to achieve a high hit rate. By adding more servers the system becomes very expensive and also more complex. The complexity in turn adds overhead and further reduces the utilisation of the comparatively expensive server resources.
An object of the present invention is to overcome these drawbacks of the known techniques for caching information files on the Internet, and to provide a cost-effective method of caching information files.
Another object of the invention is to provide a method for serving user requests for cached information files in a caching system in a fast and cost-effective manner.
Another object of the invention is to provide a caching server method that can handle the ever-increasing amount of information files provided by the Internet and the WWW.
Another object of the invention is to provide a method for obtaining a high hit rate for the information file requests of a caching system at a minimum cost.
Another object of the invention is to provide a scalable caching system that can be expanded using standard methods.
These objects are achieved by an Internet caching system, and by a method for serving requests for Internet information files in an Internet caching system, according to the appended claims.
According to a first aspect of the invention, a method is provided for serving requests for Internet information files in an Internet caching system, the method comprising the steps of: receiving, at a local Internet caching server, a user request for an Internet information file from a user; in response to the received request, generating a query regarding said information file if said information file is not cached by said local server; in response to an answer to said query, generating a file request for said information file if said answer indicates that a central file server of a central cache point, holding cached Internet information files, has stored said information file, said file request being routed to a feeder arrangement; and, in response to said file request, querying said central file server for said information file from said feeder arrangement, thereby reducing the load on said central file server.
According to a second aspect of the invention, an arrangement in an Internet caching system is provided, said system comprising at least one local caching server and at least one central file server, both holding cached Internet information files. The arrangement, which serves to reduce the load on said central file server, comprises a feeder communicating with said local caching server and with said central file server, wherein said feeder includes first means for receiving a request for an Internet information file from said local caching server; second means for deriving a query from an alphanumeric string received from said local caching server; and third means for querying said central file server for said Internet information file using the query derived by said second means.
According to a third aspect of the invention, an Internet caching system is provided, comprising a set of local Internet caching servers, each local caching server being configured to receive user requests for Internet information files; at least one central cache point including a central file server for holding cached Internet information files; and feeder means interconnecting said set of local caching servers and said central file server, said feeder means comprising at least one feeder that includes means for communicating with at least one local caching server according to a protocol used for communicating between Internet caching servers, and means for retrieving Internet information files from said central file server using database queries, thereby reducing the load on said central file server.
The idea of the invention is based on connecting a number of dedicated computers to a central file server, or central cache server, which holds Internet information files. Compared with the central cache server, these additional computers are low-end machines. The dedicated computers are configured to perform part of the tasks that would normally be handled by the central cache server itself, thereby reducing the load on the central cache server. In this way the central cache server can serve the local caching servers connected to it, directly or via the dedicated computers, in a fast and cost-effective manner. This maximises the utilisation of the expensive hardware that forms the actual central file server and its repository of cached files, while the inexpensive machines around the file server execute, in parallel, the time-consuming and time-critical tasks.
Thus, the feeder means of the invention, or feeders, are realised by separating these machines from any machine realising the central file server. This reduces the load on the central file server, which can then devote more processing time exclusively to the actual retrieval of cached information files. The central file server can therefore serve a large number of users in an efficient way. Since user requests, via the requesting local caching servers, are served more efficiently, the number of served user requests can grow, which in turn gives the buffer of the central file server a higher hit rate.
According to one embodiment of the invention, the feeder means communicate, on behalf of the central file server, with the local caching servers according to a protocol used for communicating between Internet caching servers. Currently used protocols are the Internet Cache Protocol (ICP) and Cache Digest, but any other existing or future protocol serving the same purpose may be used. Thus, by placing the tasks of receiving, answering, querying and/or requesting information files on machines separate from the central file server machine, the load on the central file server is reduced significantly.
When a local caching server has received a request for an information file from a user, and the file is not cached on the local server, the local server starts by generating a query for the file. In one embodiment this query is directed to a table, or database, which may be internal to the local server or directly connected to it. If said table indicates that the queried file is cached by the central file server, the local server requests the file from the feeder means, or feeder. The query and the request are then preferably performed in accordance with the Cache Digest protocol. However, like the request from the user to the local server, the request from the local server to the feeder can use any layer 3 protocol, for example an HTTP request.
In another embodiment the query coming from the local server is directed to the feeder. Included in this query, for example an ICP query, is the URL of the queried information file. From the received alphanumeric URL of an information file, the feeder derives a query number, which the feeder then uses when querying the central file server for the information file. The feeder queries the file server using a standard SQL (Structured Query Language) query. If the queried file is present in the central file server, i.e. a cache hit has occurred, the queried file is transferred from the central server to the local server via the feeder. Letting the central file server produce and send a file as the answer to an SQL query originating from a local caching server, rather than as the answer to a query such as an ICP query, saves a great deal of central file server capacity.
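For illustration, the SQL lookup performed by the feeder may be sketched as follows. The table layout, `files(query_number, body)`, is an assumption made for this sketch; the text only specifies that the central file server answers plain SQL queries keyed on a query number. SQLite stands in here for the central file server's database.

```python
import sqlite3

def setup_central_store():
    # Stand-in for the central file server's repository of cached files.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE files (query_number TEXT PRIMARY KEY, body BLOB)")
    return db

def fetch_by_query_number(db, query_number):
    # The feeder's standard SQL query; None corresponds to a cache miss.
    row = db.execute(
        "SELECT body FROM files WHERE query_number = ?", (query_number,)
    ).fetchone()
    return row[0] if row else None
```

The point of the design is that the central server only ever evaluates this one simple, indexed query, rather than parsing ICP messages or URLs itself.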
Alternatively, the query number can be derived from said alphanumeric URL together with part of the header information included in said query. This header part contains requester-specific information, for example the language used by the requesting user, enabling the central file server to respond to this specific information. The query number corresponding to an information file can be derived using any hash algorithm, preferably an MD5 hash algorithm.
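The derivation of a query number from a URL, optionally mixed with requester-specific header fields, can be sketched as follows. The choice of which header fields to hash (here an `Accept-Language` line) is an assumption for illustration only.

```python
import hashlib

def query_number(url, header_fields=()):
    # Derive a query number for an information file: MD5 over the URL,
    # optionally extended with requester-specific header information.
    h = hashlib.md5(url.encode("utf-8"))
    for field in header_fields:          # e.g. ("Accept-Language: en",)
        h.update(field.encode("utf-8"))
    return h.hexdigest()                 # 128-bit value as 32 hex digits
```

Because MD5 is deterministic, every feeder derives the same number for the same URL, while including header fields yields distinct numbers, and hence distinct cached variants, per requester profile.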
In the embodiment where the local server performs an internal query for the information file, the feeder derives the query number from the subsequent request that the local server directs to the feeder. The alphanumeric string used for deriving the query number is then an alphanumeric string included in said request, for example the URL of an HTTP request. The feeder then uses this query number, preferably in an SQL query, when querying the central file server for the information file. Furthermore, it is advantageous to include at least part of the header fields of said request as a basis for deriving said query number.
To further reduce the load on the central file server, the feeder preferably includes a table holding information about each information file cached by the central file server. For example, this table can be a memory-resident, MD5-indexed hash table. By looking up said table, the feeder can determine whether a queried information file is cached by the central file server without having to query that server, so the feeder can itself answer a query coming from a local server.
According to another embodiment of the invention, the Internet caching system further comprises updater means, or an updater, for updating the set of information files cached by the central file server. The update process comprises transferring a copy of a file cached on a local server to the central server. The transferred file is a file that the local server has retrieved from its origin server and then cached, as a result of a central cache miss when the file was queried.
Thus the central file server, or central cache server, does not itself retrieve files that are not cached, so no file requests to origin servers are generated as a result of cache misses while serving the local caching servers. Instead, when the feeder evaluates a query for an information file coming from a local caching server, and concludes that the queried file is not cached on the central file server, the feeder routes an answer to the querying local server indicating that the file is not available, and then orders the updater to update the central file server. Upon receipt of the answer, which thus indicates a cache miss, the local caching server retrieves the file in question from its origin server. Upon receipt of the order to update the central file server, the updater requests a copy of the file from the local server and then transfers the received file copy to the central cache server, where it is stored. The transfer and storing process is preferably performed when the overall load on the central file server is low, and when the local server has had sufficient time to retrieve the file from its origin server.
However, if the local server is behind a firewall, the updater instead requests a copy of the file from its origin server and then stores this copy on the central cache server. In this case the feeder preferably does not order the updater to start the update process until a number of queries for the same specific information file have been received from local servers located behind firewalls. Preferably, the updater is realised on a machine separate from the machines realising the feeders, and the machine realising the updater is also separate from any file server machine. This is advantageous because the timing of file requests to origin servers, for example HTTP requests, is unpredictable, and such requests therefore place an unpredictable load on the machine performing them. In a simplified system, however, the updater may be realised on the same machine as a feeder, these machines still being separate from any central file server machine. In embodiments where the machines realising the updater and the feeders interconnect the local caching servers with the central file server, and are not themselves included in a central cache point together with the central file server, the separation between these machines and the file server machines is evident.
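The miss-driven update path described above can be sketched as follows. All class and function names are illustrative assumptions: on a central-cache miss the feeder answers "miss" and orders the updater, which later, e.g. when load is low, pulls a copy of the file from the local server and stores it centrally.

```python
class Updater:
    def __init__(self, central_store, fetch_from_local):
        self._store = central_store           # dict: query_number -> body
        self._fetch_local = fetch_from_local  # callable: query_number -> body
        self._pending = []

    def order_update(self, query_number):
        self._pending.append(query_number)    # deferred until load is low

    def run_pending(self):
        # Transfer file copies from local servers into the central store.
        while self._pending:
            qn = self._pending.pop(0)
            self._store[qn] = self._fetch_local(qn)

def feeder_answer(central_store, updater, query_number):
    # The feeder answers the query itself; the central server never
    # contacts an origin server on a miss.
    if query_number in central_store:
        return "hit"
    updater.order_update(query_number)
    return "miss"
```

Note the inversion relative to the prior-art two-tier scheme: the central side is populated from the local caches after the fact, not by fetching from origin servers inline.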
Certain Internet information files are not suitable for caching. Such files are sometimes referred to as dynamic information files, the term dynamic deriving from the fact that these files are continuously updated on the origin server; examples are files containing stock prices, weather forecasts and the like. A preferred way of handling the existence of dynamic files is to keep a list of known non-cacheable files, either in the updater or in the local servers. In this way, the communication within the system resulting from a user requesting such a file is kept to a minimum.
According to another embodiment of the invention, several central file servers are included in one central cache point, each file server caching information files within a defined range related to origin host names, IP addresses, or derived query numbers. Based on the origin host name, the IP address, or a query number derived for a requested information file, the feeder directs the query to the file server caching files within the relevant range. In a scalable solution each file server has its own disk system, which minimises overhead. Furthermore, since the central cache point uses standard protocols, it can be expanded with third-party file servers.
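The range-based direction of queries to file servers can be sketched as follows. Partitioning the 128-bit MD5-derived query number space into equal ranges is one simple choice of "range"; the text leaves the partitioning scheme open, so this is an assumption for illustration.

```python
def choose_file_server(query_number_hex, n_servers):
    # Map a 32-hex-digit (128-bit) query number onto one of n_servers
    # equal, contiguous ranges of the number space.
    value = int(query_number_hex, 16)
    return value * n_servers >> 128   # server index in 0 .. n_servers - 1
```

Because each server owns a fixed, disjoint range, every feeder routes a given query number to the same server, and each server's disk system holds a disjoint subset of the cached files.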
To make the communication between the central file server and the low-end computers, i.e. the feeders and the updater, faster, each low-end computer is preferably connected to the central file server by a dedicated line, or alternatively, if shared by several file servers, by a dedicated network connection to the central file servers. In the latter case, at least part of the network capacity is reserved for the communication in question. Naturally, in a non-dedicated approach, the network used can also be part of the Internet. The type of connection used between the central file server and the low-end computers, i.e. the feeders and the updater, depends to a large extent on whether they are located at the same site as the central file server or far away from it.
Advantageously, a central cache point serves a predetermined set of local caching servers, which in turn serves a user community that is homogeneous with respect to language and education. This further increases the hit rate at the central cache level, since the same information files are then more likely to be requested more than once.
Using the invention, an operator of an Internet caching system handling information file requests according to the invention can serve a large number of clients in a fast, inexpensive and efficient way. These clients are preferably different ISPs, companies or other organisations that connect their own local caching servers to a central cache point of the invention, or to the feeders/updater of the invention, or that connect as clients to a system comprising the complete caching system of the invention, i.e. a central cache point, a number of feeders and an updater, and the local caching servers connected thereto. Naturally, a client may also be a single user whose single WWW client is directly connected to a system of the invention. Moreover, large companies or ISPs may choose to operate a system of the invention at their own premises, rather than connecting to a system operated by another party. Furthermore, since the caching system of the invention is built around standard protocols, for example ICP and SQL, local caching servers and central file servers from any vendor can be included in the system, as long as they support these protocols.
Within the scope of the invention, a local Internet caching server is to be understood as a proxy node, preferably a WWW proxy node, providing caching for the users, or WWW clients, connected to it.
The items cached on a local Internet caching server, or on a file server included in a central cache point, can be any non-dynamic file that is accessible using the Internet and contains any kind of information. Thus, the term Internet information file as used in the present invention encompasses a number of different file types and different names for such files, for example binary, text, image, audio and video files, HTTP (HyperText Transfer Protocol) files, WWW files, FTP (File Transfer Protocol) files, WWW pages, WWW objects, and so on. Besides files accessible using the HTTP or FTP protocols, any file accessed over the Internet using any layer 3 protocol is also covered by the term Internet information file. Another example of a protocol that may be used is the WTP protocol (Wireless Transaction Protocol) used in the WAP (Wireless Application Protocol) standard.
According to a fourth aspect of the invention, the invention includes a computer-readable medium holding one or several sequences of computer program instructions, executable on one or several general-purpose computers, comprising means for making said one or several computers perform the steps disclosed in appended claims 1-17.
According to a fifth aspect of the invention, the invention includes one or several program storage devices holding one or several sequences of instructions, executable by one or several general-purpose computers, for performing the steps disclosed in appended claims 1-17.
The above and other aspects, features and advantages of the invention will be more fully understood from the following description of exemplary embodiments, with reference to the accompanying drawings.
Exemplary embodiments of the invention are described below with reference to the accompanying drawings, in which:
Fig. 1 schematically shows an embodiment of an Internet caching system according to the invention;
Fig. 2 schematically shows another embodiment of an Internet caching system according to the invention;
Fig. 3 schematically shows a flow chart of the operations performed by a local caching server in Fig. 2;
Fig. 4 schematically shows a flow chart of the operations performed by a feeder in Fig. 2;
Fig. 5 schematically shows a flow chart of the operations performed by the updater in Fig. 2; and
Fig. 6 schematically shows yet another embodiment of an Internet caching system according to the invention.
An embodiment of the invention will now be described with reference to the block diagram shown in Fig. 1. In Fig. 1, a number of local caching servers 100 are shown. These local servers 100 are connected via the Internet to feeder means 110, here represented by a single feeder 110. The numbers of feeders 110 and local caching servers 100 shown in Fig. 1 are merely examples, and the embodiment is not limited to these numbers.
Regardless of the number of feeders, however, each feeder in this embodiment is connected to one single central file server. In Fig. 1, the feeder 110 is connected to a central file server 130. The central file server comprises a storage medium (not shown) on which Internet information files are stored, i.e. cached, and can be realised with a high-end computer, for example a Sun Ultra Sparc or DEC Alpha computer. Each feeder 110, on the other hand, is realised with a low-end computer, for example a conventional personal computer, and constitutes a front-end machine handling the communication between the local caching servers 100 and the central file server 130.
Feeder 110 internet usage caching protocols and local cache server 100 communicate, and internet cache protocol is to be used for communicate, message based agreement through the internet between caching server.So, feeder 100 use an ICP answer to 100 that receive from a local cache server, be buffered the ICP inquiry that the internet information file inquires about to one and answer.This ICP answers indication or has hit impact damper (ICP_OP_HIT), and perhaps an impact damper does not hit (ICP_OP_MISS).
In accordance with the Internet Cache Protocol, the ICP query received by the feeder includes the URL of the queried information file. From this URL, the feeder 110 derives, using an MD5 hash algorithm, a query number corresponding to the queried information file. This query number is then used to look up an MD5-indexed hash table 115 residing in memory. Included in the feeder 110 is a RAM (random access memory) 116 in which the index table is stored. The index table 115 contains an entry for each query number corresponding to an Internet information file cached on the hub file server 130. Searching the index table 115 involves looking for an entry whose query number matches the derived one: if a matching query number is found in the table, this indicates that the queried information file is cached by the hub file server 130 and, as a result, the ICP reply to the local server 100 will indicate a cache hit. Correspondingly, if no matching query number is found in the table 115, this indicates that the queried information file is not cached by the hub file server 130 and, as a result, the ICP reply will indicate a cache miss.
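The lookup described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the class and method names are invented, and the index is modelled as a plain set of MD5 digests:

```python
import hashlib

def query_number(url):
    """Derive the query number for a URL using the MD5 hash algorithm."""
    return hashlib.md5(url.encode("utf-8")).digest()

class IndexTable:
    """In-memory table with one entry per file cached on the hub file server."""
    def __init__(self):
        self._entries = set()

    def add(self, url):
        self._entries.add(query_number(url))

    def lookup(self, url):
        # A matching query number means the hub caches the file: ICP hit.
        if query_number(url) in self._entries:
            return "ICP_OP_HIT"
        return "ICP_OP_MISS"

table = IndexTable()
table.add("http://example.com/index.html")
print(table.lookup("http://example.com/index.html"))  # ICP_OP_HIT
print(table.lookup("http://example.com/other.html"))  # ICP_OP_MISS
```

Because only fixed-size digests are stored, the table stays compact even for a large central cache, which is what lets the feeder answer ICP queries without touching the hub file server.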
The means for deriving the query number using the MD5 hash algorithm and for searching the index table is a microprocessor 120 included in the feeder 110 and provided with appropriate software modules. The microprocessor executes the software modules, the result of which is the derived query number and the search in the index table 115. The implementation of these software modules is straightforward to those skilled in the art of programming.
If the reply from the feeder 110 to the local server 100 indicates that a cache hit has occurred, the local server will request the information file from the feeder using the Hypertext Transfer Protocol (HTTP), a protocol used for accessing World Wide Web objects over the Internet. That is, an HTTP request including the URL of the requested file is transmitted to the feeder.
When communicating with the hub file server 130, the feeder 110 uses common SQL queries. Upon reception of the HTTP request, the feeder retrieves the query number previously derived from the URL of the corresponding ICP query. Alternatively, the query number may be derived anew from the URL of the HTTP request. The feeder then uses this query number in a standard SQL query directed to the hub file server. In response, the hub file server 130 transmits the information file in question to the feeder 110, which in turn transmits it to the local server 100 that initiated the file request.
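The SQL exchange with the hub file server might look like the following sketch. The table and column names (`cache`, `query_number`, `body`) are assumptions for illustration; the patent only states that a standard SQL query carrying the query number is used:

```python
import hashlib
import sqlite3

# Model the hub file server's storage as a SQL table keyed by query number.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cache (query_number TEXT PRIMARY KEY, body BLOB)")

def store(url, body):
    """Cache a file on the (modelled) hub file server."""
    key = hashlib.md5(url.encode()).hexdigest()
    conn.execute("INSERT INTO cache VALUES (?, ?)", (key, body))

def fetch(url):
    """Feeder-side retrieval: a standard SQL query by query number."""
    key = hashlib.md5(url.encode()).hexdigest()
    row = conn.execute(
        "SELECT body FROM cache WHERE query_number = ?", (key,)
    ).fetchone()
    return row[0] if row else None

store("http://example.com/index.html", b"<html>...</html>")
print(fetch("http://example.com/index.html"))  # b'<html>...</html>'
```

Keying the table on the MD5 digest rather than the raw URL keeps the primary key fixed-width, which matches the feeder's use of the same query number for both the index lookup and the SQL query.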
If the reply from the feeder 110 to the local server 100 indicates a cache miss, the local server will issue an HTTP request to the origin server (not shown) of the requested file, cache the received file, and transmit a copy of the file to the requesting user (not shown).
The means for carrying out the Internet Cache Protocol in the feeder 110 is the microprocessor 120 included in the feeder. The microprocessor also implements the means for receiving HTTP requests from the local servers 100 and the means for querying the hub file server 130 using SQL queries. The operations performed by the microprocessor as part of said means are controlled by appropriate software modules. The implementation of these software modules is straightforward to those skilled in the art of programming and familiar with the protocols in question.
Another embodiment of an Internet caching system according to the present invention will now be described with reference to Fig. 2. The system in Fig. 2 differs from the system in Fig. 1 in that the Internet caching system comprises an updater 240, i.e. updating means, connected to the hub file server 230 and the feeder 210, and through the Internet to the local cache servers 200. Thus, Fig. 2 shows an inventive arrangement comprising an updater 240 and a feeder 210.
Except as described below, the components in Fig. 2 that correspond to components in Fig. 1 operate and interact as described above with reference to Fig. 1. Consequently, only those features of these components that are relevant to the embodiment shown in Fig. 2 are described below.
The task of the updater 240 is to update the storage medium (not shown) associated with the hub file server 230 with newly cached information files. As described with reference to Fig. 1, when a local server 200 has received a cache miss from the feeder 210 in an ICP reply responding to a previous ICP query about a given information file, the local server 200 issues an HTTP request to the origin server (not shown) of the file. The requested file is then received and cached by the local server 200. After a predetermined time, as a result of the cache miss reported in the ICP reply, the feeder 210 will order the updater 240 to update the hub file server.
The updater 240 receives from the feeder 210 the URL of the queried file and the identity of the local server 200 that queried the file. An HTTP request for the file is then issued from the updater to that particular local server. Upon reception of the requested file, the updater saves the file, i.e. caches it, on the hub file server 230. When saving the file, the updater orders the feeder to add the query number corresponding to the file in question to the index table 215 stored in the RAM area 216.
The means for requesting information files from the local cache servers 200 and the means for caching received information files on the hub file server 230 are a microprocessor 260 included in the updater 240 and its corresponding software modules. The implementation of these software modules is well known to a person skilled in the art of programming.
With reference to the flow chart of Fig. 3, an example of the operations performed by a local server 200 in the embodiment of Fig. 2 will now be described.
In step 300, the local cache server 200 receives a request for an Internet information file from a client served by this particular local cache server. The file request may, however, also be received from the updater 240, which operates as described with reference to Fig. 5. Then, in step 301, the local cache server searches for the requested file among its locally cached files. If it finds the file, the file is transmitted to the requesting client, or to the updater 240, as shown in step 302.
If the local cache server 200 does not find the requested file during the search, i.e. it has not cached the requested file, it checks in step 303 whether the request originated from the updater. If this condition is true, a message is returned to the updater in step 304, indicating that the requested file is not present. If the condition in step 303 is false, i.e. the request originated from a client, an ICP query is sent to the feeder 210 in step 305. In the next step 306, the local cache server receives from the feeder 210 an ICP reply indicating whether the hub file server 230 has cached the requested file. In step 307, the ICP reply is evaluated. If the reply indicates a cache miss, i.e. the requested file is not centrally cached, the local cache server 200 issues an HTTP request for the file to the origin server of the file. If, on the other hand, the reply indicates a cache hit, the local cache server issues an HTTP request for the file to the feeder 210, as illustrated in step 309. In the next step 310, the local cache server receives the requested file from the feeder. Finally, in step 311, the file is transmitted to the client that requested the file.
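The local server's decision flow above can be sketched as follows. All names are illustrative stand-ins (the patent does not define code), and `origin_get` is a hypothetical placeholder for the HTTP request to the file's origin server:

```python
class Feeder:
    """Minimal stand-in for the feeder 210 answering ICP and HTTP."""
    def __init__(self, hub_cache):
        self.hub_cache = hub_cache

    def icp_query(self, url):
        return "ICP_OP_HIT" if url in self.hub_cache else "ICP_OP_MISS"

    def http_get(self, url):
        return self.hub_cache[url]

def origin_get(url):
    """Stand-in for an HTTP request to the file's origin server."""
    return b"body-from-origin"

def handle_request(local_cache, feeder, url, from_updater=False):
    if url in local_cache:                       # steps 301-302: local hit
        return local_cache[url]
    if from_updater:                             # steps 303-304: report miss
        return None
    if feeder.icp_query(url) == "ICP_OP_HIT":    # steps 305-307: ask feeder
        body = feeder.http_get(url)              # steps 309-310: via feeder
    else:
        body = origin_get(url)                   # miss: fetch from origin
    local_cache[url] = body                      # cache locally
    return body                                  # step 311: serve the client

feeder = Feeder({"http://a/x": b"hub-copy"})
cache = {}
print(handle_request(cache, feeder, "http://a/x"))  # b'hub-copy'
print(handle_request(cache, feeder, "http://a/y"))  # b'body-from-origin'
```

Note how a request coming from the updater is never forwarded onward: the updater only wants what the local server already holds, so a miss is simply reported back (steps 303-304).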
With reference to the flow chart of Fig. 4, the operations performed by the feeder 210 in the embodiment of Fig. 2 will now be described.
In step 400, the feeder 210 receives an ICP query relating to an Internet information file from any one of the local cache servers 200 handled by the feeder. The query includes the URL of the queried information file. From this URL, the feeder 210 derives a query number in step 401 using an MD5 hash algorithm; this query number is used in step 402 when looking up the indexing MD5 hash table residing in the memory 216 of the feeder 210.
If the number is not found when looking up the hash table, the feeder transmits, in step 403, an ICP reply indicating a cache miss back to the local cache server 200 from which the ICP query was received. In step 404, the feeder 210 then orders the updater 240 to retrieve the non-cached queried file by passing the URL of the queried file to the updater. In step 405, the feeder 210 adds the query number corresponding to the queried file to the index hash table 215. This is done when the updater 240 has indicated to the feeder that the queried file has been transmitted from the local server 200 and stored on the hub file server 230. The operation of the updater 240 is further described with reference to Fig. 5.
If, in the condition-checking step 402, the feeder 210 finds the query number when looking up the hash table 215, it transmits, in step 406, an ICP reply indicating a cache hit back to the local cache server 200 from which the ICP query was received. In step 407, the feeder then receives an HTTP request from the local cache server 200 that previously sent the ICP query. Like the ICP query, the HTTP request includes the URL of the requested information file. In step 408, the feeder 210 retrieves the previously derived query number corresponding to the file. In step 409, the feeder uses the query number to query the hub file server 230 for the requested information file using a standard SQL query. In response, the feeder receives the cached information file from the hub file server 230 in step 410, and in the next step 411 the requested, cached Internet information file is transmitted from the feeder 210 to the local cache server 200 that initiated the request.
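The two branches of the feeder's flow (steps 400-411) can be condensed into one sketch. The class, the dict modelling the hub, and the order list are all illustrative assumptions, not from the patent:

```python
import hashlib

class FeederFlow:
    def __init__(self, hub, updater_orders):
        self.index = set()                  # index hash table 215
        self.hub = hub                      # query number -> cached file body
        self.updater_orders = updater_orders  # orders passed to the updater

    def on_icp_query(self, url, local_server):
        qn = hashlib.md5(url.encode()).digest()      # steps 400-401
        if qn not in self.index:                     # step 402: table lookup
            self.updater_orders.append((url, local_server))  # step 404
            return "ICP_OP_MISS"                     # step 403
        return "ICP_OP_HIT"                          # step 406

    def on_http_request(self, url):
        qn = hashlib.md5(url.encode()).digest()      # steps 407-408
        return self.hub[qn]                          # steps 409-411: SQL fetch

    def add_to_index(self, url):
        # Step 405: performed once the updater reports the file is stored.
        self.index.add(hashlib.md5(url.encode()).digest())

hub, orders = {}, []
feeder = FeederFlow(hub, orders)
print(feeder.on_icp_query("http://a/x", "server-1"))  # ICP_OP_MISS
feeder.add_to_index("http://a/x")
hub[hashlib.md5(b"http://a/x").digest()] = b"cached-body"
print(feeder.on_icp_query("http://a/x", "server-1"))  # ICP_OP_HIT
```

The sketch makes the division of labour visible: the feeder itself never fetches from an origin server; on a miss it only reports ICP_OP_MISS and hands the URL to the updater.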
With reference to Fig. 5, the operations performed by the updater 240 in the embodiment of Fig. 2 will now be described.
In step 500, the updater 240 receives from the feeder 210 an order indicating that a particular file should be requested. The file in question has previously been requested by a local cache server 200, but the feeder found that the hub file server 230 had not cached the file. The order includes the URL of the file and the address of the local cache server 200 that requested the file from the central cache 230. In step 501, the updater then checks whether the file requested in the order is on a list of known non-cacheable files. If the list includes the requested file, the order is discarded. If the list does not include the requested file, the updater 240 holds the order so that the local cache server 200 has time to retrieve the file from the origin server of the file.
At a moment convenient to the hub file server 230, i.e. at a moment when the load on the central server is relatively low, the central server sends a message to the updater 240 indicating that any pending orders may be executed; step 502 shows the reception of this message by the updater 240. In the next step 503, execution of the order begins: the updater requests a copy of the file from the local cache server 200 that initiated the file request and that, by now, should have retrieved and cached the file locally. Then, in step 504, a copy of the file is received from the local cache server. In step 505, the received file copy is transmitted to the hub file server 230 to be cached by it. In the final step 506, the updater 240 orders the feeder 210 to add the query number corresponding to the file now cached on the hub file server 230 to the index hash table 215.
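Steps 500-506 can be sketched as below. The class and attribute names are illustrative assumptions; the feeder's index is modelled as a set of URLs for brevity (the patent stores query numbers):

```python
class Updater:
    def __init__(self, non_cacheable, local_caches, hub, feeder_index):
        self.non_cacheable = non_cacheable   # list checked in step 501
        self.local_caches = local_caches     # server address -> {url: body}
        self.hub = hub                       # hub file server's cache
        self.feeder_index = feeder_index     # feeder's index table 215
        self.pending = []                    # held orders

    def on_order(self, url, local_addr):
        # Steps 500-501: discard known non-cacheable files, else hold the order.
        if url in self.non_cacheable:
            return
        self.pending.append((url, local_addr))

    def on_low_load(self):
        # Steps 502-506: run when the hub signals that its load is low.
        for url, addr in self.pending:
            body = self.local_caches[addr].get(url)  # steps 503-504
            if body is not None:
                self.hub[url] = body                 # step 505: cache on hub
                self.feeder_index.add(url)           # step 506: update index
        self.pending.clear()

index, hub = set(), {}
local_caches = {"server-1": {"http://a/x": b"body"}}
upd = Updater({"http://a/secret"}, local_caches, hub, index)
upd.on_order("http://a/secret", "server-1")   # discarded (step 501)
upd.on_order("http://a/x", "server-1")        # held as pending
upd.on_low_load()
print(hub)  # {'http://a/x': b'body'}
```

Deferring execution until the hub reports low load is the point of the pending list: the expensive write to the central cache happens off the critical path of user requests.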
The operation of the hub file server is very simple. Basically, the hub file server does two things: it answers the SQL queries coming from the feeders 210 by transmitting cached files to them, and it saves in its cache the new information files transmitted to it from the updater 240.
With reference to Fig. 6, yet another exemplary embodiment of an Internet caching system according to the present invention will now be described. The system shown in Fig. 6 differs from the system in Fig. 2 in that more than one hub file server is present; here, three central cache servers 630 are shown by way of example. Furthermore, Fig. 6 includes two feeders 610, each connected to its own set of local cache servers 600. The feeders 610, together with an updater 640 and the hub file servers 630, are arranged at a central caching point 690. By means of an Ethernet 680 installed at the central caching point, the updater 640 and each feeder 610 are connected to all the hub file servers 630.
In this embodiment, compared with the embodiment of Fig. 2, the increased number of hub file servers makes it possible to cache more files and to answer a greater number of SQL queries. Since the system is fully scalable, any number of feeders, updaters or hub file servers can, in principle, be added to the system.
The basic difference between the operation of the system in Fig. 6 and that of the system in Fig. 2 is that a feeder 610 needs to select, from the plurality of hub file servers 630, the server to which an SQL query is to be routed. Each hub file server 630 caches information files whose origin host names fall within a predetermined range. Thus, a server is selected from among the central file servers according to the host name included in the URL received from the local server, either as part of the ICP query or as part of the HTTP request. Once the feeder has selected a hub file server, the SQL query carrying the derived query number is routed to the selected file server.
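One way to realize such a host-name partition is sketched below. The patent only says each server is responsible for a predetermined range of host names; mapping a hash of the host name onto server slots is an assumption made here for illustration, and the hub names are invented:

```python
import hashlib
from urllib.parse import urlsplit

HUBS = ["hub-0", "hub-1", "hub-2"]   # the three central cache servers 630

def select_hub(url):
    """Route a URL to a hub file server based on its origin host name."""
    host = urlsplit(url).hostname or ""
    slot = int(hashlib.md5(host.encode()).hexdigest(), 16) % len(HUBS)
    return HUBS[slot]

# All files from the same origin host land on the same hub file server,
# so a feeder can route the SQL query without consulting anyone else.
assert select_hub("http://example.com/a") == select_hub("http://example.com/b")
```

Any deterministic function of the host name would do; what matters is that every feeder applies the same mapping, so the derived query number is always sent to the server that actually caches files for that host.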
It should be understood that the structure and function of the components described with reference to the accompanying drawings are clear to those skilled in the art.
Although the present invention has been described with reference to specific exemplary embodiments, many different alternatives, modifications and the like will become apparent to those skilled in the art. The embodiments described herein are therefore not intended to limit the scope of the invention, which is defined by the appended claims.

Claims (46)

1. A method of serving requests for Internet information files in an Internet caching system, comprising the steps of:
receiving, at a local Internet caching server, a user request for an Internet information file from a user;
in response to the received request, generating a query for said information file if said information file is not cached by said local server;
in response to a reply to said query, generating a file request for said information file, wherein said file request is routed to feeder means if said reply indicates that said information file is cached by a hub file server storing cached Internet information files; and
in response to said file request, querying said hub file server for said information file from said feeder means,
thereby reducing the load on said hub file server.
2. A method as claimed in claim 1, wherein said local cache server performs said query in accordance with a protocol for communication between Internet caching servers.
3. A method as claimed in claim 2, wherein said protocol is the Internet Cache Protocol (ICP).
4. A method as claimed in claim 2, wherein said protocol is Cache Digest.
5. A method as claimed in any one of claims 1 to 3, wherein said query is routed by said local cache server to said feeder means, which in response returns said reply.
6. A method as claimed in claim 5, comprising the step of deriving, in said feeder means, a query number corresponding to the information file to which said query relates.
7. A method as claimed in claim 6, wherein said querying step comprises using the derived query number when querying said hub file server for said information file.
8. A method as claimed in claim 6, wherein said query provides an alphanumeric string relating to said information file, said string being used in said step of deriving said query number.
9. A method as claimed in claim 8, wherein said alphanumeric string is a Uniform Resource Locator (URL), said query number being derived from said URL and from at least part of a header field of said query.
10. A method as claimed in any one of claims 1, 2 or 4, wherein said file request provides an alphanumeric string relating to said information file, said feeder means using said string to derive a query number corresponding to said information file.
11. A method as claimed in claim 10, wherein said alphanumeric string is a Uniform Resource Locator (URL), said query number being derived from said URL and from at least part of a header field of said file request.
12. A method as claimed in any one of the preceding claims, comprising the step of generating an index table having an entry for each Internet information file cached on said hub file server.
13. A method as claimed in claim 12, comprising the steps of:
searching for said information file in said index table; and
indicating, in said reply to said query, whether said information file was found in said search.
14. A method as claimed in any one of the preceding claims, wherein said querying step comprises using the Structured Query Language (SQL) when querying said hub file server for said information file.
15. A method as claimed in any one of the preceding claims, wherein said querying step comprises the steps of:
selecting, according to the host name or IP address of said information file, a hub file server from a set of hub file servers, each server in said set being arranged to cache Internet information files whose origin host names or IP addresses fall within a predetermined range; and
querying the selected hub file server for said information file.
16. A method as claimed in any one of claims 6-14, wherein said querying step comprises the steps of:
selecting, according to the derived query number for said information file, a hub file server from a set of hub file servers, each server in said set being arranged to cache Internet information files whose corresponding query numbers fall within a predetermined range; and
querying the selected hub file server for said information file.
17. A method as claimed in any one of claims 1-16, further comprising the steps of:
if said reply to said query indicates that said information file is not cached on said hub file server, retrieving said information file from its origin server at said local cache server;
caching said information file on said local cache server; and
updating said hub file server by requesting a copy of said information file from said local cache server and caching said copy on said hub file server.
18. An arrangement in an Internet caching system, the system comprising at least one local cache server and at least one hub file server, both servers storing cached Internet information files, the arrangement comprising, in order to reduce the load on said hub file server, a feeder communicating with said local cache server and with said hub file server, wherein said feeder comprises:
first means for receiving a request for an Internet information file from said local cache server;
second means for deriving a query from an alphanumeric string received from said local cache server; and
third means for querying said hub file server for said Internet information file using the query derived by said second means.
19. An arrangement as claimed in claim 18, wherein said first means is arranged to operate in accordance with a layer-three Internet protocol.
20. An arrangement as claimed in claim 18 or 19, wherein said third means is arranged to use the Structured Query Language (SQL) when querying for said Internet information file.
21. An arrangement as claimed in any one of claims 18-20, wherein said alphanumeric string is included in the request received from said local cache server.
22. An arrangement as claimed in claim 21, wherein said query is derived from said alphanumeric string and from at least part of a header information field in the request from said local cache server.
23. An arrangement as claimed in claim 22, wherein said query comprises a query number derived by applying a hash algorithm to said string and to said part of said header field.
24. An arrangement as claimed in any one of claims 18 to 20, wherein said feeder comprises:
fourth means for receiving a query for an Internet information file from said local cache server; and
fifth means for providing a reply to a received query to said local cache server.
25. An arrangement as claimed in claim 24, wherein said fourth means and said fifth means are arranged to operate in accordance with a protocol for communication between Internet caching servers.
26. An arrangement as claimed in claim 25, wherein said protocol is the Internet Cache Protocol (ICP).
27. An arrangement as claimed in any one of claims 24-26, wherein said alphanumeric string is included in the query received from said local cache server.
28. An arrangement as claimed in claim 27, wherein the query derived by said second means is derived from said alphanumeric string and from at least part of a header information field in the query from said local cache server.
29. An arrangement as claimed in claim 28, wherein said query comprises a query number derived by applying a hash algorithm to said string and to said part of said header field.
30. An arrangement as claimed in any one of claims 24-29, wherein said feeder comprises a table holding a copy of the full index of all Internet information files cached on said hub file server.
31. An arrangement as claimed in claim 30, wherein the reply provided by said fifth means to the received query is based on the contents of said table.
32. An arrangement as claimed in any one of claims 18-31, comprising, in order to further reduce the load on said hub file server, an updater communicating with said local cache server and with said hub file server, wherein said updater comprises:
requesting means for requesting an Internet information file, or a copy thereof, stored on a local cache server; and
saving means for saving such a received copy on a hub file server.
33. An arrangement as claimed in claim 32, wherein said requesting means is arranged to request a copy of an information file from its origin server if the local cache server storing the information file is located behind a firewall.
34. An arrangement as claimed in claim 32 or 33, wherein said updater is arranged to communicate with said feeder for receiving an order requesting said copy of said information file.
35. An arrangement as claimed in any one of claims 32-34, wherein said updater comprises a list of known non-cacheable information files of which no copies should be requested.
36. An arrangement as claimed in any one of claims 16-35, wherein said feeder is implemented with a low-end computer and said hub file server is implemented with a high-end computer.
37. An arrangement as claimed in any one of claims 32-35, wherein said updater is implemented with a low-end computer and said hub file server is implemented with a high-end computer.
38. An arrangement as claimed in claim 37, wherein said updater and at least one feeder are implemented with one single low-end computer.
39. An Internet caching system comprising:
a set of local Internet caching servers, each local cache server being arranged to receive requests for Internet information files from users;
at least one hub file server, included at a central caching point and storing cached Internet information files; and
feeder means interconnecting said set of local cache servers and said hub file server, said feeder means comprising at least one feeder which includes means for communicating with at least one local cache server in accordance with a protocol for communication between Internet caching servers, and means for retrieving Internet information files from said hub file server using database queries, thereby reducing the load on said hub file server.
40. A system as claimed in claim 39, wherein said feeder means is included at said central caching point.
41. A system as claimed in claim 39 or 40, wherein said feeder means comprises a plurality of feeders, each of said feeders interconnecting a subset of the local cache servers and said hub file server.
42. An Internet caching system as claimed in any one of claims 39-41, wherein said central caching point is arranged to serve a defined set of local cache servers, which set in turn serves a linguistically and culturally coherent user community.
43. An Internet caching system as claimed in any one of claims 39-42, wherein the protocol used is either the Internet Cache Protocol or Cache Digest.
44. An Internet caching system as claimed in any one of claims 39-43, wherein each of said feeders comprises a table holding a copy of the full index of all information files cached at said central caching point.
45. An Internet caching system as claimed in any one of claims 39-44, wherein said hub file server caches Internet information files whose origin host names fall within a predetermined range.
46. An Internet caching system as claimed in any one of claims 39-45, further comprising updating means interconnecting said hub file server and at least one local cache server in said set, for retrieving a copy of an Internet information file from the origin server of the file or from said at least one local cache server, and for saving said copy on said hub file server.
CN99801667A 1998-09-24 1999-09-22 Internet cashing system and method and arrangement in such system Pending CN1286774A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SE98032469 1998-09-24
SE9803246A SE514376C2 (en) 1998-09-24 1998-09-24 An internet caching system as well as a procedure and device in such a system

Publications (1)

Publication Number Publication Date
CN1286774A true CN1286774A (en) 2001-03-07

Family

ID=20412708

Family Applications (1)

Application Number Title Priority Date Filing Date
CN99801667A Pending CN1286774A (en) 1998-09-24 1999-09-22 Internet cashing system and method and arrangement in such system

Country Status (28)

Country Link
EP (1) EP1040425A4 (en)
JP (1) JP2002525749A (en)
KR (1) KR20010032419A (en)
CN (1) CN1286774A (en)
AR (1) AR025806A1 (en)
AU (1) AU6389999A (en)
BR (1) BR9906468A (en)
CA (1) CA2310603A1 (en)
DE (1) DE1040425T1 (en)
ES (1) ES2152204T1 (en)
GR (1) GR20010300011T1 (en)
HU (1) HUP0004164A2 (en)
ID (1) ID27668A (en)
IL (1) IL136281A0 (en)
IS (1) IS5494A (en)
LT (1) LT4797B (en)
LV (1) LV12597B (en)
NO (1) NO20002614L (en)
PA (1) PA8482301A1 (en)
PE (1) PE20001191A1 (en)
PL (1) PL340807A1 (en)
RU (1) RU2000112850A (en)
SA (1) SA99200851A (en)
SE (1) SE514376C2 (en)
TR (1) TR200001474T1 (en)
TW (1) TW437205B (en)
WO (1) WO2000017765A1 (en)
ZA (1) ZA996124B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1316375C (en) * 2001-08-03 2007-05-16 诺基亚有限公司 Method, system and terminal for data network having distributed cache-memory
CN100544347C (en) * 2002-11-26 2009-09-23 国际商业机器公司 Support that in individual system a plurality of native network agreements realize
CN1938701B (en) * 2004-03-26 2010-12-22 英国电讯有限公司 Metadata based prefetching
CN101084662B (en) * 2004-12-22 2012-07-11 艾利森电话股份有限公司 Methods and arrangements for caching static information for packet data applications in wireless communication systems

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2405489C (en) * 2000-04-07 2012-07-03 Movielink, Llc Secure digital content licensing system and method
US7043563B2 (en) * 2000-04-17 2006-05-09 Circadence Corporation Method and system for redirection to arbitrary front-ends in a communication system
US6879998B1 (en) 2000-06-01 2005-04-12 Aerocast.Com, Inc. Viewer object proxy
US7213062B1 (en) 2000-06-01 2007-05-01 General Instrument Corporation Self-publishing network directory
US6904460B1 (en) 2000-06-01 2005-06-07 Aerocast.Com, Inc. Reverse content harvester
US6836806B1 (en) 2000-06-01 2004-12-28 Aerocast, Inc. System for network addressing
KR100394189B1 (en) * 2000-08-23 2003-08-09 주식회사 아라기술 Method for servicing web contents by using a local area network
US6868439B2 (en) * 2002-04-04 2005-03-15 Hewlett-Packard Development Company, L.P. System and method for supervising use of shared storage by multiple caching servers physically connected through a switching router to said shared storage via a robust high speed connection
US7797298B2 (en) * 2006-02-28 2010-09-14 Microsoft Corporation Serving cached query results based on a query portion
KR101109273B1 (en) * 2009-12-24 2012-01-30 Samsung Electro-Mechanics Co., Ltd. Mobile telecommunication terminal sharing temporary internet file and temporary internet file sharing method using its terminal
US9294582B2 (en) 2011-12-16 2016-03-22 Microsoft Technology Licensing, Llc Application-driven CDN pre-caching
TWI513284B (en) * 2012-12-28 2015-12-11 Chunghwa Telecom Co Ltd Reverse proxy system and method
CN104506450A (en) * 2014-11-06 2015-04-08 Xiaomi Inc. Media resource feedback method and device

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5511208A (en) * 1993-03-23 1996-04-23 International Business Machines Corporation Locating resources in computer networks having cache server nodes
JPH06290090A (en) * 1993-04-06 1994-10-18 Matsushita Electric Ind Co Ltd Remote file accessing system
US5794229A (en) * 1993-04-16 1998-08-11 Sybase, Inc. Database system with methodology for storing a database table by vertically partitioning all columns of the table
US5588060A (en) * 1994-06-10 1996-12-24 Sun Microsystems, Inc. Method and apparatus for a key-management scheme for internet protocols
US6160549A (en) * 1994-07-29 2000-12-12 Oracle Corporation Method and apparatus for generating reports using declarative tools
US5974455A (en) * 1995-12-13 1999-10-26 Digital Equipment Corporation System for adding new entry to web page table upon receiving web page including link to another web page not having corresponding entry in web page table
US5978841A (en) 1996-03-08 1999-11-02 Berger; Louis Look ahead caching process for improved information retrieval response time by caching bodies of information before they are requested by the user
US5995943A (en) 1996-04-01 1999-11-30 Sabre Inc. Information aggregation and synthesization system
JP2000510978A (en) * 1996-05-20 2000-08-22 ブリティッシュ・テレコミュニケーションズ・パブリック・リミテッド・カンパニー Information retrieval in cache database
JPH1021174A (en) * 1996-07-01 1998-01-23 Ricoh Co Ltd Data transfer system
JP3481054B2 (en) * 1996-07-04 2003-12-22 シャープ株式会社 Gateway device, client computer and distributed file system connecting them
US5774660A (en) * 1996-08-05 1998-06-30 Resonate, Inc. World-wide-web server with delayed resource-binding for resource-based load balancing on a distributed resource multi-node network
US5944789A (en) 1996-08-14 1999-08-31 Emc Corporation Network file server maintaining local caches of file directory information in data mover computers
GB2317723A (en) * 1996-09-30 1998-04-01 Viewinn Plc Caching system for information retrieval
US5931904A (en) * 1996-10-11 1999-08-03 At&T Corp. Method for reducing the delay between the time a data page is requested and the time the data page is displayed
US5787470A (en) * 1996-10-18 1998-07-28 At&T Corp Inter-cache protocol for improved WEB performance
US5987506A (en) 1996-11-22 1999-11-16 Mangosoft Corporation Remote access and geographically distributed computers in a globally addressable storage environment
US5933849A (en) * 1997-04-10 1999-08-03 At&T Corp Scalable distributed caching system and method


Also Published As

Publication number Publication date
SE9803246D0 (en) 1998-09-24
CA2310603A1 (en) 2000-03-30
BR9906468A (en) 2002-04-16
NO20002614L (en) 2000-07-24
GR20010300011T1 (en) 2001-04-30
KR20010032419A (en) 2001-04-16
LT2000043A (en) 2001-01-25
SE9803246L (en) 2000-03-25
EP1040425A1 (en) 2000-10-04
TR200001474T1 (en) 2000-11-21
IS5494A (en) 2000-05-12
NO20002614D0 (en) 2000-05-22
ES2152204T1 (en) 2001-02-01
WO2000017765A1 (en) 2000-03-30
SE514376C2 (en) 2001-02-19
PL340807A1 (en) 2001-02-26
ZA996124B (en) 2000-03-30
LV12597B (en) 2001-03-20
PE20001191A1 (en) 2000-11-02
ID27668A (en) 2001-04-19
TW437205B (en) 2001-05-28
RU2000112850A (en) 2002-06-10
LV12597A (en) 2000-12-20
SA99200851A (en) 2005-12-03
LT4797B (en) 2001-05-25
AU6389999A (en) 2000-04-10
PA8482301A1 (en) 2002-08-26
AR025806A1 (en) 2002-12-18
HUP0004164A2 (en) 2001-05-28
DE1040425T1 (en) 2001-03-15
IL136281A0 (en) 2001-05-20
JP2002525749A (en) 2002-08-13
EP1040425A4 (en) 2006-06-14

Similar Documents

Publication Publication Date Title
CA2233731C (en) Network with shared caching
CN1286774A (en) Internet cashing system and method and arrangement in such system
US8825754B2 (en) Prioritized preloading of documents to client
CN1151448C (en) Expandable/compressible high-speed cache
US6647421B1 (en) Method and apparatus for dispatching document requests in a proxy
EP2263163B1 (en) Content management
US8275790B2 (en) System and method of accessing a document efficiently through multi-tier web caching
US7587398B1 (en) System and method of accessing a document efficiently through multi-tier web caching
US7509372B2 (en) Method and system for redirecting data requests in peer-to-peer data networks
US7725596B2 (en) System and method for resolving network layer anycast addresses to network layer unicast addresses
US6760756B1 (en) Distributed virtual web cache implemented entirely in software
JP3526442B2 (en) Processing system to enhance data flow from server to client along the network
US6868453B1 (en) Internet home page data acquisition method
CN1194502C (en) System and method for managing network users' access rights
CN102045403A (en) Method, device and system for processing data of distributed network
CN101039250A (en) Image sharing system and method
US20030005078A1 (en) Apparatus and method for providing user-requested content through an alternate network service
US8015160B2 (en) System and method for content management over network storage devices
JP2004086317A (en) Load distribution method and device
US20020007394A1 (en) Retrieving and processing stored information using a distributed network of remote computers
AU2001250169A1 (en) Retrieving and processing stored information using a distributed network of remote computers
EP1259864A2 (en) Distributed virtual web cache implemented entirely in software
US6598085B1 (en) Enabling communications between web servers
CN116233248A (en) Resource response method, device and readable storage medium
MXPA00004999A (en) An internet caching system and a method and an arrangement in such a system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication