CN1774703A - Multi-tiered caching mechanism for the storage and retrieval of content multiple versions - Google Patents
- Publication number
- CN1774703A CN03804105A
- Authority
- CN
- China
- Prior art keywords
- cache memory
- key word
- tier
- cache
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Systems and methods for improved performance in the storing and retrieving of objects. In one embodiment, the invention comprises a multi-tiered caching system implemented in a network transformation proxy. The proxy performs transformations on Web content received from a Web server and stores the transformed content in the caching system. The lowest tier of caches stores the content as cache objects, while higher tiers store references to lower tiers (such as the tier which stores the objects) as their cache objects. Cache entries are looked up using a plurality of keys. Each of the keys is used to look up an entry in a different tier of the cache.
Description
Cross-reference to related applications
This application claims priority to the following applications: U.S. Provisional Patent Application No. 60/349,553, entitled "Multi-Tiered Caching Mechanism for the Storage and Retrieval of Multiple Versions of Content," filed on January 18, 2002 by Jeremy S. de Bonet; U.S. Provisional Patent Application No. 60/349,424, entitled "Modular Plug-In Transaction Processing Architecture," filed on January 18, 2002 by de Bonet et al.; and U.S. Provisional Patent Application No. 60/349,424, entitled "A Network Proxy Platform that Simultaneously Supports Data Transformation, Storage, and Manipulation for Multiple Protocols," filed on January 18, 2002 by de Bonet et al. The full contents of the above applications are hereby incorporated by reference. In addition, U.S. Patent Application Serial No. _ (Attorney Docket No. IDET1130-1), entitled "Method and System of Performing Transactions Using Shared Resources and Different Applications," filed on January 14, 2003 by de Bonet et al., is hereby incorporated by reference.
Reference to appendix
An appendix is included with this application, the full contents of which are hereby incorporated by reference for all purposes as an integral part of the application. Appendix 1 is entitled "Network Proxy Platform and Applications Thereof" and comprises 17 pages of text and 8 figures.
Technical field
The present invention relates generally to the storage and retrieval of electronic entities. More particularly, the invention relates to the use of a multi-tiered cache for storing and retrieving objects in which groups of objects can be interrelated, such as multiple versions of Web content stored on a network transformation proxy.
Background of the invention
There are several methods for storing data. One such method uses an associative array. In an associative array, each stored object is associated with a key. The object resides at a particular location that is identified by its associated key. To retrieve the object, one need only look up the key, because the key identifies the object's location.
Associative arrays have various implementations. Databases, file systems, and caches, for example, are all associative arrays. Caches are of particular interest here.
A cache provides an associative array for the local storage of data. "Local" is a relative term here. Where a cache is attached to a microprocessor so that the two can work together faster and more efficiently, "local" may mean memory fabricated on the same chip as the microprocessor. Where a cache is used by a network proxy, however, "local" may mean that the cache is implemented on a disk drive within the proxy's own chassis.
Caching proxies use the URL associated with a Web page as the key for storing and retrieving Web content. One problem that arises in this case is that a large number of different Web pages may share the same URL. For example, the content of two pages may be nearly identical, yet each may be suited for viewing on a different type of device (e.g., a desktop computer or a Web-enabled cell phone). To uniquely identify the page to be retrieved, the key must therefore include additional information. The key may thus incorporate further characteristics of the page, such as cookies or the type of browser for which the page was designed.
The caches implemented in prior-art proxies are typically flat. In other words, there is a single cache with many entries. Each entry holds one Web page associated with a corresponding key. As described above, the key may combine the URL with whatever other characteristics are needed to uniquely identify the cached content. Consequently, if a proxy needs to store 1,000 Web pages with distinct URLs, 1,000 cache entries are required. If the proxy must store 10 different versions of each of those pages, 10,000 entries are required.
Because the cache is flat, the time and/or memory required to store and access its entries increases with the number of entries (depending on the data structure used, the lookup time may range from O(n) down to O(log(n)) or even O(1), constant time). The similarity between entries brings no benefit; that is, no advantage is taken of the fact that dozens of entries may simply be different versions of the same Web page.
Moreover, when a flat cache structure is used to store multiple versions of content, there is no way to treat related content as a set. For example, there is no way to store data common to all related content (such as HTTP headers or other information shared by the multiple versions of the same Web page); the shared information must be stored separately for every individual version. Likewise, there is no way to manipulate the related content as a group. For example, if one wants to update every version of an outdated Web page, there is no single action that affects all versions; they must be located and updated individually within the cache structure.
It should be noted that, while databases do have multi-level storage mechanisms, these are not the same as a cache structure. A database is not designed as a functional library internal to other programs. In a database system, the tree and the multi-level storage and index structures must be explicitly constructed by the database programmer, and, because of the effort, cost, and system overhead required to implement a database system, that technology is not suitable for high-performance cache retrieval.
Summary of the invention
One or more of the problems discussed above may be solved by the various embodiments of the invention. Broadly speaking, the invention comprises systems and methods for improving the performance of storing and retrieving objects. In one embodiment, the invention comprises a multi-tiered cache system in which cache entries are looked up using a plurality of keys. The lowest tier of the cache system stores the objects themselves, while higher tiers store references to lower tiers (such as the tier that stores the objects). Each key is used to look up an entry in a different tier of the cache system.
An exemplary embodiment is implemented in a network transformation proxy. The proxy is configured to handle the communications between a Web server and one or more clients (e.g., Web browsers). The proxy therefore operates more efficiently if it is configured to cache the Web pages it serves to clients. In this embodiment, the proxy is configured to store multiple versions of each Web page, where each version corresponds to, for example, a different client device, each device having its own display characteristics and capabilities. Rather than storing all of the different versions of a Web page in a flat cache, the pages are stored in a multi-tiered cache. More specifically, the pages are stored in a two-tier cache, where the URL of a page serves as the key into the first tier and the version of the page serves as the key into the second tier (which actually comprises a plurality of caches). When a client requests a Web page, the URL of the desired page and the client's device type are identified from the request. The network transformation proxy indexes into the first-tier cache using the URL as a key. The entry corresponding to that key (the URL) contains an object identifying a second-tier cache. The proxy then indexes into the identified second-tier cache using a second key (the device type). The entry of the identified second-tier cache corresponding to the second key contains the desired Web page as its object. The page can then be retrieved and served to the client.
An alternative embodiment comprises a method for storing and retrieving objects in a multi-tiered cache. Each object to be stored has a plurality of associated keys. Each key is used to index into a cache at a different tier. Except for the caches in the last tier, the caches at every tier hold objects that are references to caches in the following tier. The caches in the last tier store the objects themselves rather than references to other caches. Alternatively, the last-tier caches may hold references to the stored objects rather than the objects themselves. Storing an object in the multi-tiered cache system therefore comprises storing, in a first-tier cache, an entry containing a first key and a reference to a second-tier cache; possibly repeating this operation for additional tiers (e.g., storing, in the second-tier cache, an entry containing a second key and a reference to a third-tier cache); and storing the object in the lowest-tier cache under the last key. Retrieving an object comprises indexing into the first-tier cache with the first key to obtain a reference to a second-tier cache, repeating this step for the remaining lower tiers until the last tier is reached, and then indexing into the last-tier cache with the last key to retrieve the object.
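By way of illustration only, the following C++ sketch shows how the two-tier storage and retrieval described above might look when each tier is modeled with an ordinary standard-library map. The names TieredCache, store and retrieve are assumptions of this sketch rather than terms used elsewhere in this disclosure, and std::unordered_map merely stands in for whatever cache data structure an actual embodiment uses.

#include <optional>
#include <string>
#include <unordered_map>

// A minimal two-tier cache: the first tier maps a first key (e.g. a URL) to a
// second-tier cache, and each second-tier cache maps a second key (e.g. a
// version descriptor) to the stored object itself.
class TieredCache {
public:
    // Store an object under a pair of keys; the second-tier cache for the
    // first key is created automatically on first use.
    void store(const std::string& key1, const std::string& key2,
               const std::string& object) {
        tier1_[key1][key2] = object;
    }

    // Retrieve an object: index into the first tier with key1 to obtain the
    // second-tier cache, then index into that cache with key2.
    std::optional<std::string> retrieve(const std::string& key1,
                                        const std::string& key2) const {
        auto t1 = tier1_.find(key1);
        if (t1 == tier1_.end()) return std::nullopt;      // first-tier miss
        auto t2 = t1->second.find(key2);
        if (t2 == t1->second.end()) return std::nullopt;  // second-tier miss
        return t2->second;
    }

private:
    std::unordered_map<std::string,
                       std::unordered_map<std::string, std::string>> tier1_;
};

Extending the sketch to additional tiers amounts to nesting further maps, or iterating over a list of keys, in the manner of the generalized N-tier method described below with reference to Fig. 4.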
Another embodiment of the invention comprises a software application. The software application is embodied in a computer-readable medium such as a floppy disk, CD-ROM, DVD-ROM, RAM, ROM, database schema, or the like. The computer-readable medium contains instructions configured to cause a computer to perform the methods outlined above. It should be noted that the computer-readable medium may comprise RAM or other memory that forms part of a computer system. The computer system would thereby be enabled to perform methods in accordance with the present disclosure and is believed to be within the scope of the appended claims.
Numerous other embodiments are also possible.
Description of drawings
Other objects and advantages of the invention will become apparent upon reading the following detailed description and upon reference to the accompanying drawings.
Fig. 1 is a diagram illustrating an exemplary architecture of a Web-based system according to an embodiment of the invention.
Fig. 2 is a diagram illustrating the basic configuration of a computer suitable for use as a network transformation proxy according to an embodiment of the invention.
Fig. 3 is a diagram illustrating the structure of a multi-tiered cache according to an embodiment of the invention.
Fig. 4 is a flow diagram illustrating a generalized method applicable to an N-tier cache structure according to an embodiment of the invention.
While the invention is subject to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and the accompanying detailed description. It should be understood, however, that the drawings and detailed description are not intended to limit the invention to the particular embodiments described. On the contrary, this disclosure is intended to cover all modifications, equivalents and alternatives falling within the scope of the invention as defined by the appended claims.
Detailed description of the preferred embodiments
A preferred embodiment of the invention is described in detail below. It should be understood that this embodiment, and any others described below, are exemplary and are intended to illustrate rather than to limit the invention.
Broadly speaking, the invention comprises systems and methods for improving the performance of storing and retrieving objects. In one embodiment, the invention comprises a multi-tiered cache system in which a plurality of keys are used to look up cache entries. The lowest tier of the cache stores the objects, while higher tiers store references to lower tiers (such as the tier that stores the objects). Each key is used to look up an entry at a different tier of the cache system.
An exemplary embodiment is implemented in a network transformation proxy. The proxy is configured to intercept and manipulate the communications between a Web server and one or more clients (e.g., Web browsers). The proxy therefore operates more efficiently if it is configured to cache the Web pages it serves to clients. In this embodiment, the proxy is configured to generate multiple versions of each Web page, where each version corresponds to, for example, a different optimization for each type of client device, each device having its own display characteristics and capabilities. Rather than storing all of the different versions of a page in a flat cache, the page versions are stored in a multi-tiered cache. More specifically, the pages are stored in a two-tier cache, where the URL of a page is used as the key into the first tier, and the device for which the page was transformed is used as the key into the second tier.
When a client requests a Web page, the URL of the desired page and the client's device type can be identified from the request. The network transformation proxy uses the URL as a key to index into the first-tier cache. The entry corresponding to that key (the URL) contains an object that identifies a second-tier cache. The proxy then indexes into the identified second-tier cache with the second key (the device type). The entry of the identified second-tier cache corresponding to the second key contains the desired Web page as its object. The page can then be retrieved and served to the client.
Although the preferred embodiment is implemented in a network transformation proxy, it should be understood that the invention is generally applicable to multi-tiered caches in many kinds of systems. Thus, even though the present disclosure focuses primarily on an implementation of the invention in a network transformation proxy, this focus is exemplary rather than limiting.
The preferred embodiment of the invention operates in a network environment. The network and its components are used to distribute Web content (e.g., Web pages) from one or more servers to one or more clients. An exemplary architecture is shown in Fig. 1.
As shown in Fig. 1, the architecture comprises a client 12 connected to a network transformation proxy 14, which in turn is connected to a Web server 16. Network transformation proxy 14 includes a cache subsystem 18. Client 12 is connected to proxy 14 through a first network 13, and proxy 14 is connected to Web server 16 through a second network 15. It is contemplated that at least one of networks 13 and 15 comprises the Internet. The other may comprise a network internal or external to a particular enterprise. It should be noted, however, that the connections between client 12, proxy 14, and Web server 16 need not be configured in any particular manner for the purposes of the invention.
The proxy handles communications between client devices or programs, such as Web browsers, and server devices or programs, such as Web servers. In a Web-based system, the proxy handles clients' requests for Web content, and the Web server responds to those requests by providing the content. In handling these communications, the proxy is responsible for emulating the Web server and thereby reducing the burden on the system (on the Web server and on the network itself). The proxy does this by storing some of the content provided by the Web server and, when possible, responding to requests for content by providing the stored content to the clients. In this way, the proxy relieves the Web server of the burden of serving a portion of the clients' requests.
In a preferred embodiment, the network transformation proxy is configured to perform transformations on the Web content provided by the server. The transformations may depend on the client making the request and on the manner in which the request is made. A transformation may comprise a modification that optimizes the content for use on a particular type of client device. For example, a Web page may be modified to suit the capabilities of a different display device (e.g., images on the page may undergo color reduction or conversion to black and white). The proxy may therefore generate multiple versions of a particular Web page (or other Web content). The proxy may also perform transformations that customize the content further for the client, such as inserting different advertisements for different clients. The proxy then needs to store these different versions of the Web content.
To create and identify the different versions of a Web page, the network transformation proxy uses information about the type of transformation performed on the page and about the version provided by the origin server. The version cache key may also indicate the parameter values used in the transformation. For example, if the network transformation proxy performs color reduction on an image, the key will include a label indicating the number of colors to which the image was reduced. A proxy according to the invention thus provides a fast and efficient mechanism for storing and retrieving Web content, even though the multiple versions of the content increase the total amount of information that must be stored.
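Purely as an illustration of such a version key, the short sketch below concatenates the transformation name with its parameter values; the function name make_version_key and the particular parameters shown are hypothetical and are not drawn from this disclosure.

#include <sstream>
#include <string>

// Build a second-tier key that records both the transformation applied and
// the parameter values used, so that, for example, an image reduced to 8
// colors and the same image reduced to 256 colors are cached as distinct
// versions of the same URL.
std::string make_version_key(const std::string& transform_name,
                             const std::string& device_type,
                             int color_depth) {
    std::ostringstream key;
    key << transform_name << ";device=" << device_type
        << ";colors=" << color_depth;
    return key.str();  // e.g. "color_reduce;device=phone;colors=8"
}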
Referring to Fig. 2, the basic configuration of a computer suitable for use as a network transformation proxy according to one embodiment of the invention is shown. Proxy 14 is implemented in a computer system 100. Computer system 100 comprises a central processing unit (CPU) 112, read-only memory (ROM) 114, random-access memory (RAM) 116, a hard disk drive (HD) 118, and input/output devices (I/O) 120. Computer system 100 may have more than one CPU, ROM, RAM, hard disk drive, I/O device, or other hardware component; for simplicity, however, it is depicted as having a single component of each type. It should be understood that the system shown in Fig. 2 is a simplified depiction of an exemplary hardware configuration, and that many other alternative configurations are possible.
Portions of the methods described herein may be implemented by suitable software applications residing in a memory such as ROM 114, RAM 116, or hard disk drive 118. A software application may comprise program instructions that are configured to cause the data processor executing them to perform the methods described herein. The instructions may be contained (stored) in an internal storage device such as ROM 114, RAM 116 or hard disk drive 118, in some other device, in external storage, or on a storage medium readable by a data processor such as computer system 100 or even CPU 112. Such media may include, for example, floppy disks, CD-ROMs, DVD-ROMs, magnetic tape, optical storage media, and the like.
In an illustrative embodiment of the invention, the computer-executable instructions may be lines of compiled C++, Java, or other language code. Other architectures may also be used. For example, the functions of any one of the computers may be performed by a different computer shown in Fig. 2. Additionally, a computer program carrying such code, or its software components, may be stored on more than one computer-readable storage medium in more than one data processing system.
In the hardware configuration above, the various software components may reside on any single computer or on any combination of separate computers. In alternative embodiments, some or all of the software components may reside on the same computer. For example, one or more of the software components of proxy computer 100 could reside on a client computer, a server computer, or both. In still another embodiment, the functions performed by the proxy computer could be incorporated into the client computer or the server computer, so that the proxy computer itself is not required. In such an embodiment, the client computer and the server computer may be connectable to the same network.
Communications between any of the client, server, and proxy computers can be accomplished using electronic, optical, radio-frequency, or other signals. For example, when a user is at a client computer, the client computer may convert the signals into a form intelligible to a human and may convert human input into appropriate electronic, optical, radio-frequency, or other signals to be used by the proxy or server computers when transmitting communications to the user. Similarly, when an operator is at a server computer, the server computer may convert the signals into a form intelligible to a human and may convert human input into appropriate electronic, optical, radio-frequency, or other signals to be used by the computers when transmitting communications to the operator.
As explained above, the proxy is responsible for storing information previously provided by the Web server so that this information can be supplied to clients in response to their requests. This information is stored in the proxy's cache subsystem. The cache subsystem actually comprises a number of caches organized into two or more tiers. The upper and intermediate tiers reference caches in lower tiers. The lowest tier actually stores the desired information. Information stored in the cache subsystem is accessed by going through each of the tiers in turn.
Referring to Fig. 3, the structure of a multi-tiered cache system according to an embodiment of the invention is shown. As depicted, cache subsystem 18 comprises a first tier 22 and a second tier 24. First tier 22 actually consists of a single cache 30 having a plurality of entries (e.g., 31-33). Second tier 24 comprises a plurality of caches (e.g., 40, 50, 60).
Each entry of each cache comprises a key and an object. The key is used to identify the desired entry in the cache. The object is the desired information stored in the cache. Because cache subsystem 18 is designed to store Web content, the keys of the entries (e.g., 35) of first-tier cache 26 comprise URLs (uniform resource locators). The objects of the entries (e.g., 36) comprise references to caches in second tier 24. For example, object 36 of entry 31 of the first-tier cache is a reference to second-tier cache 40.
Cache 40 (like the other second-tier caches) is very similar in that each of its entries (e.g., 41, 42, 43) comprises a key and an object. Because cache 40 is located at the lowest tier of the cache system structure, however, the objects contained in its entries comprise objects of the type that cache subsystem 18 is designed to store (e.g., Web pages). If cache subsystem 18 had more than two tiers, the objects contained in the caches of second tier 24 (such as cache 40) would comprise references to third-tier caches. That third tier might be the lowest tier, or yet another intermediate tier in which the cached objects comprise references to caches in a succeeding tier. The structure can thus be extended to any number N of tiers.
The flow diagram of Fig. 4 summarizes the method employed to retrieve a stored object using this cache structure. The flow diagram represents a generalized method applicable to an N-tier cache structure. As shown in the figure, the first key is used to index into the first-tier structure to retrieve a reference to a second-tier cache. This operation is repeated according to the number of tiers N in the cache structure. At the last tier (tier N), the Nth key is used to index into the cache to retrieve the stored object.
In the case of the preferred embodiment, in which the cache subsystem is implemented in a network transformation proxy, the keys of first-tier cache 26 comprise the names (e.g., URLs) of the pieces of content stored there. The objects in first-tier cache 26 comprise references to caches in second tier 24. The keys in the second-tier caches are based on parameters of the different versions of the content identified by a given URL. The objects in the second-tier caches comprise the Web content stored by cache subsystem 18. The combination of the keys in the first- and second-tier caches can be used to store or retrieve any version of any piece of content stored by cache subsystem 18.
In a simple embodiment of the invention, two different caches (or as many caches as are needed) each use a key to store a value. The caches are functionally similar to one another. A cache may comprise any caching or associative storage structure.
One embodiment of the invention is a multi-tiered cache system. In a simple embodiment, the invention uses a two-tier cache system. At the first tier, a key based on the name of the content identifies one of many second-tier caches. In the second-tier cache, a key encapsulating a description of the indicated version identifies the particular version of the content specified by the first-tier key. This can be expressed programmatically as follows:
Level_1_Cache := CacheOf<
f(Content_Name),
Level_2_Cache
>
Level_2_Cache := CacheOf<
g(Description_Of_Content_Version),
Content_Version
>
The abstract functions f() and g() convert their arguments into keys that are compact and easily matched. In the preferred embodiment, MD5sum is used, but any (approximately) unique encoding could be used. CacheOf<Key, Content> is a caching data structure that stores and retrieves content by its associated key.
The first-tier cache (Level_1_Cache) is a cache of caches: second-tier caches (Level_2_Cache) are stored and retrieved in it using keys based on the names of the content. Each second-tier cache is a standard cache that associates a description of a content version with the appropriate version of the content. It should be noted that the keys of the second-tier caches need not encapsulate the content name, because everything within a given second-tier cache is a different version of the same content, namely the content identified by Content_Name.
A second-tier cache, which stores keys comprising version descriptions and values comprising Web pages, is similar to the state of the art. The first-tier cache, however, which stores keys comprising URLs and values comprising second-tier caches, is unique in that what it caches is itself a cache.
In a preferred embodiment, the first-tier cache (Level_1_Cache) is keyed on the name (e.g., URL) of the content and points to a second-tier cache. The second-tier cache (corresponding to Level_2_Cache) is keyed on the type and parameter settings of the transformation applied to the content.
In the preferred embodiment, the key into the first-tier cache (corresponding to Level_1_Cache of the simple embodiment described above) is provided by the MD5sum of the content's URL. The MD5 algorithm, developed by Professor Ronald L. Rivest of MIT, is a commonly used one-way function with the property that it is extremely unlikely that two unequal inputs will produce the same result. The algorithm makes the keys that identify the data more efficient.
The second-tier caches contain the multiple versions of the content identified by a URL. The proxy creates these various versions by performing parameterized transformations on the Web content. The keys into the second-tier caches are provided by the MD5sum of the transformation name and its parameters. Each key thereby identifies the version of the content produced by a transformation with those parameter settings.
Using the MD5sum function, this structure can be described as:
Level_1_Cache:=OneCacheOf<
MD5Sum(URL),
Level_2_Cache
>
Level_2_Cache:=OneCacheOf<
MD5Sum(Transformation_Parameters),
Transform(Content,Transformation_Parameters)
>
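As an informal companion to the structure above, the fragment below shows the same two-level arrangement with the keys digested before use. std::hash is substituted for MD5Sum solely to keep the sketch self-contained (it is not a cryptographic hash), and the aliases Level1Cache and Level2Cache, the digest helper, and the sample values are assumptions of the sketch rather than definitions from this disclosure.

#include <cstddef>
#include <functional>
#include <string>
#include <unordered_map>

// Stand-in for MD5Sum: any (approximately) collision-free digest of the key
// material will do; std::hash is used here only for brevity.
static std::size_t digest(const std::string& s) {
    return std::hash<std::string>{}(s);
}

// Level 2: digest(transformation parameters) -> transformed content.
using Level2Cache = std::unordered_map<std::size_t, std::string>;
// Level 1: digest(URL) -> the Level2Cache holding every version of that URL.
using Level1Cache = std::unordered_map<std::size_t, Level2Cache>;

int main() {
    Level1Cache level1;
    const std::string url    = "http://www.example.com/index.html";
    const std::string params = "color_reduce;colors=8";

    // Store one transformed version of the page under the pair of digests.
    level1[digest(url)][digest(params)] = "<html>8-color version</html>";
    return 0;
}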
The preferred embodiment uses C++ templates to create the caches of the multi-tiered cache storage structure. C++ templates make it unnecessary to write separate bodies of code to accomplish the same task. They allow a task to be abstracted so that a single C++ object can perform multiple tasks: to accomplish a particular task, keys and values of any type can be assigned within the template. In the case of the present system, C++ templates make it unnecessary to write two separate bodies of code for the first- and second-tier caches; the key and value types of the two different caches can both be plugged into the same C++ template structure. Exemplary systems and methods that use C++ templates in this manner are described in detail in U.S. Patent Application No. _ (Attorney Docket No. IDET1150-1), entitled "A Design for the Storage and Retrieval of Arbitrary Content and Application Data," filed on January 16, 2003 by inventors Jeremy S. de Bonet, Todd A. Stiers, Jeffrey R. Annison, Phillip Alvelda VII and Paul M. Scanlan.
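A minimal sketch of this template approach, under the assumption that each cache behaves like an associative container with put and get operations; the class name Cache and its member names are illustrative only. The point of the sketch is that a single template body serves both tiers simply by substituting the key and value types.

#include <optional>
#include <string>
#include <unordered_map>

// One template body, usable at every tier of the cache structure.
template <typename Key, typename Value>
class Cache {
public:
    void put(const Key& key, const Value& value) { entries_[key] = value; }

    std::optional<Value> get(const Key& key) const {
        auto it = entries_.find(key);
        if (it == entries_.end()) return std::nullopt;
        return it->second;
    }

private:
    std::unordered_map<Key, Value> entries_;
};

// Second tier: version description -> transformed Web page.
using Level2Cache = Cache<std::string, std::string>;
// First tier: URL -> second-tier cache, i.e. a cache whose values are caches.
using Level1Cache = Cache<std::string, Level2Cache>;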
Using multi-tiered cache storage makes cache lookups more efficient. For example, a caching proxy may store 10 distinct versions of each of 1,000 URLs. Using the present invention, it is not necessary to save the content as 10,000 separate entities under 10,000 separate keys. Instead, the URLs can be saved as 1,000 separate entities under 1,000 keys. When a particular version of a particular page must be looked up, the proxy need only search among the 1,000 URLs and then among the 10 versions of that URL. In the worst case this requires searching 1,000 + 10 = 1,010 separate entities rather than 10,000.
The invention also provides a means of storing data common to all of the related content. For example, the date of the content's creation, or various other HTTP headers, may be common to all of the versions (as in the case of a transformation proxy), and the invention provides a common location in which to store this information. It is not necessary to store this information separately for each version of the content, and if any changes are needed, they can be made to all of the versions of the content at once.
Likewise, because all of the versions of a Web page are stored together in a single cache system, a developer can manipulate, dump, or delete them together, without having to identify each one separately.
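A sketch of such a group operation, under the same illustrative assumptions as the earlier fragments (tiers modeled as nested standard-library maps; the function name invalidate_all_versions is hypothetical): erasing the single first-tier entry for a URL discards every cached version of that page at once.

#include <string>
#include <unordered_map>

using Level2Cache = std::unordered_map<std::string, std::string>;  // version -> page
using Level1Cache = std::unordered_map<std::string, Level2Cache>;  // URL -> versions

// Drop every cached version of the page identified by `url` in one step,
// without enumerating (or even knowing) the individual version keys.
void invalidate_all_versions(Level1Cache& level1, const std::string& url) {
    level1.erase(url);
}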
A developer may extend the cache system beyond two tiers. The additional keys used to index into the caches of these tiers may comprise other identifiers, such as device type, browser, or payment level.
As the differences between network clients increase, it becomes necessary to create multiple versions of content, each optimized for a different type of client. Prior to this invention, there was no method of organizing this multi-versioned content into a single, unified cache structure.
The benefits and advantages that may be provided by the present invention have been described above with regard to specific embodiments. These benefits and advantages are not to be construed as critical, required, or essential features or elements of any or all of the claims, nor as limitations thereof. As used herein, the terms "comprises," "comprising," or any other variations thereof are intended to be interpreted as non-exclusively including the elements or limitations that follow those terms. Accordingly, a system, method, or other embodiment that comprises a set of elements is not limited to only those elements, and may include other elements not expressly listed or inherent to the described embodiment.
While the present invention has been described with reference to particular embodiments, it should be understood that the embodiments are illustrative and that the scope of the invention is not limited to them. Many variations, modifications, additions and improvements to the embodiments described above are possible. It is contemplated that these variations, modifications, additions and improvements fall within the scope of the invention as detailed in the following claims.
Claims (33)
1. A system for storing and retrieving objects of a first type, comprising:
a plurality of caches, wherein each entry in each of the caches comprises a key and an object;
wherein the plurality of caches comprises a plurality of (n+1)th-tier caches, wherein the object of each entry in the (n+1)th-tier caches comprises an object of the first type; and
wherein the plurality of caches further comprises an nth-tier cache, wherein the object of each entry in the nth-tier cache comprises an identifier of a corresponding one of the (n+1)th-tier caches.
2. The system of claim 1, wherein the system comprises a Web page cache system implemented on a network transformation proxy; wherein the nth-tier cache comprises a first-tier cache, wherein the keys in the first-tier cache comprise URLs, and wherein the objects in the first-tier cache comprise identifiers of second-tier caches; and wherein the (n+1)th-tier caches comprise second-tier caches, wherein the keys in the second-tier caches comprise versions corresponding to the URLs, and wherein the objects in the second-tier caches comprise Web pages.
3. The system of claim 2, wherein the proxy is configured to perform one or more transformations on the Web pages and to store the transformed Web pages in the (n+1)th-tier caches.
4. The system of claim 1, wherein the objects comprise Web content.
5. The system of claim 4, wherein at least a portion of the Web content comprises Web pages.
6. The system of claim 1, wherein the keys in the nth-tier cache comprise URLs.
7. The system of claim 1, wherein the keys in the (n+1)th-tier caches comprise version identifiers.
8. The system of claim 1, wherein the system comprises at least one additional tier of caches, and wherein the objects cached in each tier other than the lowest tier reference caches in a lower tier.
9. The system of claim 1, wherein each (n+1)th-tier cache is configured to store information shared in common by all of the entries in the corresponding cache.
10. The system of claim 1, wherein each (n+1)th-tier cache is configured to store multiple versions of a Web page corresponding to a single URL.
11. The system of claim 1, wherein the system is configured to enable all of the entries in a selected (n+1)th-tier cache to be manipulated by a joint operation.
12. A method for retrieving a desired object, wherein a plurality of keys are associated with the desired object, the method comprising:
identifying the plurality of keys associated with the desired object;
looking up an nth entry in an nth-tier cache using an nth one of the identified keys, wherein the nth entry comprises the nth one of the identified keys and an nth object, and wherein the nth object comprises an identifier of an (n+1)th-tier cache; and
looking up an (n+1)th entry in the (n+1)th-tier cache, wherein the (n+1)th entry comprises an (n+1)th one of the identified keys and an (n+1)th object, and wherein the (n+1)th object comprises the desired object.
13. The method of claim 12, wherein the desired object is a Web page; wherein identifying the plurality of keys associated with the desired object comprises identifying the URL of the Web page and a version of the URL; and wherein the method comprises looking up, using the URL, a first entry in a first-tier cache, wherein the first entry comprises the URL as its key and an identifier of a second-tier cache as its object, and looking up a second entry in the second-tier cache, wherein the second entry comprises the version of the URL as its key and the desired Web page as its object.
14. The method of claim 12, wherein the desired object comprises Web content.
15. The method of claim 14, wherein the Web content comprises a Web page.
16. The method of claim 12, wherein the nth key comprises a URL.
17. The method of claim 12, wherein the (n+1)th key comprises a version identifier.
18. The method of claim 12, wherein the caches comprise a cache system having more than two tiers.
19. The method of claim 12, wherein the (n+1)th-tier cache is configured to store information shared in common by all of the entries in the (n+1)th-tier cache.
20. The method of claim 12, wherein the (n+1)th-tier cache is configured to store multiple versions of a Web page corresponding to a single URL.
21. The method of claim 12, further comprising manipulating all of the entries in the (n+1)th-tier cache by a joint operation.
22. The method of claim 12, further comprising performing one or more transformations on Web content and storing the transformed Web content in the (n+1)th-tier cache.
23. A software product comprising a plurality of instructions embodied in a medium readable by a data processor, wherein the instructions are configured to cause the data processor to perform a method comprising:
identifying a plurality of keys associated with a desired object;
looking up an nth entry in an nth-tier cache using an nth one of the identified keys, wherein the nth entry comprises the nth one of the identified keys and an nth object, and wherein the nth object comprises an identifier of an (n+1)th-tier cache; and
looking up an (n+1)th entry in the (n+1)th-tier cache, wherein the (n+1)th entry comprises an (n+1)th one of the identified keys and an (n+1)th object, and wherein the (n+1)th object comprises the desired object.
24. The software product of claim 23, wherein the desired object is a Web page; wherein identifying the plurality of keys associated with the desired object comprises identifying the URL of the Web page and a version of the URL; and wherein the method comprises looking up, using the URL, a first entry in a first-tier cache, wherein the first entry comprises the URL as its key and an identifier of a second-tier cache as its object, and looking up a second entry in the second-tier cache, wherein the second entry comprises the version of the URL as its key and the desired Web page as its object.
25. The software product of claim 23, wherein the desired object comprises Web content.
26. The software product of claim 25, wherein the Web content comprises a Web page.
27. The software product of claim 23, wherein the nth key comprises a URL.
28. The software product of claim 23, wherein the (n+1)th key comprises a version identifier.
29. The software product of claim 23, wherein the caches comprise a cache system having more than two tiers.
30. The software product of claim 23, wherein the (n+1)th-tier cache is configured to store information shared in common by all of the entries in the (n+1)th-tier cache.
31. The software product of claim 23, wherein the (n+1)th-tier cache is configured to store multiple versions of a Web page corresponding to a single URL.
32. The software product of claim 23, wherein the method further comprises manipulating all of the entries in the (n+1)th-tier cache by a joint operation.
33. The software product of claim 23, wherein the method further comprises performing one or more transformations on Web content and storing the transformed Web content in the (n+1)th-tier cache.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US34934402P | 2002-01-18 | 2002-01-18 | |
US60/349,344 | 2002-01-18 | ||
US60/349,424 | 2002-01-18 | ||
US10/345,886 | 2003-01-16 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1774703A true CN1774703A (en) | 2006-05-17 |
CN100514309C CN100514309C (en) | 2009-07-15 |
Family
ID=36760954
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB038041057A Expired - Fee Related CN100514309C (en) | 2002-01-18 | 2003-01-16 | System and method of storing and searching objects through multi-tiered cache memory |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN100514309C (en) |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3934174B2 (en) * | 1996-04-30 | 2007-06-20 | 株式会社エクシング | Relay server |
US6061762A (en) * | 1997-04-14 | 2000-05-09 | International Business Machines Corporation | Apparatus and method for separately layering cache and architectural specific functions in different operational controllers |
GB2361332A (en) * | 2000-04-13 | 2001-10-17 | Int Computers Ltd | Electronic content store |
CA2342558A1 (en) * | 2000-05-30 | 2001-11-30 | Lucent Technologies, Inc. | Internet archive service providing persistent access to web resources |
- 2003-01-16 CN CNB038041057A patent/CN100514309C/en not_active Expired - Fee Related
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102479213A (en) * | 2010-11-25 | 2012-05-30 | 北大方正集团有限公司 | Data buffering method and device |
CN102479213B (en) * | 2010-11-25 | 2014-07-30 | 北大方正集团有限公司 | Data buffering method and device |
CN104956349A (en) * | 2013-03-20 | 2015-09-30 | 惠普发展公司,有限责任合伙企业 | Caching data in a memory system having memory nodes at different hierarchical levels |
US10127154B2 (en) | 2013-03-20 | 2018-11-13 | Hewlett Packard Enterprise Development Lp | Caching data in a memory system having memory nodes at different hierarchical levels |
Also Published As
Publication number | Publication date |
---|---|
CN100514309C (en) | 2009-07-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN1310175C (en) | International information search and delivery system providing search results personalized to a particular natural language | |
US8412717B2 (en) | Changing ranking algorithms based on customer settings | |
EP1738290B1 (en) | Partial query caching | |
US20040148293A1 (en) | Method, system, and program for managing database operations with respect to a database table | |
KR100971863B1 (en) | System and method for batched indexing of network documents | |
US6795817B2 (en) | Method and system for improving response time of a query for a partitioned database object | |
US9652558B2 (en) | Lexicon based systems and methods for intelligent media search | |
CN100485689C (en) | Data speedup query method based on file system caching | |
US9305091B2 (en) | Anchor tag indexing in a web crawler system | |
US8326828B2 (en) | Method and system for employing a multiple layer cache mechanism to enhance performance of a multi-user information retrieval system | |
US20030167257A1 (en) | Multi-tiered caching mechanism for the storage and retrieval of content multiple versions | |
CN1809803A (en) | Method and system for blending search engine results from disparate sources into one search result | |
CN1202257A (en) | System and method for locating pages on the world wide web and for locating documents from network of computers | |
CN1713179A (en) | Impact analysis in an object model | |
CN101136027B (en) | System and method for database indexing, searching and data retrieval | |
CN1255215A (en) | System and method for storing and manipulating data in information handling system | |
CN1422403A (en) | System and method for rapid completion of data processing tasks distributed on a network | |
CN105793843A (en) | Combined row and columnar storage for in-memory databases for OLTP and analytics workloads | |
CN1705945A (en) | Global query correlation attributes | |
CA2302303A1 (en) | System for accessing database tables mapped into memory for high performance | |
CN1755677A (en) | System and method for scoping searches using index keys | |
JP2006065855A (en) | Efficient ranking of web pages via matrix index manipulation and improved caching | |
CN1784680A (en) | Progressive relaxation of search criteria | |
CN1555533A (en) | Method and system for delivering dynamic information in a network | |
CN1916905A (en) | Method for carrying out retrieval hint based on inverted list |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20090715 Termination date: 20200116 |