CN1260887A - Network object cache engine - Google Patents

Network object cache engine

Info

Publication number: CN1260887A
Application number: CN98805987A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: M. Malcolm, R. Zarnke
Current Assignee: Cacheflow Inc
Original Assignee: Cacheflow Inc
Application filed by Cacheflow Inc
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14 Error detection or correction of the data by redundancy in operation
    • G06F11/1402 Saving, restoring, recovering or retrying
    • G06F11/1415 Saving, restoring, recovering or retrying at system level
    • G06F11/1435 Saving, restoring, recovering or retrying at system level using file system or storage system metadata


Abstract

The invention provides a method and system for caching information objects transmitted using a computer network. A cache engine determines directly when and where to store those objects in a memory (such as RAM) and mass storage (such as one or more disk drives), so as to optimally write those objects to mass storage and later read them from mass storage, without having to maintain them persistently. The cache engine actively allocates those objects to memory or to disk, determines where on disk to store those objects, retrieves those objects in response to their network identifiers (such as their URLs), and determines which objects to remove from the cache so as to maintain sufficient operating space. The cache engine collects information to be written to disk in write episodes, so as to maximize efficiency when writing information to disk and when later reading that information from disk. The cache engine performs write episodes so as to atomically commit changes to disk during each write episode, so the cache engine does not fail in response to loss of power or storage, or other intermediate failure of portions of the cache. The cache engine also stores key system objects on each one of a plurality of disks, so as to maintain the cache holographic in the sense that loss of any subset of the disks merely decreases the amount of available cache. The cache engine also collects information to be deleted from disk in delete episodes, so as to maximize efficiency when deleting information from disk and when later writing to those areas holding formerly deleted information. The cache engine responds to the addition or removal of disks as the expansion or contraction of the amount of available cache.

Description

Network object cache engine
Field of the invention
The present invention relates to devices for caching objects transmitted using a computer network.
Related art
In computer networks used for transmitting information, an information provider (sometimes called a "server") is often called upon to transmit the same or similar information to multiple recipients (sometimes called "clients"), or to transmit repeatedly to the same recipient. This results in repeated transmission of the same or similar information, which burdens the network communication infrastructure and server resources and subjects clients to relatively long response times. The problem is particularly acute in several situations: (a) when a particular server suddenly becomes very popular; (b) when information from a particular server is routinely distributed to a relatively large number of clients; (c) when information from a particular server is relatively time-critical; and (d) when the communication path between server and client, or between client and network, is relatively slow.
One known method provides a device (such as a general-purpose processor operating under software control) that acts as a proxy server, receiving requests for information from one or more clients, obtaining that information from one or more servers, and transmitting that information to the clients in place of the servers. When the proxy server has already obtained the information from the servers, it can transmit that information to the client without repeating the request to the server. Although this method achieves the goal of reducing network traffic and server load, it has the drawback that the proxy server's local operating system and local file system (or file server) impose substantial overhead. This increases the expense of operating the network and slows the communication path between server and client.
Several sources of delay arise chiefly because the proxy server cedes control of storage to the local operating system and local file system: (a) the proxy server cannot place information received from servers in mass storage so as to optimize speed of access; (b) the proxy server cannot delete old network objects from storage, or store newly received network objects, in a manner that optimizes access to mass storage. Besides adding overhead and delay, ceding control of storage also limits the proxy server's flexibility in using storage: (a) it is difficult or impossible to add or remove storage allocated to the proxy server while the proxy server is operating; (b) the proxy server and its local file system cannot recover from a loss of storage without expensive redundant storage techniques such as RAID storage systems.
Accordingly, it would be advantageous to provide a method and system for caching information transmitted using a computer network that is not prone to the added delay and restricted flexibility that come from using a local operating system and local file system or file server. This advantage is achieved in an embodiment of the invention in which a cache engine coupled to the network provides a cache of transmitted objects, storing them in memory and mass storage by taking direct control of when and where those objects are stored in mass storage. The cache engine can store those objects holographically, so that it continues to operate smoothly, and recovers gracefully, upon the addition, failure, or removal of parts of its mass storage.
Summary of the invention
The invention provides a method and system for caching information objects transmitted using a computer network. In the invention, a cache engine determines directly when and where to store those objects in memory (such as RAM) and mass storage (such as one or more disk drives), so as to optimally write those objects to mass storage and later read them from mass storage, without having to maintain them persistently. The cache engine actively allocates objects to memory or to disk, determines where on disk to store those objects, retrieves those objects in response to their network identifiers (such as their URLs), and determines which objects to delete from the cache so as to maintain sufficient operating space.
In a preferred embodiment, the cache engine collects information to be written to disk in write episodes, so as to maximize efficiency when writing information to disk and when later reading that information from disk. The cache engine performs write episodes so as to atomically commit changes to disk during each write episode, so that the cache engine does not fail in response to loss of power or storage, or other intermediate failure of portions of the cache. The cache engine stores key system objects on each one of a plurality of disks, so as to maintain the cache holographic in the sense that loss of any subset of the disks merely decreases the amount of available cache. The cache engine collects information to be deleted from disk in delete episodes, so as to maximize efficiency when deleting information from disk and when later writing to those areas holding formerly deleted information. The cache engine responds to the addition or removal of disks as the expansion or contraction of the amount of available cache.
Brief description of the drawings
Fig. 1 shows a block diagram of a network object cache engine in a computer network.
Fig. 2 shows a block diagram of a data structure for maintaining storage blocks for a set of cached network objects.
Fig. 3 shows a block diagram of a data structure for caching network objects.
Fig. 4 shows a block diagram of a set of original and modified blocks.
Fig. 5 shows a flow diagram of a method for atomically writing modified blocks to a single disk drive.
Fig. 6 shows a block diagram of a set of pointers and regions in mass storage.
Detailed description of the preferred embodiment
In the following description, a preferred embodiment of the invention is described with regard to preferred process steps and data structures. Those skilled in the art will recognize, after reading this application, that embodiments of the invention can be implemented using general-purpose processors and storage devices, special-purpose processors and storage devices, or other circuits adapted to the particular process steps and data structures described herein, and that implementing those process steps and data structures would not require undue experimentation or further invention.
Caching network objects
Fig. 1 shows a block diagram of a network object cache engine in a computer network.
A cache engine 100 is coupled to a computer network 110, so that the cache engine 100 can receive messages from a set of devices 111 also coupled to the network 110.
In a preferred embodiment, the network 110 includes a plurality of such devices 111, interconnected using a communication medium 112. For example, where the network 110 includes a LAN (local area network), the communication medium 112 may comprise Ethernet cabling, fiber optic coupling, or other media. The network 110 preferably includes a network of networks, sometimes referred to as an "internet" or "intranet".
In a preferred embodiment, the devices 111 coupled to the network 110 communicate with the cache engine 100 using one or more communication protocols, such as HTTP (hypertext transfer protocol) or one of its variants, or FTP (file transfer protocol). The cache engine 100 includes a processor 101 and a cache 102. In a preferred embodiment, the processor 101 comprises a general-purpose processor operating under software control to perform the methods described herein and to construct and use the data structures described herein; as used herein, when the cache engine 100 is said to perform particular tasks or maintain particular data structures, this refers to the corresponding operations performed by the processor 101 under software control, that software being maintained in a program and data memory 103.
The cache 102 includes the program and data memory 103 and a mass storage 104. In a preferred embodiment, the mass storage 104 comprises a plurality of disk drives, such as magnetic disk drives, but may alternatively comprise optical or magneto-optical drives. As used herein, the terms "disk" and "disk drive" refer to the mass storage 104 and its individual drives, even if the mass storage 104 or its individual drives include no physically disk-shaped elements. The cache engine 100 is coupled to the network 110 and can receive and transmit a set of protocol messages 113 according to the one or more protocols by which the devices 111 communicate with the cache engine 100.
The cache engine 100 maintains a set of network objects 114 in the cache 102. The cache engine 100 receives protocol messages 113 from a set of "client" devices 111 requesting network objects 114 to be retrieved from a set of "server" devices 111. In response, the cache engine 100 issues protocol messages 113 requesting those network objects 114 from one or more server devices 111, receives those network objects 114 and stores them in the cache 102, and transmits those network objects 114 to the requesting client devices 111.
As used herein, the terms "client" and "server" refer to a relationship between the client or server and the cache engine 100, not necessarily to particular physical devices 111. As used herein, a "client device" 111 or "server device" 111 can comprise any of the following: (a) a single physical device 111 executing software that bears a client or server relationship to the cache engine 100; (b) a portion of a physical device 111, such as a software program or set of software programs executing on one hardware device 111, which portion bears a client or server relationship to the cache engine 100; or (c) a plurality of physical devices 111, or portions thereof, cooperating to form a logical entity that bears a client or server relationship to the cache engine 100. The phrases "client device" and "server device" refer to such logical entities and not necessarily to particular individual physical devices 111.
The cache engine 100 retains the network objects 114 in the cache 102 and reuses them by serving them repeatedly to the client devices 111 that request them. When the cache 102 becomes sufficiently full, the cache engine 100 deletes network objects 114 from the cache 102, for example as described in the section "Deleting objects from the cache".
In a preferred embodiment, the cache engine 100 uses the memory 103 as a cache for those network objects 114 maintained using the mass storage 104, just as the memory 103 and mass storage 104 together are used as the cache 102 for network objects 114 available on the network 110.
The cache 102 is not a file storage system, and network objects 114 stored in the cache 102 may be automatically deleted from the cache 102 by the cache engine 100 at any time. All network objects 114 and all other data maintained by the cache 102 are transient, except for a very small number of system objects required for operation; those system objects are maintained redundantly in the mass storage 104 so as to preserve them against possible loss of part of the mass storage 104 (such as failure of one or more disk drives). The cache engine 100 thus need not guarantee that network objects 114 stored in the cache 102 are available at any particular time after they are stored, and failure of part of the cache 102 (such as part of the mass storage 104), or even its deliberate removal, does not cause the cache engine 100 to fail. Similarly, recovered or deliberately added mass storage 104 (such as "hot-swapped" disk drives) is integrated smoothly into the cache 102 without interrupting the operation of the cache engine 100.
Moreover, the cache engine 100 operates exclusively to cache network objects 114. There is no separate "operating system", there are no "users", and there are no separate user application programs executing on the processor 101. There is no division of the memory 103 into separate "user" space and "operating system" space. The cache engine 100 itself maintains the cache 102 of network objects 114 and selects the network objects 114 to be retained in or deleted from the cache 102, operating so as to (1) localize writing of network objects 114 to the mass storage 104, (2) localize deletion of network objects 114 from the mass storage 104, and (3) efficiently replace network objects 114 in the cache 102 with new network objects 114. In a preferred embodiment, the cache engine 100 performs these operations efficiently even while the cache 102 remains relatively full of cached network objects 114.
In a preferred embodiment, the cache engine 100 maintains statistics regarding accesses to the cache 102. These statistics can include the following:
A set of hit rates for the cache 102, including (1) a hit rate for network objects 114 found in the cache 102, versus network objects 114 that had to be retrieved from server devices 111, and (2) a hit rate for network objects 114 found in the memory 103, versus network objects 114 that had to be retrieved from the mass storage 104;
A set of operational statistics for the memory 103, including (1) the number of network objects 114 maintained in the memory 103, and (2) the fraction of the memory 103 devoted to caching network objects 114, versus system objects or unallocated space; and
A set of operational statistics for the mass storage 104, including (1) the number of read operations from the mass storage 104, (2) the number of write operations to the mass storage 104, including the number of "write episodes" described herein, and (3) the fraction of the mass storage 104 devoted to caching network objects 114, versus system objects or unallocated space.
The cache engine 100 can also maintain combinations or variants of the above.
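As a minimal sketch of how such counters might be grouped, written in C under assumptions of our own (none of these names appear in the patent):

    /* Illustrative grouping of the statistics described above. */
    struct cache_stats {
        unsigned long obj_hits;          /* objects served from the cache 102     */
        unsigned long obj_misses;        /* objects fetched from server devices   */
        unsigned long mem_hits;          /* blocks found in the memory 103        */
        unsigned long mem_misses;        /* blocks read from the mass storage 104 */
        unsigned long mem_objects;       /* network objects held in memory        */
        unsigned long mem_cache_blocks;  /* memory blocks caching network objects */
        unsigned long mem_total_blocks;
        unsigned long disk_reads;
        unsigned long disk_writes;
        unsigned long write_episodes;
        unsigned long disk_cache_blocks; /* disk blocks caching network objects   */
        unsigned long disk_total_blocks;
    };

    static double hit_rate(unsigned long hits, unsigned long misses)
    {
        unsigned long total = hits + misses;
        return total ? (double)hits / (double)total : 0.0;
    }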
Using the cache engine
The cache engine 100 can be used in a variety of ways to provide performance improvements or additional functionality in the network 110. For example, the cache engine 100 can serve as a proxy cache (whether to provide a firewall, to provide a cache for client devices 111 coupled to a local area network, or otherwise), as a reverse proxy cache, as a cache for requests made by the clients of a single ISP, as a cache for "push" protocols, or as an accelerator or server cache.
In each case, the cache engine 100 provides relatively rapid access to network objects 114 compared with obtaining them directly from the server devices 111. Client devices 111 typically request those network objects 114 from the cache engine 100, which either transmits them to the client devices 111 from the cache 102, or obtains them from the server devices 111 and then transmits them to the client devices 111.
The cache engine 100 can also exercise more intelligence and proactivity, rather than simply waiting for documents to be requested by client devices 111:
The cache engine 100 can be configured preloaded with selected network objects 114 that are expected to be requested by client devices 111. For example, certain network objects 114 are commonly requested by client devices 111 throughout the networks 110 that make up the internet; these network objects 114 can be preloaded into the cache engine 100 at manufacture. Such network objects 114 can include the home pages of well-known companies (such as NETSCAPE) and well-known search engines (such as DIGITAL's "ALTA VISTA").
The cache engine 100 can periodically request network objects 114 in response to a set of statistics regarding commonly requested network objects 114. For example, information regarding commonly requested network objects 114 can be maintained on a server device 111; the cache engine 100 can request that information from the server device 111 and periodically request those network objects 114 for storage in the cache 102. In a preferred embodiment, the cache engine 100 can perform these operations at times when client devices 111 are not actively using the cache engine 100, such as the relatively idle hours late at night or in the early morning.
The cache engine 100 can periodically request network objects 114 in response to statistics regarding the preferences of a set of clients at the client devices 111. For example, the cache engine 100 can receive (whether on request or otherwise) a set of bookmarks from a client device 111 and can request those network objects 114 from the server devices 111. In a preferred embodiment, the cache engine 100 can request those network objects 114 that have changed within a selected time period, such as one day.
The cache engine 100 can provide a mirror site for one or more server devices 111 by obtaining, periodically or on request, those network objects 114 served by the server devices 111 to client devices 111 that have changed within a selected time period, such as one day.
The cache engine 100 can provide an accelerator for one or more server devices 111, in which requests directed to the server devices 111 are distributed among a plurality of cache engines 100. Each cache engine 100 maintains its cache 102 with network objects 114 served by the server devices 111 to client devices 111. The service provided by the server devices 111 is thereby accelerated, because each cache engine 100 can respond to some fraction of the request load, and the server devices 111 are relieved of having to process every information request themselves.
The cache engine 100 can provide a first kind of "push" protocol assistance to one or more server devices 111, by using a push protocol to transmit network objects 114 to one or more client devices 111 or proxy caches. For example, when a server device 111 provides a broadcast service, the cache engine 100 can receive from the server device 111 the network objects 114 to be broadcast to a subset of the network 110 and can itself broadcast those network objects 114.
The cache engine 100 can provide a second kind of "push" protocol assistance to one or more server devices 111, by allowing the server devices 111 to broadcast network objects 114 to a plurality of cache engines 100. Each cache engine 100 can make the broadcast network objects 114 available to client devices 111, which request those network objects 114 from the cache engine 100 just as if the cache engine 100 were the server device 111 for those network objects 114.
Network objects 114 can include data, such as HTML pages, text, graphics, photographs, audio, and video; programs, such as JAVA or ACTIVE X applets or applications; and other kinds of network objects, such as push protocol objects. The cache engine 100 can record streaming audio or streaming video information in the cache 102 for delayed use by a plurality of client devices 111. Certain known types of network objects 114 are not cached, such as CGI output or items marked non-cacheable by the server device 111.
In a preferred embodiment, the cache engine 100 can glean knowledge about client devices 111 from the protocol messages 113 or by other means, such as by interrogating routing devices in the network 110, and can respond to that information by serving different network objects 114 to different client devices 111. For example, the cache engine 100 can select server devices 111 in response to information about the client devices 111, whether for nearness or for content, as follows:
The cache engine 100 can select a particular server device 111 for rapid response, such as for nearness of network routing, or so as to balance the service load among a plurality of server devices 111.
The cache engine 100 can select content at the server device 111 in response to information about the client device 111, such as tailoring the language of the response (for example, English or French versions of a help page) or tailoring local information (for example, advertisements, news, or weather). In a preferred embodiment, local information such as advertisements can be retrieved from a local server device 111 that supplies advertisements for insertion into pages served to local client devices 111.
The cache
Fig. 2 shows a block diagram of a data structure for maintaining storage blocks for a set of cached network objects.
Each block 200 can comprise either a data block 200, which holds data, that is, information not used by the cache engine 100 but maintained for client devices 111, or a control block 200, which holds control information, that is, information used by the cache engine 100 and not by client devices 111.
Blocks 200 are organized into a set of objects 210, each of which comprises an object descriptor 211, a set of data blocks 200, and a set of block pointers 212 referencing those data blocks 200 from the object descriptor 211. The object descriptor 211 occupies a separate control block 200. Where the block pointers 212 do not fit into a single control block 200, or for other relatively large objects 210, the object descriptor 211 can reference a set of indirect blocks 216, each of which references subordinate indirect blocks 216 or data blocks 200. Each indirect block 216 occupies a separate control block 200. Relatively small objects 210 do not require indirect blocks 216.
The cache 102 comprises a set of blocks 200, each of which in a preferred embodiment comprises 4096 bytes and can be stored in the memory 103 or on the mass storage 104. In alternative embodiments, blocks 200 can have a size other than 4096 bytes, and that size can be responsive to the amount of available memory 103 or mass storage 104.
Each block pointer 212 comprises a pointer value 215, which comprises a single 32-bit word and indicates the location of the block 200 on the mass storage 104, such as a physical disk block address.
In an alternative embodiment, each block pointer 212 comprises a first bit 213, indicating whether the referenced block 200 is stored in the memory 103 or on the mass storage 104, a second bit 214, indicating whether the referenced block 200 is a control block 200 (holding control information) or a data block 200 (holding data for network objects 114), and a pointer value 215 comprising a 30-bit value indicating the location of the block 200. In such embodiments, when the block 200 is stored in the memory 103, the pointer value 215 indicates a byte address in the memory 103; when the block 200 is stored on the mass storage 104, the pointer value 215 indicates a physical disk block address on the mass storage 104.
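A sketch of this alternative 1+1+30-bit encoding follows, in C; the bit positions and helper names are assumptions, since the text specifies only the field widths:

    #include <stdint.h>

    /* Illustrative packing of a block pointer 212 into one 32-bit word:
     * bit 31 = first bit 213, bit 30 = second bit 214,
     * bits 0..29 = pointer value 215. */
    typedef uint32_t block_ptr;

    #define BP_IN_MEMORY   (1u << 31)   /* set: in memory 103; clear: on disk */
    #define BP_IS_CONTROL  (1u << 30)   /* set: control block; clear: data    */
    #define BP_VALUE_MASK  ((1u << 30) - 1)

    static inline block_ptr bp_make(int in_mem, int is_ctl, uint32_t value)
    {
        return (in_mem ? BP_IN_MEMORY : 0)
             | (is_ctl ? BP_IS_CONTROL : 0)
             | (value & BP_VALUE_MASK);
    }

    static inline uint32_t bp_value(block_ptr bp)  { return bp & BP_VALUE_MASK; }
    static inline int bp_in_memory(block_ptr bp)   { return (bp & BP_IN_MEMORY) != 0; }
    static inline int bp_is_control(block_ptr bp)  { return (bp & BP_IS_CONTROL) != 0; }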
In a preferred embodiment, each object 210 is referenced by a root object 220, which is maintained redundantly in a plurality of copies (preferably two) of a root block 221 on each disk drive of the mass storage 104. In a preferred embodiment, there is one root object 220 for each disk drive of the mass storage 104. Thus, each disk drive of the mass storage 104 has its own root object 220, maintained using two copies of its root block 221. The root object 220 for each disk drive references each object 210 current on that disk drive.
In a preferred embodiment, the copies of the root block 221 are maintained at physical disk blocks 2 and 3 of each disk drive of the mass storage 104. When the root block 221 for a disk drive is written to the mass storage 104, it is written first to physical disk block 2 and then identically to physical disk block 3. When the cache engine 100 is started or restarted, the root block 221 is read from physical disk block 2; if that read succeeds, the root block 221 is then written identically to physical disk block 3. If that read fails, the root block 221 is instead read from physical disk block 3 and then written identically to physical disk block 2.
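A sketch of that startup rule, with assumed driver primitives (read_disk_block and write_disk_block are stand-ins, not names from the patent):

    /* Assumed I/O primitives; both return 0 on success. */
    extern int read_disk_block(int drive, unsigned blockno, void *buf);
    extern int write_disk_block(int drive, unsigned blockno, const void *buf);

    enum { ROOT_COPY_A = 2, ROOT_COPY_B = 3 };  /* physical disk blocks 2 and 3 */

    /* Recover the root block 221 for one drive at startup or restart. */
    int load_root_block(int drive, void *buf)
    {
        if (read_disk_block(drive, ROOT_COPY_A, buf) == 0) {
            write_disk_block(drive, ROOT_COPY_B, buf);  /* refresh the mirror */
            return 0;
        }
        if (read_disk_block(drive, ROOT_COPY_B, buf) == 0) {
            write_disk_block(drive, ROOT_COPY_A, buf);  /* repair block 2 */
            return 0;
        }
        return -1;  /* both copies unreadable */
    }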
In a preferred embodiment, the cache engine 100 also stores certain system objects 210 redundantly on each disk drive of the mass storage 104, so as to maintain the cache 102 holographic in the sense that loss of any subset of the disk drives merely reduces the amount of available cache 102. Each such system object 210 is thus referenced by the root object 220 for its disk drive and maintained using two copies of its object descriptor 211. These redundantly maintained system objects 210 include the root object 220, a blockmap object 210, and a hash table 350 (Fig. 3), each as described herein, as well as other system objects 210, such as objects 210 for collected statistics and for program code.
A subset of the blocks 200 is maintained in the memory 103, so as to use the memory 103 as a cache for the mass storage 104 (just as the memory 103 and mass storage 104 together serve as the cache 102 for network objects 114). The blocks 200 maintained in the memory 103 are referenced by a set of block handles 230, also maintained in the memory 103.
Each block handle 230 comprises a forward handle pointer 232, a backward handle pointer 233, a reference counter 234, a block address 235, a buffer pointer 236, and a set of flags 237.
The forward handle pointer 232 and backward handle pointer 233 reference other block handles 230 in a doubly linked list of block handles 230.
The reference counter 234 maintains a count of references to the block 200 by processes of the cache engine 100. The reference counter 234 is updated whenever a block handle 230 for the block 200 is claimed or released by a process of the cache engine 100. When the reference counter 234 reaches zero, there are no references to the block 200, and it is placed on a free list of available blocks 200 after being written to disk at the next write episode, if it has been modified.
The block address 235 has the same format as a block pointer 212. The buffer pointer 236 references a buffer for the block 200. The flags 237 record further information about the block 200.
In one embodiment, block handles 230 also include a set of 2Q pointers 238 and 2Q reference counters 239 for indexing the block handles 230 using the "2Q" technique, described in further detail in "2Q: A Low Overhead High Performance Buffer Management Replacement Algorithm" by THEODORE JOHNSON and DENNIS SHASHA, hereby incorporated by reference.
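A sketch of this handle record in C (field widths and types are assumptions; the names track the reference numerals above):

    #include <stdint.h>

    struct block_handle {                  /* block handle 230 */
        struct block_handle *fwd;          /* forward handle pointer 232    */
        struct block_handle *bwd;          /* backward handle pointer 233   */
        unsigned             refcnt;       /* reference counter 234         */
        uint32_t             addr;         /* block address 235 (same format
                                              as a block pointer 212)       */
        void                *buffer;       /* buffer pointer 236            */
        uint32_t             flags;        /* flags 237                     */
        struct block_handle *q2_fwd;       /* 2Q pointers 238 (optional)    */
        struct block_handle *q2_bwd;
        unsigned             q2_refcnt;    /* 2Q reference counter 239      */
    };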
How network objects are cached
Fig. 3 shows a block diagram of a data structure for caching network objects.
The cache engine 100 receives protocol requests from the network 110. In a preferred embodiment, each protocol request uses the HTTP protocol (or a variant such as SHTTP), and each HTTP request includes a URL (uniform resource locator) 310, which identifies a network object 114 in the network 110. In a preferred embodiment, each URL 310 identifies the server device 111 for the network object 114 and the location of the network object 114 at that server device 111.
In alternative embodiments, the cache engine 100 can use protocols other than HTTP and its variants, and the cache engine 100 can respond to one or more identifiers for network objects 114 other than their URLs 310. Accordingly, as used herein, the term "URL" refers generally to any kind of identifier that is capable of identifying, or assisting in identifying, a particular network object 114.
The URL 310 includes a host identifier, which identifies the server device 111 at which the network object 114 resides, and a document identifier, which identifies the location of the network object 114 at that server device 111. In a preferred embodiment, the host identifier comprises a character string name for the server device 111, which can be resolved into an IP (internet protocol) address. In alternative embodiments, however, the host identifier can comprise the IP address for the server device 111 rather than its character string name.
The cache engine 100 includes a hash function 320, which associates the URL 310 with a hash signature 330 that indexes a hash bucket 340 in a hash table 350 in the cache 102. In a preferred embodiment, the hash table 350 comprises a set of hash tables 350, one for each disk drive, each of which references the network objects 114 in the cache 102 that are stored on that disk drive of the mass storage 104. Each such hash table 350 has its own object descriptor 211; together these hash tables 350 form a single logical hash table.
In a preferred embodiment, the hash signature 330 comprises a 32-bit unsigned integer value determined in response to the URL 310 and expected to be distributed relatively uniformly over all possible 32-bit unsigned integer values. In a preferred embodiment, the URL 310 is also associated with a 64-bit URL signature, likewise an unsigned integer value determined in response to the URL 310 and expected to be distributed relatively uniformly over all possible 64-bit unsigned integer values; when URLs 310 are compared, the URL signatures are compared first, and the URLs 310 themselves are compared only when their URL signatures are equal. In a preferred embodiment, the URL 310 is also converted to a canonical form before the hash signature 330 or URL signature is determined, such as by converting all alphabetic characters to a single case (all lowercase or all uppercase). In a preferred embodiment, each non-empty hash bucket 340 occupies one data block 200.
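A sketch of the canonicalization and two-stage comparison in C; the FNV-style signature functions are stand-ins of our own, since the patent specifies only the signature widths, not the hash functions themselves:

    #include <ctype.h>
    #include <stdint.h>
    #include <string.h>

    /* Canonical form: fold alphabetic characters to a single case. */
    static void canonicalize(char *url)
    {
        for (; *url; url++)
            *url = (char)tolower((unsigned char)*url);
    }

    static uint32_t hash_signature(const char *url)   /* stand-in 32-bit hash */
    {
        uint32_t h = 2166136261u;
        while (*url) { h ^= (unsigned char)*url++; h *= 16777619u; }
        return h;
    }

    static uint64_t url_signature(const char *url)    /* stand-in 64-bit hash */
    {
        uint64_t h = 14695981039346656037ull;
        while (*url) { h ^= (unsigned char)*url++; h *= 1099511628211ull; }
        return h;
    }

    /* Compare signatures first; fall back to the full string only when
     * the 64-bit signatures are equal. */
    static int same_url(uint64_t sig_a, const char *url_a,
                        uint64_t sig_b, const char *url_b)
    {
        return sig_a == sig_b && strcmp(url_a, url_b) == 0;
    }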
Because the hash table 350 associates URLs 310 directly with hash buckets 340 in the hash table 350, the network objects 114 stored in the cache 102 are not organized hierarchically; each network object 114 can be referenced and accessed in the cache 102 in constant time, such as less than about two disk read access times. Moreover, there is no special requirement that network objects 114 in the cache 102 have unique names; when two network objects 114 have the same name (such as when they are old and new versions of the same network object 114), the hash table 350 simply points to the same hash bucket 340 for both of them.
When old and new versions of the same network object 114 are both present, the cache engine 100 resolves new references by URL 310 to the new version of the network object 114. Client devices 111 that are accessing the old version of the network object 114 when the new version is stored in the cache 102 continue to access the old version. Subsequent accesses to that network object 114, even by the same client device 111, are resolved by the cache engine 100 by URL 310 to the new version. When all client devices 111 have finished using the old version, the old version of the network object 114 is deleted.
The cache 102 differs from a file system in that client devices 111 have no control over the storage of network objects 114 in the cache 102, including (1) the name space for storage of network objects 114, (2) the ability to name or rename network objects 114, (3) whether network objects 114 are deleted from the cache 102 at any particular time, and (4) whether network objects 114 are stored in the cache 102 at all.
In a preferred embodiment, the cache engine 100 uses the memory 103 and the mass storage 104 (preferably a plurality of disk drives) to cache network objects 114 so as to maintain in the cache 102 those network objects 114 most likely to be requested by client devices 111. In alternative embodiments, however, the cache engine 100 can implement selected administrative policies beyond maintaining the most likely requested network objects 114, such as preferring or excluding certain kinds of network objects 114, or certain classes of client devices 111 or server devices 111, whether at all times or at selected times of day or days of the week.
The cache engine 100 uses the hash function 320 and hash table 350 to identify the object 210 (and thus the one or more data blocks 200) associated with the URL 310 (and thus with the network object 114). The cache engine 100 manipulates that object 210 to retrieve the network object 114 requested by HTTP from the cache 102, and transmits that network object 114 to the client device 111. The cache engine 100 maintains the cache 102 using the memory 103 and the mass storage 104 so that whether the object 210 is present in the cache 102 at all, and, if present, whether the object 210 resides in the memory 103 or on the mass storage 104, is transparent to the client device 111 (except for the possible difference in response time when retrieving the object 210 from the memory 103 versus the mass storage 104).
As described in the section "Writing to disk", the cache engine 100 writes blocks 200 (and thus the objects 210 comprising those blocks 200) from the memory 103 to the mass storage 104 from time to time, so as to retain in the memory 103 only those blocks 200 that are frequently accessed.
As described herein, when writing blocks 200 from the memory 103 to the mass storage 104, the cache engine 100 controls where on the mass storage 104 the blocks 200 are written (such as determining on which disk drive of the mass storage 104 and at which location on that disk drive) and when they are written (such as determining when it is advantageous to write data to the mass storage 104). The cache engine 100 attempts to optimize the times and locations at which blocks 200 are written to disk, so as to minimize the time and space required to write blocks to, and read blocks from, disk.
The hash table 350 is a system object 210 and, like other system objects 210, comprises an object descriptor 211, zero or more indirect blocks 216, and zero or more data blocks 200. Because the hash table 350 is expected to be used relatively frequently, its indirect blocks 216 are expected to be maintained in the memory 103 at all times, although for relatively large hash tables 350 some of the data blocks 200 will be maintained on the mass storage 104. In a preferred embodiment, the hash table 350 is distributed over the plurality of disk drives of the mass storage 104, and the portion of the hash table 350 on each disk drive is referenced by the root object 220 for that disk drive.
Each hash signature 330 is indexed into the hash table 350 using the hash signature 330 modulo the number of hash buckets 340 in the hash table 350. Each hash bucket 340 comprises one block 200. Each hash bucket 340 comprises zero or more hash entries 360; each hash entry 360 comprises a reference to the object 210 for that hash entry 360 (comprising a pointer to the object descriptor 211 for that object 210).
The hash bucket 340 comprises a secondary hash table having a plurality of sequences of secondary hash entries (such as 32 such sequences). The hash signature 330 is used to select one of the sequences in which to search for the hash entry 360 associated with the URL 310.
In an alternative embodiment, hash entries 360 are maintained in the hash bucket 340 in a list ordered by secondary hash value, possibly with interspersed null entries (such as where the associated network object 114 has been deleted from the hash table 350); the secondary hash value is also determined in response to the hash signature 330, such as by computing the hash signature 330 modulo a selected value such as 2^32. If there are multiple hash entries 360 with the same secondary hash value, the cache engine 100 examines the object descriptor 211 associated with each of those hash entries 360 to find the one whose URL 310 matches the URL 310 for the correct network object 114 associated with the hash signature 330.
In a preferred embodiment, each hash bucket 340 has a selected size, large enough to maintain about 1.5 to 2 times the number of hash entries 360 expected if hash entries 360 were distributed perfectly uniformly (the selected size is preferably one data block 200). If a hash entry 360 is to be inserted into a hash bucket 340 that is full, one of the network objects 114 associated with that hash bucket 340, together with its hash entry 360, is deleted from the hash bucket 340, and thus from the cache 102, to make room for the new hash entry 360.
In a preferred embodiment, any of several different operational policies can be used to select which object 210 to delete.
Mass storage with multiple disk drives
The cache engine 100 maintains, for each disk drive currently or recently present on the mass storage 104, a DSD (disk set descriptor) object 210, comprising a data structure describing that disk drive. The cache engine 100 also maintains a DS (disk set) object 210, which references all the DSD objects 210 and is itself maintained redundantly on one or more disk drives of the mass storage 104. Thus, the DS object 210 is maintained redundantly on a plurality of (preferably all) disk drives of the mass storage 104, and the information for each disk drive is maintained in the DSD object 210 for that disk drive.
Each DSD object 210 includes at least the following information: (1) the number of disk drives; (2) the total size of all disk drives in the set; (3) for each disk drive, the individual size of that disk drive, a descriptor for that disk drive, and an index into an array of all the disk drives; and (4) for each disk drive, the region of hash signatures 330 maintained on that disk drive. The region of hash signatures 330 maintained on each disk drive is also maintained in a separate system object 210, which maps each hash signature 330 to a particular disk drive. In a preferred embodiment, sizes are expressed as multiples of a selected value, such as 1 megabyte.
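A sketch of those fields in C (names, types, and the MAX_DRIVES bound are assumptions, not from the patent):

    #include <stdint.h>

    #define MAX_DRIVES 16                  /* assumed bound */

    struct drive_entry {
        uint32_t size_mb;                  /* drive size, in 1-megabyte units */
        uint32_t array_index;              /* index into the array of drives  */
        char     descriptor[64];           /* descriptor for this drive       */
        uint32_t hash_region_lo;           /* region of hash signatures 330   */
        uint32_t hash_region_hi;           /*   maintained on this drive      */
    };

    struct dsd_object {                    /* DSD object 210 */
        uint32_t num_drives;               /* (1) number of disk drives       */
        uint32_t total_size_mb;            /* (2) total size of the disk set  */
        struct drive_entry drives[MAX_DRIVES]; /* (3) and (4), per drive      */
    };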
Hash entries 360 are distributed over the plurality of disk drives in proportion to the size of each disk drive, rounded to a whole number of hash entries 360.
When a disk drive is added, removed, or replaced, the cache engine 100 creates or modifies the associated DSD object 210 and updates the DS object 210. This is performed in a manner similar to updating any other data block 200; thus, the control blocks 200 referencing the DS object 210 or a DSD object 210 are likewise updated, and the update is committed atomically to the mass storage 104 at the next write episode. (The update to the DS object 210 is committed atomically for each disk drive, one disk drive at a time.) The mass storage 104 can thus be updated dynamically, including changing the identity or number of disk drives, while the cache engine 100 continues operating, the only effect on the cache engine 100 being a change in the amount of mass storage 104 available for the cache 102.
Writing to disk
The cache engine 100 uses a "delayed write" technique, in which objects 210 written to the cache 102 (including new versions of objects 210 already present in the cache 102) are first written to the memory 103 and only later written to the mass storage 104. Unlike file systems that use delayed write techniques, there is no need to provide nonvolatile RAM or a UPS (uninterruptible power supply) with an associated orderly shutdown procedure, because the cache engine 100 does not guarantee the persistence of network objects 114 in the cache 102. For example, if a particular network object 114 is lost from the cache 102, that network object 114 can generally be reacquired from the associated server device 111.
However, the consistency of the cache 102 is preserved under the delayed write technique by never overwriting control blocks 200 or data blocks 200 (other than the root block 221). Instead, modified blocks 200 are written to the mass storage 104 at new locations in place of the original blocks 200, and the original blocks 200 are freed, with all the steps of the operation, called a "write episode", being committed atomically. If a write episode is interrupted or fails, the entire write episode fails atomically and the original blocks 200 remain intact and available.
A modified data block 200 is created when the underlying data of an original data block 200 is modified (or when new underlying data is stored in a new data block 200, such as for a new network object 114). A modified control block 200 is created when one of the original blocks 200 referenced by an original control block 200 (an original data block 200 or an original control block 200) is superseded by a modified block 200 (a modified data block 200, a new data block 200, or a modified control block 200); the modified control block 200 references the modified block 200 rather than the original block 200.
Each write episode is structured so as to optimize both the operation of writing the blocks 200 to the mass storage 104 and the subsequent operation of reading those blocks 200 back from the mass storage 104. The following techniques are used to achieve these read and write optimization goals:
Modified blocks 200 to be written are collected and, where possible, written to sequential tracks of a disk drive of the mass storage 104;
Indirect blocks 216 are written to storage blocks close to, and before, the data blocks 200 they reference, so that the referenced data blocks 200 can be read with the same read operation whenever possible;
Sequentially related data blocks 200 are written to sequential free storage blocks (contiguous free storage blocks, if possible) on one of the disk drives of the mass storage 104, so that the related data blocks 200 can be read with the same read operation whenever possible;
Blocks 200 to be written (control blocks 200 or data blocks 200) are collected together by the object 210 they relate to and sorted by relative address within each object 210, so that the blocks 200 for a particular object 210 can be read with the same read operation whenever possible (a sketch of this ordering appears below).
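As a sketch of the last technique, in C with an assumed record layout (none of these names are from the patent):

    #include <stdint.h>
    #include <stdlib.h>

    /* Cluster pending blocks by the object 210 they belong to, then by
     * relative address within that object, before assigning them to
     * sequential free storage blocks. */
    struct pending_block {
        uint32_t object_id;   /* identifies the owning object 210            */
        uint32_t rel_addr;    /* block's relative address within the object  */
        void    *data;        /* the modified block 200 itself               */
    };

    static int by_object_then_addr(const void *a, const void *b)
    {
        const struct pending_block *pa = a, *pb = b;
        if (pa->object_id != pb->object_id)
            return pa->object_id < pb->object_id ? -1 : 1;
        if (pa->rel_addr != pb->rel_addr)
            return pa->rel_addr < pb->rel_addr ? -1 : 1;
        return 0;
    }

    void order_for_write_episode(struct pending_block *blocks, size_t n)
    {
        qsort(blocks, n, sizeof blocks[0], by_object_then_addr);
    }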
Fig. 4 shows a block diagram of a set of original and modified blocks.
Fig. 5 shows a flow diagram of a method for atomically writing modified blocks to a single disk drive.
A tree 400 of blocks 200 (Fig. 4) comprises original control blocks 200 and original data blocks 200, which have already been written to the mass storage 104 and are referenced by the root object 220. Some or all of the original blocks 200 can also be stored in the memory 103 for use there.
A method 500 (Fig. 5) comprises a set of flow points, noted below, and steps performed by the cache engine 100.
At flow point 510, modified data blocks 200 and new data blocks 200 are stored in the memory 103 and have not yet been written to disk.
Because no data blocks 200 are overwritten, each original control block 200 that references a modified data block 200 (and each original control block 200 that references a modified control block 200) must be superseded by a modified control block 200, all the way up the tree to the root object 220.
At step 521, for each modified data block 200, a free storage block on the mass storage 104 is allocated for recording the modified data block 200. The blockmap object 210 is modified to reflect the allocation of the storage block for the modified data block 200 and the freeing of the storage block for the original data block 200.
The blockmap object 210 maintains information about which storage blocks on the mass storage 104 are allocated and hold stored data, and which are free and available for use. The cache engine 100 searches the blockmap object 210 for free storage blocks, maintaining a write pointer 250 into the blockmap object 210 so that the search proceeds in round-robin fashion. Thus, when the write pointer 250 passes the end of the blockmap object 210, it wraps around to the beginning of the blockmap object 210. The write pointer 250 is stored in the root object 220 so that the search continues in round-robin fashion even after the cache 102 fails and restarts.
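A sketch of that round-robin search, with an assumed bitmap representation of the blockmap (NBLOCKS and all names here are illustrative):

    #include <stdint.h>

    #define NBLOCKS (1u << 20)           /* storage blocks on this drive */

    static uint8_t  blockmap[NBLOCKS];   /* 0 = free, 1 = allocated      */
    static uint32_t write_ptr;           /* write pointer 250, persisted
                                            in the root object 220       */

    /* Return a free storage block, advancing the write pointer 250
     * circularly; UINT32_MAX means no block is free. */
    uint32_t alloc_storage_block(void)
    {
        for (uint32_t scanned = 0; scanned < NBLOCKS; scanned++) {
            uint32_t i = write_ptr;
            write_ptr = (write_ptr + 1) % NBLOCKS;  /* wrap at the end */
            if (blockmap[i] == 0) {
                blockmap[i] = 1;
                return i;
            }
        }
        return UINT32_MAX;
    }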
To preserve the consistency of the cache 102 in the event of failure, a storage block 200 cannot be considered free (and thus reused) while it is still referenced, even indirectly, by the root object 220. Such freed blocks 200 are therefore not treated as free until the root object 220 has been atomically written to disk, committing the write episode.
At step 522, a modified control block 200 is created for each original control block 200 that references an original block 200 modified in this write episode. In a manner similar to step 521, a free storage block on the mass storage 104 is allocated for recording the modified control block 200, and the blockmap object 210 is modified to reflect the allocation of the storage block for the modified control block 200 and the freeing of the storage block for the original control block 200.
Step 522 is repeated for each level of the tree 400, up to the root object 220.
At step 523, the operations of step 521 and step 522 are repeated for those blocks 200 of the blockmap object 210 that have themselves been modified.
At step 524, the modified data blocks 200 and modified control blocks 200 (including those of the blockmap object 210) are written to the allocated storage blocks on the mass storage 104.
At step 525, the root object 220 is rewritten in place on the mass storage 104.
At flow point 530, the root object 220 has been rewritten, and all changes to the tree 400 are thereby committed atomically; the modified blocks 200 have become part of the tree 400, and the original blocks 200 superseded by modified blocks 200 have become free and available for reuse. The modified blockmap object 210 is neither committed nor rewritten until the root object 220 is rewritten, so storage blocks newly marked as allocated or free are not recognized as such until the write episode is atomically committed at flow point 530.
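The overall shape of the method 500 might be sketched as follows; every function named here is a hypothetical stand-in for the corresponding step, with error handling elided:

    void allocate_blocks_for_modified_data(void);     /* step 521 */
    void allocate_blocks_for_modified_control(void);  /* step 522 */
    void allocate_blocks_for_modified_blockmap(void); /* step 523 */
    void write_modified_blocks(void);                 /* step 524 */
    void commit_root_object(void);                    /* step 525 */

    /* One write episode. If the process fails at any point before
     * commit_root_object() completes, the on-disk tree 400 is untouched
     * and the original blocks 200 remain intact. */
    void write_episode(void)
    {
        allocate_blocks_for_modified_data();      /* allocate replacement
                                                     blocks, update blockmap */
        allocate_blocks_for_modified_control();   /* repeated per tree level
                                                     up to the root          */
        allocate_blocks_for_modified_blockmap();  /* the blockmap's own
                                                     modified blocks         */
        write_modified_blocks();                  /* originals are never
                                                     overwritten             */
        commit_root_object();                     /* rewrite root object 220
                                                     in place: atomic commit */
    }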
When the modified blocks 200 are actually allocated storage blocks and written to those storage blocks on the mass storage 104, they are written in the following manner:
The tree is traversed in depth-first, top-down order, so as to guarantee that modified control blocks 200 are written, in storage block order, before the modified data blocks 200 they reference;
For each modified control block 200, the modified data blocks 200 it references are traversed in depth-first, top-down order, so as to guarantee that the referenced modified data blocks 200 are clustered together, in storage block order, after the modified control blocks 200 that reference them.
This technique helps to guarantee that when control blocks 200 are read, the data blocks 200 they reference can be read ahead (READ-AHEAD) whenever possible, so as to minimize the number of operations needed to read control blocks 200 and data blocks 200 from the mass storage 104.
The cache engine 100 determines when to perform a write episode in response to conditions of the memory 103 (including the number of modified blocks 200 in the memory 103), conditions of the mass storage 104 (including the number of free storage blocks available on the mass storage 104), and conditions of the cache 102 (including the hit rate for network objects 114 in the cache 102).
In a preferred embodiment, a write episode using the method 500 is performed when either of the following conditions occurs:
When a selected time (such as 10 seconds) has elapsed since the previous write episode; or
When modified blocks 200 occupy too large a fraction of the memory 103.
A write episode using the method 500 can also be performed when either of the following conditions occurs:
When the number of modified blocks 200 in the memory 103 approaches the number of free storage blocks available on the mass storage 104, less the number of storage blocks needed for the blockmap object 210; or
When the fraction of modified blocks 200 in the memory 103 approaches the miss rate for network objects 114 in the memory 103.
In any case, the number of free storage blocks 200 on the mass storage 104 should normally be much greater than the number of blocks 200 to be written during the write episode.
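A sketch of such a trigger check; the thresholds (the one-quarter fraction in particular) are illustrative assumptions, as the text fixes only the 10-second example:

    #include <time.h>

    struct write_state {
        time_t   last_episode;      /* time of the previous write episode  */
        unsigned dirty_blocks;      /* modified blocks 200 in memory 103   */
        unsigned memory_blocks;     /* total blocks held in memory 103     */
        unsigned free_disk_blocks;  /* free storage blocks on mass storage */
        unsigned blockmap_blocks;   /* blocks needed for the blockmap 210  */
    };

    int should_start_write_episode(const struct write_state *s)
    {
        if (time(NULL) - s->last_episode >= 10)        /* ~10 seconds      */
            return 1;
        if (s->dirty_blocks * 4 >= s->memory_blocks)   /* assumed fraction */
            return 1;
        if (s->dirty_blocks + s->blockmap_blocks >= s->free_disk_blocks)
            return 1;
        return 0;
    }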
Each target 210 has relevant " access time ", this time representation is write as the time of reading this target 210 for the last time, do not need access time of being that each target 210 is upgraded on the disks yet no matter when read target 210, so this will produce one group of amended controll block 200 (this controll block must write disk during write operation next time) and read any target 210 whenever.
Therefore, preserve the easy drop-out of an easy drop-out table record about target 210, comprise the access time of reading target 210, with the access quantity that is used for these targets 210, when reading target 210, its access time only easily is being updated in the drop-out table, but not in the goal descriptor 211 that is used for target 210 self.This easy drop-out table is kept at storer 103 neutralizations and is not written into disk.
In a preferred embodiment, network objects 114 can continue to be read while a write episode using method 500 is performed, even if those network objects 114 include modified data blocks 200, because the modified data blocks 200 are preserved in the memory 103 while the write episode proceeds, regardless of whether they have yet been written successfully to the mass storage 104.
Deleting Objects from the Cache
Fig. 6 shows a block diagram of a set of pointers and regions on the mass storage.
The set of storage blocks on each disk drive of the mass storage 104 is represented by a circular map 600, indexed from zero to a maximum value NMAX. In the figure, indices increase counterclockwise and wrap around from the end of each disk drive to its beginning, modulo the maximum value NMAX.
A DT (delete table) object 210 is maintained that includes an entry for each deletable object 210. When one of the hash buckets 340 in the hash table 350 is accessed, a reference is inserted into the DT object 210 for each deletable object 210 referenced by one of the hash entries 360 for that hash bucket 340.
In an alternative embodiment, an object-map object 210 is maintained that includes an entry corresponding to each blockmap entry. In this embodiment, each entry in the object-map object 210 is either empty, indicating that the corresponding block 200 does not contain an object descriptor 211, or non-empty, indicating that the corresponding block 200 contains an object descriptor 211 and including further information for determining whether the corresponding object 210 can be deleted. Each non-empty entry in the object-map object 210 includes at least a hit rate, a load time, a lifetime (LIVE TIME) value, and a hash signature 330 for indexing into the hash table 350.
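An illustrative sketch of such an object-map entry follows; the field names and types are assumptions based only on the items listed above:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ObjectMapEntry:
        hit_rate: float      # how often the object is requested
        load_time: float     # when the object was loaded into the cache
        live_time: float     # how long the object remains valid
        hash_signature: int  # index into the hash table

    # An empty entry (None) means the block holds no object descriptor.
    object_map: list[Optional[ObjectMapEntry]] = [None] * 8
    object_map[3] = ObjectMapEntry(0.42, 1000.0, 3600.0, 0x5A3C)
    print([i for i, e in enumerate(object_map) if e is not None])  # [3]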
The cache engine 100 searches the blockmap object 210 for deletable objects 210 (those objects 210 referenced by the DT object 210), maintaining a delete pointer 260 into the blockmap object 210 and, like the write pointer 250, conducting the search in a circular manner. Thus, like the write pointer, when the delete pointer passes the end of the blockmap object 210, it wraps around to the beginning of the blockmap object 210. Also like the write pointer 250, the delete pointer 260 is stored in the root object 220, so that the search continues in circular fashion even after the cache 102 fails and is restarted.
The write pointer 250 and the delete pointer 260 each comprise an index into the map 600 for each disk drive of the mass storage 104.
In a preferred embodiment, the delete pointer 260 is kept at least a selected minimum distance d0 601 ahead of the write pointer 250, but not so far ahead that it wraps around and approaches the write pointer 250 from behind. A delete region 610, from which deletable objects 210 are selected for deletion, is thereby maintained on each disk drive just ahead of the write region 620 used for writing modified and new objects 210. The write region 620 is at least as large as the minimum distance d0 601. Although there is no special requirement for the size of the delete region 610, the delete region 610 is preferably several times larger than the write region 620 (preferably about five times). The cache engine 100 thereby confines all writes to disk to a relatively small portion of each disk drive. This allows the mass storage 104 to operate more quickly, because the disk arms for the mass storage 104 need to move only relatively short distances during each write episode.
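The circular map and the relationship between the regions can be sketched as follows; NMAX, d0, and the region sizes are illustrative numbers, with the delete region about five times the write region as the text prefers:

    NMAX = 100_000   # storage blocks per disk drive
    D0 = 2_000       # minimum distance d0 between write and delete pointers

    def region(start, length):
        """Return the block indices of a region, wrapping modulo NMAX."""
        return [(start + i) % NMAX for i in range(length)]

    write_ptr = 99_500
    write_region = region(write_ptr, D0)            # new and modified objects land here
    delete_region = region(write_ptr + D0, 5 * D0)  # freed space just ahead of it
    print(write_region[0], write_region[-1])        # 99500 1499 -- wraps past NMAX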
Because the cache engine 100 attempts to maintain a relatively fixed distance between the write pointer 250 and the delete pointer 260, delete episodes occur about as often as write episodes. In a preferred embodiment, the cache engine 100 alternates between write episodes and delete episodes, so that each delete episode works to make space on disk for later write episodes (the next write episode writes the blockmap object 210 to disk, recording which blocks 200 have been deleted; write episodes after that can use the newly freed blocks 200), and each write episode works to consume free space on disk and so calls for a later delete episode.
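A toy sketch of this strict alternation, with an illustrative stand-in cache (the counters are assumptions):

    class Cache:
        def __init__(self):
            self.free_blocks = 10_000

        def write_episode(self):
            self.free_blocks -= 100   # writing consumes free space

        def delete_episode(self):
            self.free_blocks += 100   # deleting replenishes it

    cache = Cache()
    for _ in range(3):                # one write/delete pair per cycle
        cache.write_episode()
        cache.delete_episode()
    print(cache.free_blocks)          # 10000: deletions keep pace with writes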
A select region 630 is chosen immediately ahead of the delete region 610, from which objects 210 are selected for deletion. The size of the select region 630 is chosen so that, in the expected time for the write pointer 250 to advance through the select region 630 (which takes several write episodes), nearly all hash entries 360 will have been accessed by the normal operation of the cache engine 100. Because each hash entry 360 includes enough information to determine whether its associated object 210 should be deleted, nearly all objects 210 will thus be evaluated for deletion during the several write episodes needed to advance the write region 620 through the select region 630.
Objects 210 evaluated for deletion are placed on a deletion list, ordered by their suitability for deletion. In a preferred embodiment, objects 210 are evaluated for deletion according to the following criteria (a sketch of this ordering follows the list):
Because of the operation of the HTTP protocol (or a variant such as SHTTP), if an object 210 is explicitly selected for deletion by the cache engine 100, the object 210 is placed at the head of the deletion list immediately;
If a new object 210 with the same name is created, the old object 210 is placed at the head of the deletion list as soon as all references to it have been released (that is, as soon as no transaction refers to the old object 210);
If an object 210 has expired, it is placed at the head of the deletion list immediately;
If a first object 210 has an older access time than a second object 210, the first object 210 is more suitable for deletion and is therefore ordered ahead of the second object 210 on the deletion list.
A portion of the objects 210 on the deletion list is selected according to the last two of these criteria (that is, because of expiration or an old access time); preferably the last third of the deletion list is selected for deletion.
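A sketch of this ordering and selection, with illustrative object fields; the one-third fraction follows the text above:

    def order_deletion_list(candidates, now):
        head = [c for c in candidates
                if c.get("explicit_delete") or c.get("superseded")
                or c["expires"] <= now]
        rest = sorted((c for c in candidates if c not in head),
                      key=lambda c: c["access_time"])  # oldest access first
        return head + rest

    candidates = [
        {"name": "a", "expires": 50, "access_time": 10},
        {"name": "b", "expires": 500, "access_time": 5, "superseded": True},
        {"name": "c", "expires": 500, "access_time": 30},
        {"name": "d", "expires": 500, "access_time": 1},
    ]
    ordered = order_deletion_list(candidates, now=100)
    print([c["name"] for c in ordered])            # ['a', 'b', 'd', 'c']
    third = max(1, len(ordered) // 3)
    print([c["name"] for c in ordered[-third:]])   # the text's "last third"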
After each write episode, the select region 630 is advanced by the expected size of the next write region. In a preferred embodiment, the expected size of the write region 620 is estimated by averaging the sizes of the write regions 620 used by several (preferably seven) preceding write episodes. Objects 210 on the deletion list that fall within the delete region 610, both before and after it is advanced, are scheduled for deletion; these objects are selected and deleted one by one during the next delete episode (in a preferred embodiment, the next delete episode is performed immediately after the write episode completes).
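A sketch of the seven-episode moving average used to advance the select region; the sample sizes are illustrative:

    from collections import deque

    recent_write_sizes = deque(maxlen=7)   # sizes of the last seven write episodes

    def expected_write_region_size(latest_size):
        recent_write_sizes.append(latest_size)
        return sum(recent_write_sizes) // len(recent_write_sizes)

    for size in (120, 90, 150, 110, 100, 130, 95, 105):
        advance = expected_write_region_size(size)
    print(advance)  # 111, the mean of the last seven; the select region advances by this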
In a preferred embodiment, write episodes and delete episodes are performed independently for each disk drive of the mass storage 104; there are therefore a separate delete region 610, write region 620, and select region 630 for each disk drive of the mass storage 104.
Alternative Embodiments
Although preferred embodiments are disclosed herein, many variations are possible which remain within the concept and scope of the invention, and these variations will become clear to those skilled in the art upon study of this application.
Amendments under the PCT
Claims
1. A system for caching objects from a network, said system comprising:
a receiver connected to said network; and
a cache engine for recording objects from said network efficiently on a mass storage;
wherein said cache engine can select times at which to record said objects, select locations at which to record said objects, store said objects holographically so as to continue operating after loss of a portion of said mass storage, or minimize the time needed to write to said mass storage.
2. Apparatus for storing a set of network objects, comprising:
a processor for controlling a cache mechanism configured to store a plurality of objects including said set of network objects, said processor being in communication with a network and configured to transmit at least one of said set of network objects on said network;
a mass storage associated with said cache mechanism and in communication with said processor;
a memory associated with said cache mechanism and in communication with said processor and said mass storage;
a hash mechanism configured to locate any one of said plurality of objects in said cache mechanism in response to an object identifier; and
an object storage mechanism, responsive to said hash mechanism, configured to transfer one or more of said plurality of objects between said memory and said mass storage.
3. Apparatus as in claim 2, wherein said cache mechanism is organized into a plurality of blocks, and said hash mechanism returns a block pointer for accessing and storing any one of said plurality of objects in said cache mechanism.
4. Apparatus as in claim 3, wherein said plurality of blocks of said mass storage are accessed directly, independently of any file system imposed on said mass storage.
5. Apparatus as in claim 2, wherein said object storage mechanism further comprises a delayed-write device for performing an atomic write operation to write blocks containing one or more of said plurality of objects to said mass storage.
6. Apparatus as in claim 3, 4, or 5, wherein said object storage mechanism further comprises optimization means for minimizing the time needed to transfer said plurality of blocks.
7. Apparatus as in claim 2, 3, 4, 5, or 6, wherein said mass storage comprises a plurality of disk drives, each of said plurality of disk drives being associated with a corresponding disk-device descriptor object, and each said corresponding disk-device descriptor object being referenced by a disk-device object.
8. Apparatus as in claim 7, further comprising a dynamic mass-storage configuration device for responding to the addition, deletion, failure, or replacement of one or more of said plurality of disk drives by updating said disk-device object and creating or modifying said corresponding disk-device descriptor objects, while said apparatus continues to operate.
9. A computer-controlled method for storing a set of network objects, comprising the steps of:
controlling a cache mechanism configured to store a plurality of objects including said set of network objects;
locating any one of said plurality of objects in said cache mechanism in response to an object descriptor;
transferring, in response to the locating step, one or more of said plurality of objects atomically between a memory and a mass storage; and
transmitting at least one of said set of network objects on said network.
10. The computer-controlled method of claim 9, wherein said cache mechanism is organized into a plurality of blocks, and a hash mechanism performs the step of returning a block pointer for accessing and storing any one of said plurality of objects in said cache mechanism.
11. The computer-controlled method of claim 9, further comprising the step of performing an atomic write operation to write blocks containing one or more of said plurality of objects to said mass storage.
12. The computer-controlled method of claim 9, 10, or 11, further comprising the step of minimizing the time needed to transfer said plurality of blocks.
13. The computer-controlled method of claim 9, 10, 11, or 12, wherein said mass storage comprises a plurality of disk drives, each said disk drive being associated with a corresponding disk-device descriptor object and each said corresponding disk-device descriptor object being referenced by a disk-device object, the method further comprising the step of maintaining said disk-device object and said corresponding disk-device descriptor objects so that said set of network objects is stored holographically on said mass storage.
14. The computer-controlled method of claim 13, further comprising the steps of:
updating said disk-device object; and
modifying said corresponding disk-device descriptor objects in response to the addition, deletion, failure, or replacement of one or more of said plurality of disk drives.
Statement under Article 19
The applicant believes that the amendments to the claims have no effect on the description or the drawings.
The present invention comprises a highly reliable and highly efficient network caching device (and associated methods).
YAMMINE (US5564011) discloses a system and method for dynamic regeneration of control blocks on a file-system disk. YAMMINE contemplates a mass storage on which a hierarchical file system of directories and files has been imposed. YAMMINE uses information maintained from file-system initialization to regenerate the file system of a disk whose existing file system would otherwise become inconsistent.
JONES (EP0359384) discloses a queued posted-write method and apparatus for queuing multiple independent sector writes for a disk drive. If successive write requests queued for the disk are for contiguous sectors on that disk, the disk drive performs a multiple-sector write rather than multiple independent sector writes.
The present invention is a standalone network caching device for caching network objects efficiently. The invention does not require a mass storage system using named files. Instead, objects (including network objects) are stored directly in data blocks of the mass storage device during write operations arranged for optimal storage and retrieval of those objects. Moreover, objects are transferred atomically between memory and the mass storage device during those write operations, so that the state of the information maintained on mass storage is always consistent. Moreover, with a mass storage composed of multiple drives, the invention automatically detects the addition, deletion, or failure of any drive and continues cache operation with the correspondingly increased or decreased mass storage.

Claims (1)

1. A system for caching objects from a network, said system comprising:
a receiver connected to said network;
a cache engine for recording objects from said network efficiently on a mass storage;
wherein said cache engine can select times at which to record said objects, select locations at which to record said objects, store said objects holographically so as to continue operating after loss of a portion of said mass storage, or minimize the time needed to write to said mass storage.
CN98805987A 1997-06-09 1998-06-09 Network object high-speed buffer memory engine Pending CN1260887A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US4898697P 1997-06-09 1997-06-09
US60/048,986 1997-06-09

Publications (1)

Publication Number Publication Date
CN1260887A true CN1260887A (en) 2000-07-19

Family

ID=21957486

Family Applications (1)

Application Number Title Priority Date Filing Date
CN98805987A Pending CN1260887A (en) 1997-06-09 1998-06-09 Network object high-speed buffer memory engine

Country Status (8)

Country Link
EP (1) EP0988598A1 (en)
JP (1) JP2002511170A (en)
KR (1) KR20010012913A (en)
CN (1) CN1260887A (en)
AU (1) AU8061798A (en)
EA (1) EA200000004A1 (en)
IL (1) IL133241A0 (en)
WO (1) WO1998057265A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100353340C (en) * 2003-07-04 2007-12-05 中国科学院计算技术研究所 Custom-made network storage apparatus operating systems

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7171469B2 (en) * 2002-09-16 2007-01-30 Network Appliance, Inc. Apparatus and method for storing data in a proxy cache in a network
KR100654462B1 (en) 2005-08-24 2006-12-06 삼성전자주식회사 Method and cache system for storing file's data in memory block which divides cache memory
CN101707684B (en) * 2009-10-14 2014-04-30 北京东方广视科技股份有限公司 Method, device and system for dispatching Cache
WO2013141198A1 (en) * 2012-03-21 2013-09-26 日本電気株式会社 Cache server, content delivery method and program
US9298391B2 (en) * 2012-12-19 2016-03-29 Dropbox, Inc. Application programming interfaces for data synchronization with online storage systems
US9311343B2 (en) 2014-04-02 2016-04-12 International Business Machines Corporation Using a sequence object of a database

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5065354A (en) * 1988-09-16 1991-11-12 Compaq Computer Corporation Queued posted-write disk write method with improved error handling
JPH0679293B2 (en) * 1990-10-15 1994-10-05 富士通株式会社 Computer system
US5564011A (en) * 1993-10-05 1996-10-08 International Business Machines Corporation System and method for maintaining file data access in case of dynamic critical sector failure
WO1997001765A1 (en) * 1995-06-26 1997-01-16 Novell, Inc. Apparatus and method for redundant write removal

Also Published As

Publication number Publication date
KR20010012913A (en) 2001-02-26
WO1998057265A1 (en) 1998-12-17
IL133241A0 (en) 2001-03-19
EP0988598A1 (en) 2000-03-29
JP2002511170A (en) 2002-04-09
EA200000004A1 (en) 2000-10-30
AU8061798A (en) 1998-12-30

Similar Documents

Publication Publication Date Title
US7539818B2 (en) Network object cache engine
US6542967B1 (en) Cache object store
CN1132109C (en) System and method for efficient caching in a distributed file system
US8417746B1 (en) File system management with enhanced searchability
CN1291320C (en) Preserving a snapshot of selected data of a mass storage system
US8595340B2 (en) Method and system for managing digital content, including streaming media
US5644766A (en) System and method for managing a hierarchical storage system through improved data migration
CN1109444C (en) A configuration method for a data management system
US7849207B2 (en) Method and system for managing digital content, including streaming media
CN1292370C (en) Method and apparatus for data processing
CN101263494B (en) Method and device for monitoring affair related with object of storage network
CN1659537A (en) System and method for automatically updating a wireless device
US20110078220A1 (en) Filesystem building method
CN1690974A (en) System and method for resynchronization of time after minimized backup of system and system failure
CN1202257A (en) System and method for locating pages on the world wide web and for locating documents from network of computers
JP2008546076A (en) Efficient handling of time-limited messages
CN1157978C (en) Apparatus and method for managing mobile phone agent
CN1836232A (en) Automatic and dynamic provisioning of databases
CN101064630A (en) Data synchronization method and system
JP2004192292A (en) Prefetch appliance server
CN1755673A (en) File system with file management function and file management method
CN1147648A (en) Data storage apparatus and it storage method
CN102523279A (en) Distributed file system and hot file access method thereof
US11100111B1 (en) Real-time streaming data ingestion into database tables
CN1877583A (en) Accessing identification index system and accessing identification index library generation method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C01 Deemed withdrawal of patent application (patent law 1993)
WD01 Invention patent application deemed withdrawn after publication