WO1999062179A1 - Apparatus and method for efficiently transmitting binary data - Google Patents

Apparatus and method for efficiently transmitting binary data

Info

Publication number
WO1999062179A1
WO1999062179A1 (PCT/US1998/012409)
Authority
WO
WIPO (PCT)
Prior art keywords
data
source
triplet
token
data token
Prior art date
Application number
PCT/US1998/012409
Other languages
English (en)
Inventor
Henry Collins
Original Assignee
Citrix Systems, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Citrix Systems, Inc. filed Critical Citrix Systems, Inc.
Priority to AU80725/98A priority Critical patent/AU8072598A/en
Publication of WO1999062179A1 publication Critical patent/WO1999062179A1/fr

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/3084Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction using adaptive string matching, e.g. the Lempel-Ziv method
    • H03M7/3086Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction using adaptive string matching, e.g. the Lempel-Ziv method employing a sliding window, e.g. LZ77

Definitions

  • the present invention relates to data transmission methods and, in particular, to methods and apparatus for efficiently transmitting binary data.
  • the present invention provides a method and apparatus that allows a large amount of data to be easily and efficiently transmitted, thereby reducing the amount of time and bandwidth necessary to transmit the data.
  • the present invention relates to a method for efficiently transmitting binary data from a source to a receiver.
  • the source maintains an input buffer and an output buffer.
  • the input buffer stores a plurality of data tokens that the data source desires to transmit to a data target. Those data tokens are processed and the results of that processing are stored in the output buffer for eventual transmission to the data target.
  • the output buffer contains a plurality of data tokens and other information to be transmitted to the receiver by the source.
  • the source determines if a data token triplet has been previously encountered and inserted into the output buffer.
  • the source inserts information into the output buffer indicating the number of data token triplets processed by the source since the triplet was previously last encountered. If the triplet has not been previously encountered, the data source inserts a single data token from the triplet into the output buffer.
  • the output buffer is transmitted to a data target.
  • the receiver maintains a buffer containing a plurality of data tokens transmitted to the receiver by the source.
  • the present invention relates to a system for efficiently transmitting binary data from a source to a receiver.
  • the source is connected to the receiver over a connection and the source includes a transmitter, an output buffer, an input buffer, and a comparator.
  • the transmitter is in electrical communication with the connection and transmits data over the connection to the receiver.
  • An output buffer is in electrical communication with the transmitter and the output buffer stores a plurality of data tokens and other information to be transmitted by the transmitter.
  • An input buffer stores a plurality of data tokens that the data source desires to transmit to a data target.
  • the comparator is in electrical communication with both the input buffer and the output buffer.
  • the comparator identifies whether a data token triplet stored in the input buffer has been previously encountered and inserted into the output buffer for transmission to a receiver. If the triplet has been previously encountered, the source inserts information into the output buffer indicating the number of data token triplets processed by the source since the triplet was previously last encountered. If the triplet has not been previously encountered, the data source inserts a single data token from the triplet into the output buffer.
  • the output buffer is transmitted to a data target.
  • the data target includes a receiver and a buffer. The receiver is in electrical communication with the connection and receives data transmitted over the connection. The buffer is in electrical communication with the receiver and stores data tokens received by the receiver.
  • the invention in still another aspect, relates to an apparatus for efficiently transmitting binary data.
  • the apparatus includes a transmitter, an output buffer, an input buffer, and a comparator.
  • the transmitter is in electrical communication with the connection and transmits data over the connection.
  • the output buffer is in electrical communication with the transmitter and stores a plurality of data tokens to be transmitted by the transmitter.
  • the input buffer stores a plurality of data tokens that the data source desires to transmit to a data target.
  • a comparator is in electrical communication with both the input buffer and the output buffer. The comparator identifies whether a data token triplet has been previously encountered and inserted into the output buffer for transmission to a receiver.
  • the source inserts information into the output buffer indicating the number of data token triplets processed by the source since the triplet was last encountered. If the triplet has not been previously encountered, the data source inserts a single data token from the triplet into the output buffer. The output buffer is transmitted to a data target.
  • FIG. 1 is a block diagram of a system for efficiently transmitting binary data
  • FIG. 1A is a diagrammatic representation of an embodiment of a cache page
  • FIG. 2 is flow diagram of the steps to be taken to efficiently transmit binary data
  • FIG. 2B is a flow diagram showing an alternative embodiment for step 220 in FIG. 2.
  • a data source 10 is connected to a data target 40.
  • the connection between the data source 10 and the data target 40 may have varying bandwidth characteristics.
  • the connection 21 between the data source 10 and the data target 40 may be a high-bandwidth connection of the sort typical in local area network systems, such as Ethernet.
  • a common example of a lower-bandwidth environment for client-server systems is the Internet, in which connections between nodes can have bandwidths as low as 2400 baud.
  • a data token is any conveniently-sized datum used by the wide range of applications and systems in which the described technique may find utility.
  • a data token may be a 4-bit nibble, an 8-bit byte, a 16-bit word, a 32-bit longword, or any other convenient data size.
  • data tokens may be one bit in size for monochromatic graphical data, eight bits in size for multicolor graphical data, or some other size which conveniently denotes an element of binary data to be transmitted.
  • The term "data token triplet" will be used to refer to a group of three data tokens. Although, in general, a data token triplet will be comprised of three data tokens having the same length, it is contemplated that certain systems may utilize differently-sized tokens and that, therefore, a data token triplet may comprise three differently-sized tokens.
  • the data source 10 maintains an input buffer 12 and an output buffer 14.
  • the input buffer 12 stores data tokens that the source 10 desires to transmit.
  • the output buffer 14 stores data that has been processed by the source 10 and is ready to be transmitted to a data target 40 by the source 10.
  • Data stored in the output buffer 14 can include data tokens that have been readied for transmission and other control information.
  • the data source 10 considers the data stored in the input buffer 12 as a series of overlapping token triplets, i.e., if the input buffer contains data tokens ABCDEF, the data source considers the input buffer to store token triplets ABC, BCD, CDE, etc.
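The overlapping-triplet view described above can be sketched in Python; the function name and the use of characters as tokens are illustrative assumptions, not from the patent:

```python
def overlapping_triplets(tokens):
    """Yield every overlapping run of three consecutive data tokens,
    so that 'ABCDEF' produces ABC, BCD, CDE, and DEF."""
    for i in range(len(tokens) - 2):
        yield tuple(tokens[i:i + 3])

triplets = list(overlapping_triplets("ABCDEF"))
```

Each token thus participates in up to three triplets, which is what lets the scheme detect repeated sequences at any token offset.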
  • a comparator unit 13 analyzes data token triplets stored in the input buffer 12 to determine if a particular data token triplet has been previously encountered by the comparator 13 in the course of processing the input buffer 12.
  • the comparator 13 includes a cache memory element 16 which is subdivided into a plurality of cache pages 15, 15', 15".
  • the cache memory element 16 should be subdivided into a number of cache pages 15, 15', 15" that is a power of two, e.g. 2, 4, 8, 16, etc., although this is not necessary in order to practice the described invention.
  • each cache page 15, 15', 15" comprises a number of cache page entries 17, 17', 17", 17"'.
  • Each cache page entry 17, 17', 17", 17"' includes three fields: a tag field 17(1) identifying the triplet currently stored in the cache page entry 17, 17', 17", 17"'; a unique sequence number field 17(2) (which will be discussed in greater detail later); and a "next data token triplet" field 17(3) which stores an identifier of the cache page entry 17, 17', 17", 17"' in which the data token triplet immediately following the current data token triplet is stored.
  • a hash table 20, 20', 20" is associated with each cache page 15, 15', 15" to facilitate location of particular entries 17, 17', 17", 17"' within each cache page 15, 15', 15".
  • the data source 10 includes an array 18 of identifiers, each of which indicate the next available entry 17, 17', 17", 17"' in one of the cache pages 15, 15', 15".
  • the array 18 may be indexed by cache page.
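The cache structure described above — pages of entries with tag, sequence-number, and next-triplet fields, plus the array 18 of next-available slots — might be modeled as follows. The sizes and names here are illustrative assumptions; the patent only requires a power-of-two page count:

```python
from dataclasses import dataclass

@dataclass
class CacheEntry:
    tag: int = 0          # bits extracted from the triplet stored here (field 17(1))
    seq: int = 0          # unique sequence number of the triplet (field 17(2))
    next_entry: int = 0   # entry holding the following triplet (field 17(3))

@dataclass
class CachePage:
    entries: list         # fixed number of CacheEntry slots
    hash_table: dict      # locates a particular entry within the page

NUM_PAGES = 8             # a power of two, as the text recommends
ENTRIES_PER_PAGE = 32

pages = [CachePage([CacheEntry() for _ in range(ENTRIES_PER_PAGE)], {})
         for _ in range(NUM_PAGES)]
next_available = [0] * NUM_PAGES   # one next-free-entry index per page (array 18)
```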
  • the data source 10 also includes a transmitter 19.
  • the transmitter 19 communicates electrically with the connection 21 and drives data over the connection 21 to the data target 40.
  • the transmitter 19 may be one or more transceivers embodied as integrated circuits which connect to the connection 21 via a port, or the transmitter 19 may be a stand-alone device such as a modem.
  • the data target 40 includes a receiver 44 in electrical communication with the connection 21 and a history buffer 42 in electrical communication with the receiver 44.
  • the receiver 44 accepts data driven by the transmitter 19 over the connection 21.
  • the history buffer 42 stores data tokens received by the receiver 44 on a first-in first-out basis. Since the history buffer 42 stores only data tokens, any control information placed in the output buffer 14 by the data source 10 and transmitted to the data target 40 must be interpreted by the data target 40 before the contents of the receiver 44 are stored in the history buffer 42.
  • the data target 40 processes the data received by receiver 44 before storing data tokens in the history buffer 42.
  • the data target 40 stores received data in a second, separate buffer and processes the data stored in that buffer before storing data tokens in the history buffer 42.
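The first-in first-out history buffer maintained by the data target can be sketched with a bounded deque; the buffer size here is an assumption, since the patent does not fix one:

```python
from collections import deque

def make_history(size):
    # deque(maxlen=size) silently discards the oldest token once full,
    # giving the first-in first-out behavior described for buffer 42
    return deque(maxlen=size)

history = make_history(4096)   # illustrative size
for token in "ABCDEF":
    history.append(token)
```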
  • the data source 10 maintains an input buffer 12 of data tokens to be transmitted.
  • the input buffer 12 may be a discrete memory element that receives a predetermined amount of data to be transmitted from some other source, either from within or without the data source 10.
  • the input buffer 12 may receive data from a transmission media such as a common transport mechanism shared between the data source 10 and some other network node.
  • the input buffer 12 may refer to a portion of random access memory which is set aside by the data source 10 to store binary data to be transmitted.
  • the data source 10 could load the entire graphics file into random access memory and identify the starting and ending points of the file in memory as the starting and ending points of the input buffer 12.
  • the input buffer 12 may be provided as mass storage such as disk or tape memory.
  • the data source 10 may store files directly in associated mass storage serving as the input buffer 12.
  • the data source 10 may be the target of a data query. As results that match the data query are identified by the data source 10, it may store those results in an input buffer 12 for transmission to the data target 40, i.e., the source of the data query.
  • Referring now to FIG. 2, the steps taken by the data source 10 to process data token triplets in the input buffer 12 are shown.
  • the current data token triplet 11 is hashed to yield an identifier for a particular cache page 15, 15', 15" (step 202).
  • hash tables 20, 20', 20" are provided to facilitate location of cache page entries 17, 17', 17", 17"' within cache page 15, 15', 15"
  • the hash table 20, 20', 20" associated with the identified cache page 15, 15', 15" is consulted to determine a particular cache entry 17, 17', 17", 17"' in the cache page 15, 15', 15" (step 204).
  • the hash used to determine the particular entry 17, 17', 17", 17"' in a cache page 15, 15', 15" to consult should be selected to use all available cache page entries 17, 17', 17", 17"' in a manner similar to that described above with respect to usage of cache pages 15, 15', 15".
  • the hashed data token triplet may identify both a cache page 15, 15', 15" and an entry 17, 17', 17", 17"' within the identified cache page 15, 15', 15".
  • a predetermined number of bits of the data token triplet 11 is used as the cache page entry identifier. For example, the first five bits of data token triplet 11 could be used to identify an entry in a thirty-two entry cache page 15, 15', 15".
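Assuming 8-bit tokens packed into a 24-bit triplet, the page and entry selection of steps 202-204 might look like this. The bit choices mirror the text's five-bit example, but the packing and which bits are used are assumptions:

```python
def pack_triplet(t0, t1, t2):
    # pack three 8-bit tokens into one 24-bit value (token size is an assumption)
    return (t0 << 16) | (t1 << 8) | t2

def page_index(triplet, num_pages=8):
    # with a power-of-two page count, masking the low bits selects a cache page
    return triplet & (num_pages - 1)

def entry_index(triplet, entry_bits=5):
    # the text's example: the first five bits identify one of 32 entries
    return triplet >> (24 - entry_bits)
```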
  • the entry 17, 17', 17", 17"' identified by the associated hash table 20 is accessed to determine if the tag stored in the tag field 17(1) of that entry 17, 17', 17", 17"' matches the tag of the data token triplet 11 the source 10 is currently processing.
  • the tag is generally some number of bits extracted from the data token triplet 11, such as the first five bits or the last eight bits of the triplet 11, for example. If the identified cache page entry 17, 17', 17", 17"' does not store a tag which matches the tag of the data token triplet 11 (step 206), the data token triplet 11 has not previously been encountered by the comparator 13, i.e., a cache "miss" has occurred.
  • one of the data tokens comprising the data token triplet 11 the source 10 is currently processing is stored in the output buffer 14 of the data source 10 for transmission to the data target 40 (step 208).
  • the first data token comprising the data token triplet 11 is stored in the output buffer 14.
  • the data source 10 also includes an array 18 of identifiers that identify the next available entry 17, 17', 17", 17"' in each cache page 15, 15', 15". Tags, sequence identifiers, and "next data token triplet" values associated with the data token triplet 11 are stored in the cache page entry 17, 17', 17", 17"' identified as the next available entry in that page 15, 15', 15" by the array 18 (step 210).
  • the array 18 may be an array of integers identifying the next available entry 17, 17', 17", 17"' or array 18 may be an array of pointers.
  • the step of storing tags, sequence identifiers, and "next data token triplet" values associated with the data token triplet 11 can actually be composed of a number of substeps.
  • a tag is formed from the data token triplet 11 as described above and is stored in the tag field 17(1) of the identified cache entry 17, 17', 17", 17"' (step 280).
  • the current sequence number is stored in the sequence number field 17(2) of the cache entry 17, 17', 17", 17"' (step 282).
  • the sequence number is a unique identifier associated with each data token triplet that can be implemented, for example, as a continually-increasing counter.
  • Each data token triplet 11 therefore, is associated with a unique identifier.
  • each time data associated with a token triplet 11 is stored, it is associated with a unique identifier, which can be considered a time-stamp.
  • An identifier of the current cache page entry 17, 17', 17", 17"' is stored in the "next data token triplet" field 17(3) of the immediately previously used cache page entry 17, 17', 17", 17"' (step 284).
  • the hash table 20, 20', 20" associated with the cache page 15, 15', 15" in which the tag and unique sequence number are stored is updated to reflect the new entry in the cache page 15, 15', 15" (step 286).
  • the array 18 entry associated with that cache page 15, 15', 15" is updated so that it identifies a new cache page entry 17, 17', 17", 17"' for receiving data associated with a new data token triplet.
  • the cache page entries 17, 17', 17", 17'" may be identified in round-robin fashion, for example, the array 18 entry may simply be incremented to identify the next sequential cache page entry 17, 17', 17", 17"'.
  • cache memory management schemes such as most-recently-used (MRU) or least-recently-used (LRU) may be used to identify entries 17, 17', 17", 17"' for receiving data associated with new triplets. If more data token triplets exist in the input buffer 12 to be processed (step 214), the data source 10 identifies the next overlapping data token triplet 11' stored in the input buffer 12 (step 216) and resumes processing at step 202. Otherwise, processing of data token triplets is finished (step 222).
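The round-robin option mentioned above can be sketched as a per-page counter. This is a hypothetical helper; an MRU or LRU scheme would replace it:

```python
def claim_next_slot(next_available, page, entries_per_page=32):
    # return the next free entry index for `page` and advance the
    # array-18 pointer in round-robin fashion, wrapping at the page end
    slot = next_available[page]
    next_available[page] = (slot + 1) % entries_per_page
    return slot
```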
  • at step 206, if the tag stored in the tag field 17(1) of the identified cache page entry 17, 17', 17", 17"' matches the tag of the data token triplet 11 being processed, then the data token triplet 11 has been previously encountered by the comparator 13.
  • a unique sequence identifier is stored in the sequence number field 17(2) of each cache page entry 17, 17', 17", 17"', and a "next data token triplet" identifier is stored in the "next data token triplet" field 17(3) of the cache page entry 17, 17', 17", 17"'.
  • the difference between the current unique sequence identifier and the sequence identifier associated with the matching data token triplet indicates the number of data token triplets that have been processed by the data source 10 since that data token triplet was encountered.
  • rather than transmit the data tokens comprising the data token triplet 11 itself, the data source 10 inserts the difference between the current sequence identifier and the stored sequence identifier into the output buffer (step 220) for transmission to the data target 40.
  • for example, if the two sequence numbers differ by 8,100, the data source 10 writes the value 8,100 to the output buffer 14 instead of the data tokens.
  • the output buffer 14 is transmitted by transmitter 19 to the data target 40.
  • when the data target 40 encounters the difference value of 8,100 instead of data tokens, it uses that difference value to consult the history buffer 42. That is, the data target 40 retrieves the data token triplet that begins 8,100 positions back in the history buffer.
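The target's lookup can be sketched as an offset from the end of the history buffer. The helper name is hypothetical; the patent does not name one:

```python
def triplet_back(history, distance):
    # retrieve the data token triplet that begins `distance`
    # positions back from the end of the history buffer
    tokens = list(history)          # works for a list or a deque
    start = len(tokens) - distance
    return tokens[start:start + 3]
```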
  • Data token triplets stored in the output buffer 14 can be differentiated from sequence number information stored in the output buffer 14 by a flag, for example, an additional bit.
  • a "1" bit instructs the data target 40 that an entry is a data token and a "0" bit instructs the data target 40 that the entry is sequence information.
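One way to realize the flag is to widen each transmitted item by one bit. This encoding is an assumption for illustration, not the patent's wire format:

```python
def encode_item(is_token, value, value_bits=8):
    # flag bit 1 = literal data token, flag bit 0 = sequence (difference) info
    return ((1 if is_token else 0) << value_bits) | value

def decode_item(word, value_bits=8):
    # split a widened item back into its flag and its value
    return bool(word >> value_bits), word & ((1 << value_bits) - 1)
```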
  • FIG. 2B shows an alternate set of steps that may be taken instead of step 220.
  • the difference between the current sequence identifier and the sequence identifier associated with the previously encountered data token triplet identifies the beginning of a matching sequence that is at least one data token triplet long. Further matching data token triplets, which extend the length of the match, are recognized in the following manner.
  • the data source 10 advances to the next, overlapping data token triplet 11' (step 302). That data token triplet 11' is hashed and tagged (step 304) to determine if a matching cache page entry 17, 17', 17", 17"' exists (step 306), that is, the data source 10 determines if the next triplet 11' has also been encountered previously by the comparator 13. If the tag field 17(1) contains data which matches the tag of data token triplet 11', the sequence number stored in the sequence field 17(2) is compared to the sequence number stored in the sequence field 17(2) of the cache entry 17, 17', 17", 17"' associated with the previous data token triplet 11 to determine if they differ by one (step 308).
  • the new data token triplet 11' is part of a sequence of at least four tokens. That is, not only has the first data token triplet 11 been previously encountered, but the next, overlapping data token triplet 11' has also been previously encountered, and the triplets 11, 11' were previously encountered in the same order.
  • the data source 10 attempts to extend the length of the match again by returning to step 302. At some point, either the tag associated with a new data token triplet will not match the tag stored in the cache entry 17, 17', 17", 17"', or the sequence number associated with an entry 17, 17', 17", 17"' will not be one greater than the sequence number associated with the last data token triplet.
  • the starting point of the sequence can be determined as described above in connection with FIG. 2, and the length of the sequence can be determined by subtracting the sequence identifier associated with the first matching data triplet from the sequence identifier associated with the last matching triplet.
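Under one plausible reading of the sequence arithmetic above: because triplets overlap, a run of n consecutive matching triplets covers n + 2 data tokens. This interpretation is an assumption, not stated explicitly in the text:

```python
def match_length_tokens(first_seq, last_seq):
    # number of consecutive matching triplets, from the two sequence numbers
    triplets = last_seq - first_seq + 1
    # overlapping triplets: each additional triplet extends the match by one token
    return triplets + 2
```

With two consecutive matching triplets this gives four tokens, consistent with the "sequence of at least four tokens" described above.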
  • if there are more data token triplets stored in the input buffer 12 to process (step 312), the data source 10 returns to step 202 and begins processing the triplet that failed to extend the sequence. If no more data token triplets remain to be processed, the data source 10 is finished (step 314).
  • the logical structure of the history buffer 42 maintained by the data target 40 must provide for random access to the data stored in the buffer 42 while retaining structure so that particular data tokens may be located.
  • the buffer 42 maintained by the data target 40 is a first-in first-out buffer directly stored in memory.
  • a circular linked list may be used to provide a first-in first-out buffer which takes advantage of fragmented memory. If the buffer 42 is implemented as raw data storage in memory, then the data target 40 must be aware of the token size in order to correctly locate data tokens in the buffer 42. For example, if the data token size is eight bits, then the data target 40 must have some knowledge that the token size is eight bits in order to correctly offset into the memory storage area in the buffer 42.
  • the data target 40 may be made aware of the data token size by storing the data token size as an entry in a configuration or initialization file stored on the data target. Further, embodiments which use multiple sizes of data tokens must be able to identify the size of a token to accurately locate stored tokens. Such methods are readily apparent to one of ordinary skill in the art.
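Offsetting into raw storage requires the token size, as the text notes. A sketch; the helper name and default token size are assumptions:

```python
def token_at(raw: bytes, index: int, token_bytes: int = 1) -> bytes:
    # the data target must know the token size to compute the byte
    # offset of a given token within raw buffer storage
    offset = index * token_bytes
    return raw[offset:offset + token_bytes]
```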
  • if a cache hit occurs whose associated sequence number indicates that the entry is older than a predetermined threshold, it is treated as an invalid entry.
  • the predetermined number may be set equal to the size of the history buffer 42 of the data target 40. This allows a reset cache (i.e. a cache with all zero entries) to be initialized by setting the current sequence number equal to the size of the history buffer 42 of the data target 40, making all existing cache entries invalid.
  • if the sequence variable continues to increment, it will eventually "wrap." For example, an eight-bit counter wraps when it is incremented past 255; that is, the counter increments from 1111 1111 (decimal 255) to 0000 0000 (decimal 0).
  • the cache is purged and rebased prior to the wrap event. This is accomplished by setting the sequence numbers associated with all currently unused cache entries to zero and subtracting an amount from the sequence numbers associated with each active cache entry equal to the predetermined threshold. For example, if an eight-bit sequence number was used, all currently unused cache entries would be set to zero and all used cache entries would have 255 subtracted from the sequence numbers stored in the sequence number fields 17(2).
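The purge-and-rebase step might be sketched as follows, treating an entry as unused when its sequence number is zero or older than the threshold. These details are assumptions chosen to be consistent with the text, not the patent's exact procedure:

```python
def purge_and_rebase(seqs, current_seq, threshold):
    """Zero stale or unused entries; subtract the threshold from active
    ones, rebasing the sequence numbers before the counter wraps."""
    rebased = []
    for s in seqs:
        if s == 0 or current_seq - s > threshold:
            rebased.append(0)              # unused or expired entry
        else:
            rebased.append(s - threshold)  # active entry, shifted down
    return rebased
```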
  • the functionality described above may be implemented as software executing on a general purpose computer.
  • a program may set aside portions of the computer's random access memory to provide the input buffer 12 and the output buffer 14 and program logic may effect the comparisons between the buffers as noted above.
  • the program may be written in any one of a number of high level languages such as FORTRAN, PASCAL, C, C++, or BASIC.
  • the software could be implemented in an assembly language directed to the microprocessor resident on the target computer, for example, the software could be implemented in Intel 80x86 assembly language if it were configured to run on an IBM PC or PC clone.
  • the software may be embodied on an article of manufacture including, but not limited to, a floppy disk, a hard disk, an optical disk, a magnetic tape, a PROM, an EPROM, or CD-ROM.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention concerns a method for efficiently transmitting binary data in which the data source maintains an input buffer storing a first plurality of data tokens and an output buffer containing a second plurality of data tokens to be transmitted. The data target maintains a buffer containing a plurality of data tokens received by the receiver. The source determines whether a data token triplet stored in the input buffer has been encountered before. If so, the data source inserts into the output buffer information identifying the number of triplets processed by the source since the triplet was encountered. If the triplet has not been encountered, the data source inserts a single data token into the output buffer. A related system and apparatus are also disclosed.
PCT/US1998/012409 1998-05-26 1998-06-15 Appareil et procede de transmission efficace de donnees binaires WO1999062179A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU80725/98A AU8072598A (en) 1998-05-26 1998-06-15 Apparatus and method for efficiently transmitting binary data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US8483898A 1998-05-26 1998-05-26
US09/084,838 1998-05-26

Publications (1)

Publication Number Publication Date
WO1999062179A1 true WO1999062179A1 (fr) 1999-12-02

Family

ID=22187537

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1998/012409 WO1999062179A1 (fr) 1998-05-26 1998-06-15 Appareil et procede de transmission efficace de donnees binaires

Country Status (2)

Country Link
AU (1) AU8072598A (fr)
WO (1) WO1999062179A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5406279A (en) * 1992-09-02 1995-04-11 Cirrus Logic, Inc. General purpose, hash-based technique for single-pass lossless data compression
EP0691628A2 (fr) * 1994-07-06 1996-01-10 Microsoft Corporation Méthode et système de compression de données

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009143120A2 (fr) * 2008-05-19 2009-11-26 Citrix Systems, Inc. Systems and methods for enhanced image encoding
WO2009143120A3 (fr) * 2008-05-19 2010-04-01 Citrix Systems, Inc. Systems and methods for enhanced image encoding
US8295617B2 (en) 2008-05-19 2012-10-23 Citrix Systems, Inc. Systems and methods for enhanced image encoding

Also Published As

Publication number Publication date
AU8072598A (en) 1999-12-13

Similar Documents

Publication Publication Date Title
US6754799B2 (en) System and method for indexing and retrieving cached objects
US9253277B2 (en) Pre-fetching stored data from a memory
EP2314027B1 Switching table in an Ethernet bridge
EP1721438B1 Communication server, methods and systems for reducing transmission volumes over communication networks
US7647417B1 Object cacheability with ICAP
EP1622056A2 Proxy and cache architecture for document storage
US7146371B2 (en) Performance and memory bandwidth utilization for tree searches using tree fragmentation
US6625612B1 (en) Deterministic search algorithm
US20030018688A1 (en) Method and apparatus to facilitate accessing data in network management protocol tables
JP2002511616A High-performance object cache
KR20040073269A Computer system, data processing method, and computer-readable recording medium
EP1782212A1 System and method for maintaining objects in a lookup cache
US20100014516A1 (en) Table lookup mechanism for address resolution
US20020178176A1 File prefetch control method for computer system
US7249219B1 (en) Method and apparatus to improve buffer cache hit rate
US5146560A (en) Apparatus for processing bit streams
US5742611A (en) Client server network and method of operation
WO1999062179A1 (fr) Appareil et procede de transmission efficace de donnees binaires
JPH06290090A Remote file access system
US11048758B1 (en) Multi-level low-latency hashing scheme
CN111541624B Space Ethernet cache processing method
JPH06103150A Method for increasing the speed of updates between systems
EP0344915B1 (fr) Appareil et procédé pour le traitement de séquences de bits
CN116033017A Network data packet processing device and method, and electronic device
Tatarinov Cache Policies for Web Servers

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH GM GW HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
NENP Non-entry into the national phase

Ref country code: KR

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA