WO1996007140A2 - Systeme d'antememoire pour stockage de donnees - Google Patents

Systeme d'antememoire pour stockage de donnees

Info

Publication number
WO1996007140A2
WO1996007140A2 (PCT/EP1995/003447)
Authority
WO
WIPO (PCT)
Prior art keywords
cache
memory
main memory
marking
address
Prior art date
Application number
PCT/EP1995/003447
Other languages
German (de)
English (en)
Other versions
WO1996007140A3 (fr)
Inventor
Jochen Liedtke
Original Assignee
Gmd - Forschungszentrum Informationstechnik Gmbh
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gmd - Forschungszentrum Informationstechnik Gmbh
Priority to JP8508501A (published as JPH10504922A)
Publication of WO1996007140A2 publication Critical patent/WO1996007140A2/fr
Publication of WO1996007140A3 publication Critical patent/WO1996007140A3/fr

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 — Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 — Addressing or allocation; Relocation
    • G06F 12/08 — Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 — Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0806 — Multiuser, multiprocessor or multiprocessing cache systems
    • G06F 12/0813 — Multiuser, multiprocessor or multiprocessing cache systems with a network or matrix configuration

Definitions

  • the invention relates to a cache system with multiple parallel access, which can be used together with parallel buses, networks or crossbar switches in multiscalar, parallel or massively parallel computers.
  • Modern processors need cache memories to bridge the gap between fast processors and slow main memories. Structures as in FIG. 22 are found in multiprocessor systems: 1 to k active processor elements share a cache C_i; these processor elements can be individual units of a single processor (instruction fetch and one or more integer and floating-point units) or complete processors. In addition to one or more connections to memory elements, each cache C_i also has n-1 connections to the other caches. The inter-cache connections are used to implement cache coherence protocols (see under 1.3). In many cases the inter-cache connections are realized not by an n x n crossbar but by an inexpensive connection network, in the extreme case by a single or multiple bus. The type of the connection network is irrelevant for the following, as long as a cache can receive several jobs per cache cycle.
  • n so-called accessors can send up to n orders (a_j, o_j) per cache cycle to the cache.
  • Each order consists of the address a_j to be accessed and the operation o_j to be carried out.
  • a cache index is calculated from the real or virtual address a using a map function and a row of the cache is thus selected. Then a is compared to the address of the memory area currently associated with this cache line (the marking of the cache entry). If there is a match, there is a hit (and the cache line is used instead of the main memory); otherwise there is a miss.
  • the map function is the rule by which the cache index is derived from the address a, for example by selecting suitable address bits.
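By way of illustration only, a minimal Python sketch of the lookup just described; the line count, line size and tag arithmetic are assumptions, not taken from the patent:

```python
LINES = 256        # assumed number of cache lines
LINE_SIZE = 32     # assumed line size in bytes

class DirectMappedCache:
    def __init__(self):
        self.marking = [None] * LINES   # marking (tag) per line
        self.data = [None] * LINES      # useful data per line

    @staticmethod
    def map(a):
        """map function: derive the cache index from the address a"""
        return (a // LINE_SIZE) % LINES

    def lookup(self, a):
        i = self.map(a)
        tag = a // (LINE_SIZE * LINES)   # remaining address bits
        if self.marking[i] == tag:
            return True, self.data[i]    # hit: line used instead of main memory
        return False, None               # miss

c = DirectMappedCache()
print(c.lookup(0x1234))   # (False, None) on a cold cache
```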
  • Direct-mapped caches are simply structured and organized caches, but they lead to higher miss rates than n-way associative caches. The latter consist in principle of n correspondingly smaller direct-mapped cache blocks. FIG. 25 shows a 2-way associative cache. It is ensured that each main memory element is held in at most one block.
  • the map function always addresses an entire line, i.e. n cache entries at the same time. All n markings are read out in parallel and compared with the address a. If all comparisons fail, there is a cache miss. If equality is found exactly once, there is a hit and the cache entry assigned to the way in question is used.
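The n-way lookup can be sketched the same way; again the geometry is assumed, and the exactly-once comparison mirrors the rule above:

```python
LINES, WAYS, LINE_SIZE = 128, 2, 32              # assumed geometry

marking = [[None] * WAYS for _ in range(LINES)]  # n markings per line
data = [[None] * WAYS for _ in range(LINES)]

def lookup(a):
    i = (a // LINE_SIZE) % LINES
    tag = a // (LINE_SIZE * LINES)
    hits = [w for w in range(WAYS) if marking[i][w] == tag]
    if len(hits) == 1:      # equality found exactly once: hit
        return data[i][hits[0]]
    return None             # all comparisons failed: cache miss

marking[3][1] = 7           # install one marking for demonstration
data[3][1] = "value"
print(lookup(7 * LINES * LINE_SIZE + 3 * LINE_SIZE))   # value
```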
  • Cache coherence is ensured by sending invalidate or update messages to the other caches. If a cache receives an invalidate message for an address that it currently holds, it invalidates the corresponding cache entry, so that a new access leads to a cache miss. In the case of an update job, the new value is sent along with the address, so that the cache can take it over and the cache entry remains valid.
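A minimal behavioral sketch of the two message types; representing cache lines by a dictionary and the handler names are illustrative choices, not the patent's structure:

```python
class CoherentCache:
    def __init__(self):
        self.entries = {}               # address -> value; stands in for lines

    def on_invalidate(self, a):
        self.entries.pop(a, None)       # next access to a: cache miss

    def on_update(self, a, value):
        if a in self.entries:           # the new value travels with the
            self.entries[a] = value     # address, so the entry stays valid

c = CoherentCache()
c.entries[0x100] = 42
c.on_update(0x100, 43)
c.on_invalidate(0x100)
print(c.entries)                        # {}
```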
  • the systems described in the context of this invention allow both methods.
  • Cache coherence protocols are based on broadcast or targeted multicast mechanisms. In the first case, the invalidate or update message is always sent to all caches; each cache then checks whether it is affected or not. The best-known method of this type is bus snooping. Directory-based procedures keep track of which caches currently hold copies of each memory area; in the event of an invalidate or update, only the caches that are marked as active for this address need to be notified via multicast.
  • a cache system is to be developed which efficiently supports multiple parallel cache access.
  • the invention proposes a cache system with the features of claim 1 or claim 12.
  • Advantageous embodiments of the invention are specified in the subclaims.
  • the cache system for storing data is provided with a cache main memory with a plurality of entries, each of which has at least one marking field for address data and at least one useful data field assigned to the marking field, and with at least one cache secondary memory.
  • Contents of the marking fields of the cache main memory are also stored in the cache secondary memories. Updates of the markings are propagated via the connection from the cache main memory to each cache secondary memory. A cache secondary memory addressed in order to invalidate a marking field forwards this information to the cache main memory.
  • the contents of the marking fields of the cache secondary memory are expediently the same as the contents of the marking fields of the cache main memory.
  • each entry in the cache main memory has a plurality of marking fields; all marking fields of the cache main memory are combined into different groups, each group being assigned exactly one marking field per entry of the cache main memory; the number of cache secondary memories is equal to the number of groups of marking fields; and each cache secondary memory has, per entry, a number of marking fields equal to the number of groups of marking fields of the cache main memory, exactly one marking field being provided per entry for each group of marking fields (see FIG. 2).
  • the cache main memory is thus a multi-way associative cache whose marking fields are either all stored in each of the cache secondary memories or distributed over a plurality of cache secondary memories (see FIGS. 1, 2, 5 and 6). Unless it is expressly mentioned above or below that an entry or a marking field is invalid, it is always assumed that the entry or the marking field is valid.
  • the cache system is advantageously operated in such a way that when an operation job to be forwarded to the cache main memory arrives at a cache secondary memory, the cache secondary memory is first indexed with the address, and the operation job is only then forwarded to the cache main memory if the content of a marking field of the cache secondary memory matches the address.
  • the cache system expediently has at least two cache secondary memories, a selection device being connected between the cache secondary memories and the cache main memory which, in the case of jobs forwarded simultaneously from the cache secondary memories to the cache main memory, selects which job is forwarded and/or in which order the jobs are forwarded to the cache main memory (see FIG. 3).
  • each entry of a cache secondary memory is supplemented by a pending-order field for the temporary storage of operation jobs for the cache main memory, an order only being temporarily stored if the address used for indexing matches the content of exactly one marking field of this cache secondary memory and the job cannot be forwarded directly to the cache main memory (see FIG. 4).
  • the sum of the numbers of marking fields of all cache secondary memories is equal to the number of marking fields of the cache main memory, the marking fields of the cache secondary memories forming a decomposition of the marking fields of the cache main memory, and on the basis of an externally supplied address it is decided in which cache secondary memory the content of the marking field belonging to this address is located, provided that this address is present in a cache secondary memory at all (see FIGS. 5 and 6).
  • each entry of the cache main memory has several marking fields; all marking fields of the cache main memory are combined into different groups, each group being assigned exactly one marking field per entry of the cache main memory; the number of cache secondary memories is equal to the number of groups of marking fields; and each cache secondary memory has one marking field per entry and as many entries as a marking field group comprises marking fields (see FIGS. 5 and 6).
  • the connection between the cache secondary memories and the cache main memory is preferably designed in such a way that contents of marking fields can only be exchanged between a group of marking fields of the cache main memory and the marking fields of the relevant cache secondary memory (FIG. 6).
  • each entry of the cache main memory is assigned group information, each group information being assigned a blocking signal which is in the set or in the reset state; an operation on the cache main memory can only be carried out if the blocking signal assigned to the group information of the entry of the cache main memory addressed by means of the address is in the reset state (see FIGS. 7 to 9).
  • each cache secondary memory which temporarily stores a not directly forwardable job for performing an operation with an address on the cache main memory sets the blocking signal associated with the corresponding group information of the cache main memory; this blocking signal is reset when no more orders to be forwarded to the cache main memory are stored in the cache secondary memory (see FIGS. 10 to 12).
  • a variant of the cache system for storing data is provided with a plurality of cache memories, each having a plurality of entries, each of which has at least one marking field for address data, and at least one distributor device, which is coupled to each cache memory, the distributor device receiving an operation job for performing an operation with an address on one of the cache memories and selectively forwarding this job to one or more cache memories (see Figures 13 to 18).
  • the cache memories can be designed as main cache memories and / or as secondary cache memories.
  • a selection device is preferably connected upstream of each cache memory, a plurality of distributor devices being provided, each of which is connected to each of the selection devices assigned to the cache memories; in the case of simultaneous receipt of several orders from the distributor devices, each selection device selects which of the jobs and/or in which order the jobs are forwarded to the assigned cache memory (see FIGS. 14 to 17).
  • a buffer device for temporarily storing orders is connected between each selection device and the associated cache memory (see FIGS. 15 to 17).
  • if the distributor devices simultaneously receive several orders, those with the highest priority are forwarded with preference over those with lower priority (see FIGS. 16 to 18).
  • the highest priority jobs are routed past the distributors to all of the caches (see Figures 17 and 18).
  • the distribution of the jobs to the cache memory is based on the address.
  • Fig. 1 shows a cache system with a direct-mapped cache as the cache main memory and shadow caches as cache secondary memories,
  • Fig. 2 shows a cache system with a 2-way associative cache main memory and shadow caches as cache secondary memories,
  • FIG. 3 shows a cache system with a selection device between the main cache memory, which communicates with a processor, and the secondary cache memories,
  • FIG. 5 shows a cache system with a cache main memory and address-selective cache secondary memories,
  • FIG. 6 shows a cache system with a 2-way associative cache main memory and with way-selective cache secondary memories.
  • FIG. 7 shows a cache system with a blocking control, the shading being derived from the address,
  • FIG. 8 shows a cache system with a blocking control, the shading being supplied by the cache main memory,
  • FIG. 9 shows a cache system with a blocking control, the shading being provided by a TLB,
  • FIG. 10 shows a cache system with a cache main memory and a plurality of cache secondary memories which are connected to the processor via a block bus,
  • FIG. 11 shows a cache system with blocking control by bit masks,
  • FIG. 13 shows a four-way parallel cache system with four cache modules, selective access to the cache modules being shown in the left part and associative access to the cache modules in the right part,
  • FIG. 16 shows a four-way parallel cache system with 3-deep buffers upstream of the cache modules, the orders fed to one distributor device being preferred over the orders fed to the other distributors,
  • FIG. 17 shows a four-way parallel cache system with associative parallel access, latency-critical accessors being preferred over latency-uncritical accessors,
  • Fig. 21 shows a distributor device with a shadow cache which distributes the jobs on the basis of the shading.
  • a normal cache consists of marking and data fields.
  • a simple shadow cache is a replication of the cache without the data fields. FIG. 1 shows a direct-mapped cache, FIG. 2 a 2-way associative cache, each with shadow caches.
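The relationship between cache and shadow cache can be sketched as follows; the classes and the copy-at-construction are illustrative:

```python
class Cache:
    def __init__(self, lines):
        self.marking = [None] * lines    # marking (tag) fields
        self.data = [None] * lines       # data fields

class ShadowCache:
    """replica of the cache's marking fields only, no data fields"""
    def __init__(self, cache):
        self.marking = list(cache.marking)

    def hit(self, index, tag):
        # true iff this shadow cache delivers a hit at the given address
        return self.marking[index] == tag

c = Cache(64)
c.marking[5] = 0xAB
s = ShadowCache(c)                       # markings replicated at build time
print(s.hit(5, 0xAB), s.hit(5, 0xCD))    # True False
```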
  • Both the cache main memory C and the shadow caches S_i (as cache secondary memories) receive orders from outside, each consisting of an operation o to be carried out and the address a to be accessed. Changes in the marking fields are sent from the cache C to the shadow caches S_i via the so-called shadow bus (the connection between the cache C and the shadow caches S_i).
  • hit is a Boolean function which assumes the value true if and only if the specified cache delivers a hit at the specified address.
  • a shadow cache can provide more than just the hit information (hit/miss), for example also the way and the line address of the entry addressed. If this is used, the information in the shadow cache must always be consistent with the corresponding cache information in the event of a hit. To maintain this consistency, the cache propagates changes in the marking fields to all of its shadow caches. Because of the invariance condition specified above, invalidations may well be delayed, but not new entries in the cache: these are propagated to the shadow caches before the cache entry becomes valid, so that they become valid there at the same time at the latest. Designating the cache address (way and line number) of the cache entry to be changed with x results in the following sequence:
  • for an invalidation: the cache sets C_x := invalid and sends msg(x, invalid) to all shadow caches S_i, which then invalidate their entry x (this propagation may be delayed). For a new entry a_new the order is reversed: the cache first sends msg(x, a_new) to all shadow caches, which enter the new marking, and only then does the cache entry C_x become valid.
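A behavioral sketch of this ordering rule, with assumed message and method names:

```python
class Shadow:
    def __init__(self, n):
        self.marking = [None] * n

    def receive(self, x, tag):          # msg(x, tag) arriving on the shadow bus
        self.marking[x] = tag

class CacheWithShadows:
    def __init__(self, n, shadows):
        self.marking = [None] * n
        self.valid = [False] * n
        self.shadows = shadows

    def enter(self, x, a_new):
        # new entry: propagate to the shadow caches BEFORE the cache
        # entry becomes valid, so they become valid there no later
        for s in self.shadows:
            s.receive(x, a_new)
        self.marking[x] = a_new
        self.valid[x] = True

    def invalidate(self, x):
        # invalidation: the cache may go first, shadow caches may lag
        self.valid[x] = False
        for s in self.shadows:
            s.receive(x, None)

s = Shadow(8)
c = CacheWithShadows(8, [s])
c.enter(3, 0x40)
print(s.marking[3], c.valid[3])         # 64 True
```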
  • shadow caches can be addressed both from the outside and from the shadow bus. Competing accesses must either be permitted in terms of hardware (multiported) or be serialized (arbiter).
  • Shadow caches can also be implemented, for small numbers, by multiporting the marking memory of the actual cache. The propagation of changes then takes place automatically.
  • shadow caches can also be designed to be multi-ported. In this way, n logical shadow caches can be implemented using n / 2 dual-ported shadow caches.
  • instead of the cache address x, the memory address currently assigned to the entry can also be used for propagation via the shadow bus.
  • Shadow caches can contribute to an increase in efficiency if cache jobs that lead to cache misses and do not trigger any change in the cache frequently have to be processed, since such jobs can be ignored. This is the case, for example, with bus snooping or similar broadcast-based coherence protocols: cache invalidations or updates are only necessary in the event of a hit; misses can be ignored.
  • Fig. 3 shows a processor element with a cache C and 4 shadow caches S_i which can handle up to 4 jobs (a_i, o_i) in parallel.
  • the shadow bus is not shown for reasons of clarity.
  • if S_i receives the order to carry out operation o_i with the address a_i, it checks whether a_i triggers a hit in the shadow cache S_i. In the event of a hit it forwards the job to the cache C, otherwise it ignores it. If several shadow caches forward jobs simultaneously to the cache C, an arbiter (selection device) chooses one of them and blocks the others.
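A sketch of this per-cycle filtering and arbitration; modelling a shadow cache by its set of currently valid addresses, and a fixed-priority arbiter, are simplifications:

```python
class Shadow:
    def __init__(self, addresses):
        self.addresses = set(addresses)  # addresses currently marked valid

    def hit(self, a):
        return a in self.addresses

def cache_cycle(shadows, jobs):
    """jobs[i] = (a_i, o_i) offered to shadow cache i in this cycle"""
    forwarded = [(a, o) for s, (a, o) in zip(shadows, jobs) if s.hit(a)]
    if not forwarded:                    # every job was a miss: all ignored
        return None, []
    return forwarded[0], forwarded[1:]   # arbiter picks one, blocks the rest

chosen, blocked = cache_cycle(
    [Shadow({1}), Shadow({2}), Shadow(set())],
    [(1, "invalidate"), (2, "invalidate"), (3, "update")])
print(chosen, blocked)                   # (1, 'invalidate') [(2, 'invalidate')]
```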
  • Such blockages can be somewhat alleviated by using buffers between the shadow cache and the actual cache.
  • the number x of the shadow cache entry addressed can also be used for the forwarding to the cache C. This enables the cache C to be accessed directly instead of associatively, but requires consistency between the assignments of S_i and C.
  • the pending-order field can be expanded so that further order types including parameters can also be stored.
  • Another variant is to expand the pending-order field so that a pending-order queue in the shadow cache can be implemented as a singly or doubly linked list.
  • a shadow cache with a pending-order field per entry is never blocked, since there can never be too many pending orders: in the extreme case, all entries of the shadow cache represent pending orders. Each order (a, o) coming from outside can then either be ignored or implemented by modifying an existing pending order.
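A sketch of this never-blocked property with one pending-order field per entry; the concrete merge rule (an invalidate subsumes a pending update) is an assumed example, the text above leaves the modification rule open:

```python
class Entry:
    def __init__(self, tag):
        self.tag = tag
        self.pending = None              # at most one pending order per entry

def offer(entry, tag, op):
    """accept an outside job without ever blocking"""
    if entry.tag != tag:
        return                           # miss: the job is ignored
    if entry.pending is None:
        entry.pending = op               # park it for the cache main memory
    elif op == "invalidate":
        entry.pending = "invalidate"     # an invalidate subsumes an update

e = Entry(tag=0x7)
offer(e, 0x7, "update")
offer(e, 0x7, "invalidate")
print(e.pending)                         # invalidate
```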
  • a subclass of the selective shadow caches are the address-selective shadow caches, which distribute jobs to the shadow caches based on the address a.
  • a connection network, as in FIG. 5, can replace the shadow bus.
  • each way of an n-way associative cache C is assigned a shadow cache S_i.
  • the set of addresses of all valid entries in way i of the cache is then exactly the set of addresses for which the shadow cache S_i delivers a hit.
  • the use of way-selective shadow caches presupposes either that each job (a, o) is presented to all shadow caches S_i, or that on cache misses, i.e. when loading data into the cache C, the way of the new entry can be predetermined. This can be achieved, for example, by deriving the way from the address a or from the shading (see under 4.).
  • Address- or way-selective shadow caches can be advantageous in particular in combination with n-parallel caches (see under 5.4).
  • in the event of a hit, the shading s is used to check the assigned blocking signal block_s. If it is not set, the procedure is the same as for a normal cache hit. However, if it is set, the cache operation is not carried out and the processor operation causing it is suspended (similar to a cache miss).
  • Applying a block_i signal blocks all cache entries with shading i. The blockade is removed by resetting block_i.
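A sketch of this blocking discipline; the number of shadings and the return values are illustrative:

```python
N_SHADINGS = 4
block = [False] * N_SHADINGS             # one block_i signal per shading i

def access(shading, hit):
    if not hit:
        return "miss"
    if block[shading]:                   # set: operation not carried out,
        return "suspended"               # the causing operation is suspended
    return "hit"

block[2] = True                          # blocks all entries with shading 2
print(access(2, hit=True), access(1, hit=True))   # suspended hit
```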
  • the shading s is derived from the addressed virtual or real address with each access, for example by selecting a few address bits.
  • the shading s is provided by the cache on each access. For this purpose it is either derived from the number of the cache entry (or of the way) or stored in the cache for each entry in a shading field. In the latter case, in the event of a cache miss, the shading must be loaded in addition to the data.
  • the shading s can also be provided by the TLB (Translation Look-aside Buffer), see FIG. 9.
  • the shading is defined at the page level. As a rule, the shading information will therefore be contained in the page table entries as additional information in order to be loaded into the TLB from the page table entries in the event of a TLB miss.
  • Block control can be used by shadow caches with pending-order handling (see under 3.2) to prevent access to cache entries for which an invalidate or update request has come from outside but which (due to congestion) could not yet be forwarded to the actual cache.
  • FIG. 10 shows such a block bus, the shading here being provided by the cache.
  • Each shadow cache S_i sets the signal block_shading(a_i) as soon as it receives a job (a_i, o_i) and cannot immediately forward it to the cache.
  • the shadow cache can derive the shading from the address a_i or from the shadow cache index or way, or keep it in a shading field for each entry in the shadow cache. In the latter case, the cache must, in the event of changes, in particular when reloading, propagate not only the marking but also the shading via the shadow bus or the network to the affected shadow caches.
  • the above-mentioned method of shading fields and propagation can be used.
  • synonyms, i.e. data with different virtual but identical real addresses, must then always have the same shading under all virtual addresses.
  • the shading can be stored as a number in the shadow cache. Alternatively, an n-bit-wide bit mask can be used to store a set of shadings, in which each bit represents exactly one shading; set bits then denote assigned shadings. The shadow cache then drives this mask bitwise onto the block bus, so that all synonyms are blocked as well.
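A sketch of the bit-mask variant; the wired-OR of the masks driven onto the block bus is modelled by a bitwise OR:

```python
def mask_of(*shadings):
    m = 0
    for s in shadings:
        m |= 1 << s                      # bit s set: shading s is assigned
    return m

# wired-OR of the masks the shadow caches drive onto the block bus
block_bus = mask_of(1, 5) | mask_of(5)

def blocked(shading):
    return bool(block_bus & (1 << shading))

print(blocked(5), blocked(3))            # True False
```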
  • the cache system is divided into n independently operating cache modules CM_0 ... CM_(n-1).
  • the individual modules can be direct-mapped, k-way associative or fully associative.
  • the pair (a, o) is passed from a distributor X to a cache module for carrying out the operation.
  • a selective access is shown on the left in FIG. 13, which is directed to CM_2 (active phases or cache modules are each indicated by thick lines).
  • the right field of FIG. 13 shows an associative access (a, o) which is directed to all cache modules for execution at the same time. Partial accesses (addressing a subset of the cache modules) are also possible.
  • the n-parallel cache can be operated by means of associative accesses like an n-way associative cache. Since each selective access activates only one cache module, up to n selective accesses can be carried out in parallel, provided that each access can be allocated to a different module. For this purpose, as in FIG. 14, the access sources are each provided with their own distributor X_i and each distributor is connected to each cache module. Conflicts are resolved by a 1-out-of-n arbiter per module: if a module is selected by several distributors at the same time, the arbiter selects one order and blocks the others. The right part of FIG. 14 shows a conflict situation in which the orders switched from X_2 and X_3 to CM_0 are blocked.
  • the arbiters can be supplemented by job buffers, which implement a queue in front of each cache module (see FIG. 15). With m buffers per module, an m-out-of-n arbiter is used: if the queue is empty, it selects up to m of the pending orders and transfers them to the queue. If k jobs are still waiting in the queue at the cache module, the arbiter accordingly takes over only m - k of the pending jobs.
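A sketch of this take-over rule; the queue depth M and the job tuples are assumptions:

```python
from collections import deque

M = 3                                    # assumed queue depth per cache module

def arbitrate(queue, pending):
    """take over at most M - k pending jobs (k = jobs still queued)"""
    free = M - len(queue)
    accepted, blocked = pending[:free], pending[free:]
    queue.extend(accepted)
    return blocked

q = deque([("a0", "read")])              # k = 1 job already waiting
left = arbitrate(q, [("a1", "inv"), ("a2", "upd"), ("a3", "read")])
print(len(q), left)                      # 3 [('a3', 'read')]
```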
  • if one or more accessors are more latency-critical than the rest, i.e. accesses via them should always be processed without delay, they can be given priority over the queued jobs of the others. In FIG. 16, the accessor assigned to X_0 can overtake the others.
  • latency-critical accesses can take place in parallel with latency-uncritical ones. FIG. 17 shows such a situation, the latency-critical accessor accessing associatively.
  • the distributors X_i can distribute the orders (a, o) to the cache modules using the address a. As a rule, suitable bits of a are selected and used as index i.
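A sketch of such an address-based distribution; which bits of a count as "suitable" is an assumption here:

```python
N_MODULES = 4
LINE_SIZE = 32                           # assumed; offset bits stay in-line

def module_index(a):
    # select suitable bits of a above the line offset as index i
    return (a // LINE_SIZE) % N_MODULES

for a, o in [(0x000, "read"), (0x020, "read"), (0x040, "inv"), (0x060, "upd")]:
    print(f"job ({a:#05x}, {o}) -> CM_{module_index(a)}")
```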
  • the shading s can be used for selection, which, however, must then be supplied to the distributors as a supplement to (a, o).
  • Shadow caches can be used to distribute the orders to the individual modules. For example, with n cache modules CM_j, each distributor X_i can be assigned n shadow caches S_i,0 ... S_i,n-1, each S_i,j being a shadow cache of CM_j. An order (a_i, o_i) is distributed to CM_j if and only if S_i,j delivers a hit for a_i. FIG. 19 shows such a distributor for a 4-parallel cache.
  • alternatively, each distributor X_i can use a single n-way associative shadow cache S_i: an order (a_i, o_i) is distributed to CM_j if and only if S_i reports a hit in way j. FIG. 20 shows such a distributor for a 4-parallel cache. If the individual CM_j are themselves m-way associative, correspondingly associative S_i are taken, and the order is destined for CM_j if S_i reports a hit on way k with j·m ≤ k < (j+1)·m.
  • a shadow cache can also be used in the distributor which, in the event of a hit for a job (a_i, o_i), also delivers the shading s.
  • the job is then distributed using s, for example to CM_s, or to CM_(s mod n) if there are more shadings than modules. FIG. 21 shows a distributor for a 4-parallel cache which distributes on the basis of the shading.
  • the shadow caches described under 3. can be used with n-parallel caches not only for the distribution of orders described above, but also, independently of that, to achieve efficiency increases by ignoring irrelevant orders and to implement buffering, as described under 3.
  • Order storage and blocking can be used accordingly.
  • the blocking can be processor-specific (as described under 4.), but also cache-module-specific.
  • the system according to the invention can have the features of the shadow caches described above and / or the features of the shading described above and / or the features of the n-parallel cache.


Abstract

The invention concerns a cache system for storing data, provided with a cache main memory comprising a plurality of entries, each of which has at least one marking field for address data and at least one useful data field assigned to the marking field. In addition, at least one cache secondary memory comprises a plurality of entries, each of which has one marking field per marking field of an entry of the cache main memory. The cache main memory and each cache secondary memory can be indexed by means of identical or different addresses in order to carry out operation jobs on the cache main memory. A connection between the cache main memory and each cache secondary memory allows the contents of the marking fields of the cache main memory to be transferred to each cache secondary memory and vice versa.
PCT/EP1995/003447 1994-09-01 1995-09-01 Systeme d'antememoire pour stockage de donnees WO1996007140A2 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP8508501A JPH10504922A (ja) 1994-09-01 1995-09-01 データ記憶用キャッシュシステム

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE4431090 1994-09-01
DEP4431090.0 1994-09-01

Publications (2)

Publication Number Publication Date
WO1996007140A2 true WO1996007140A2 (fr) 1996-03-07
WO1996007140A3 WO1996007140A3 (fr) 1996-05-02

Family

ID=6527128

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP1995/003447 WO1996007140A2 (fr) 1994-09-01 1995-09-01 Systeme d'antememoire pour stockage de donnees

Country Status (3)

Country Link
JP (1) JPH10504922A (fr)
DE (1) DE19532418A1 (fr)
WO (1) WO1996007140A2 (fr)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE3216238C1 (de) * 1982-04-30 1983-11-03 Siemens AG, 1000 Berlin und 8000 München Datenverarbeitungsanlage mit virtueller Teiladressierung des Pufferspeichers
JPH04230549A (ja) * 1990-10-12 1992-08-19 Internatl Business Mach Corp <Ibm> 多重レベル・キャッシュ

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0366323A2 (fr) * 1988-10-28 1990-05-02 Apollo Computer Inc. File d'attente d'invalidation de duplicat de mémoire d'étiquettes

Also Published As

Publication number Publication date
DE19532418A1 (de) 1996-03-14
WO1996007140A3 (fr) 1996-05-02
JPH10504922A (ja) 1998-05-12


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): JP US

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

AK Designated states

Kind code of ref document: A3

Designated state(s): JP US

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
ENP Entry into the national phase

Ref country code: US

Ref document number: 1997 793479

Date of ref document: 19970618

Kind code of ref document: A

Format of ref document f/p: F

122 Ep: pct application non-entry in european phase