EP1079307B1 - Method for operating a memory system as well as memory system - Google Patents
Method for operating a memory system as well as memory system
- Publication number
- EP1079307B1 EP00202908.0A
- Authority
- EP
- European Patent Office
- Prior art keywords
- address
- line section
- cache
- prefetch
- memory
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Lifetime
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0862—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- the invention relates to a method for operating a memory system comprising a main memory and an associated cache memory structured in address-related lines, into which data can be loaded from the main memory and read out as needed by a processor, wherein, during an access of the processor to data at a certain address in the cache memory, under which data from a corresponding address of the main memory are stored, it is checked whether sequential data are stored in the cache memory at the next following address; if absent, these data can be loaded from the main memory into the cache memory in the context of a prefetch.
- a cache memory is a fast buffer memory from which the required data are available to the processor much faster than from the considerably slower main memory.
- to ensure that the data required by the processor are present in the cache memory in the vast majority of cases, the use of prefetch has proven effective.
- in a prefetch operation, data which the processor does not yet need at the moment, but which it is likely to need shortly given its current accesses, are transferred in advance from the main memory into the cache memory or a corresponding register, so that they are available there when the processor requires them.
- a cache memory is subdivided into a multiplicity of lines, which are usually also structured in columns, each memory line being assigned a specific line address.
- One line of the main memory is written to one line of the cache memory, with the line address specified in the cache memory corresponding to that of the main memory.
- Each line thus has a certain length over which the data is written.
- the operation of such a memory system is such that, on a hit in cache line N, which contains the data of address A, it is simultaneously checked whether cache line N+1 contains the data of address A + line length.
- on a hit in cache line N it is thus always checked whether the subsequent line contains sequential data which the processor is expected to require shortly. If this is not the case, the data of address A + line length are loaded in a prefetch operation either into the cache memory or into a corresponding associated register.
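For illustration only, the conventional strategy just described can be sketched in a few lines of Python; the set-based cache model, the function names, and the line length of 16 are assumptions made for the example, not details from the patent.

```python
LINE_LENGTH = 16  # assumed line length in bytes

def line_of(address: int) -> int:
    """Main-memory line to which an address belongs."""
    return address // LINE_LENGTH

def conventional_access(cache: set, address: int) -> str:
    """Conventional strategy: on every hit in line N, check line N+1
    and prefetch it from main memory if it is absent."""
    n = line_of(address)
    if n not in cache:
        return "miss"  # slow path: the line must be fetched from main memory
    if n + 1 not in cache:
        cache.add(n + 1)  # prefetch the sequential line (address A + line length)
        return "hit+prefetch"
    return "hit"
```

In this model every hit can trigger a prefetch, which is exactly the behaviour whose drawbacks are discussed below.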
- the use of prefetch for a cache basically increases the hit rate achieved by the processor in the cache memory.
- a high hit rate in turn results in better system performance, because on a hit the memory access can be performed without delay, while a miss leads to an access to the slower main memory and slows the system down.
- prefetching, however, also has disadvantages.
- on the one hand, data may be loaded from the main memory into the cache memory that are never needed at all; this risk exists in particular for large line lengths, since in that case a large amount of data is transferred into the cache memory during a prefetch operation. Since every data transfer contributes to power consumption, the power dissipation increases.
- on the other hand, a conflict arises if a cache miss occurs during a running prefetch operation, i.e. if the processor does not find the required data in the cache. The prefetch operation must then either be aborted or completed before the required data can be loaded. In both cases additional delays occur which reduce the performance of the system.
- "Prefetching using a pageable branch history table", IBM TECHNICAL DISCLOSURE BULLETIN, XP002152936, describes a cache line prefetch using a pageable branch history table (PBHT), dividing a cache line into areas and specifying for each area the subsequent line to be used.
- "Buffer block prefetching method", IBM TECHNICAL DISCLOSURE BULLETIN, XP002152937, describes a prefetching method for high-speed buffer (cache) processors, where prefetching is controlled by a special bit associated with each block position in the cache.
- the invention is therefore based on the problem of providing a method for operating a memory system which, when a prefetch strategy is used, improves the performance of the system and reduces power consumption.
- a prefetch takes place only when the processor accesses a predetermined line section lying within a line.
- a prefetch operation is advantageously initiated only if the processor accesses data lying within a predetermined line section of a line. Only in this case, if the check for the presence of the sequential data reveals that they are missing, is a prefetch performed. This is based on the insight that the probability that the processor will require the data of the line sequentially following the line currently being processed is the greater, the further back in line N the hit lies, i.e. the further back the processor reads out data. In that case it can be assumed that the processor, executing a particular program, will continue reading out the data of line N and will shortly need the sequential data of the subsequent line N+1 as stored in the main memory.
- a prefetch is not necessarily performed each time upon detection of the absence of the sequential data, but only in preferred cases. This reduces the number of prefetch operations performed during a program run, so that the power consumption caused by the loading of data into the cache memory or the register associated with it also drops significantly. Compared to a conventional prefetch strategy, power consumption for data transfer can be reduced by more than 30% with improved system performance.
- a particularly preferred application of this prefetch strategy is in the field of mobile radiotelephony, in particular when the memory system is used in a mobile radiotelephone terminal.
- the line section can be defined according to the invention by means of a predetermined line section address and comprise all addresses of a line which are greater than or equal to the line section address; the address within the line accessed by the processor is compared with the line section address in order to determine from this comparison whether the processor is currently working outside or within the predetermined line section, so that it can be decided whether a prefetch may be initiated or not.
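A minimal sketch of this decision rule, under the same kind of assumptions as above (a Python set as cache model, a line length of 16, and a line section address of 12 chosen arbitrarily for the example):

```python
LINE_LENGTH = 16           # assumed line length
LINE_SECTION_ADDRESS = 12  # assumed: offsets 12..15 form the line section

def in_line_section(address: int) -> bool:
    """Compare the offset of the access within its line with the
    predetermined line section address."""
    return address % LINE_LENGTH >= LINE_SECTION_ADDRESS

def should_prefetch(cache: set, address: int) -> bool:
    """Initiate a prefetch only on a hit, only if the sequential line is
    absent, and only if the access lies within the line section."""
    line = address // LINE_LENGTH
    return (line in cache
            and line + 1 not in cache
            and in_line_section(address))
```

With this gate, an access early in a line never triggers a prefetch even when the next line is missing; only accesses near the end of the line do.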
- the final placement or selection of the line section address depends essentially on the performance parameters of the processor, in particular the processor clock and the data transfer rate from the main memory, as well as on the structure of the program which the processor executes, since this determines whether the processor essentially processes the data continuously, address by address and line by line, or whether very many line jumps occur during execution of the program.
- the line section address should be chosen so that in the majority of cases the prefetch operation is already completed before the processor accesses the data.
- Each line of the cache memory may be divided into a plurality of line areas of particular length, each associated with a separate line area address, the line area address being compared to the line section address.
- the line section address may be given in the form of a numerical value, a comparison value being determined from the address within the line, or from the line area address, at which the processor is currently accessing.
- provision can be made for a plurality of line sections of different lengths to be determined, the line section to be used for determining a prefetch to be executed being selected depending on the program section of a program to be executed by the processor within which the processor is operating at the time of the determination.
- the line section or the line section address can be changed, during the runtime of a program executed by the processor, by the program itself.
- a dynamic change of the line section is thus realized, so that it is possible to define separate line sections for different program parts, which may be structured differently, and to define the trigger conditions for a prefetch separately for each program section. In this way it is possible to react to different program structures. If the decision to initiate a prefetch operation is implemented so that it can be configured by software, this parameter can be changed dynamically.
- the invention further relates to a data memory system comprising a main memory and a multi-line cache memory, into which data read from the main memory can be written and from which they can be read by a processor when needed, as well as a prefetch means for determining whether data are to be prefetched from the main memory for transfer to the cache memory, and for performing the prefetch.
- this memory system is characterized in that in the prefetch means at least one line section address, related to the length of a line of the cache memory, is determined or determinable, by means of which a line section within a line of the cache memory is defined, the prefetch means being designed to compare the line section address with an address within a line accessed by the processor, to determine whether the access lies within the line section, and to perform a prefetch depending on the comparison result.
- each line of the cache memory may be divided into a plurality of line areas, each having a separate line area address comparable with the line section address; this line area address indicates the access location of the processor and is correspondingly compared with the line section address by the prefetch means to determine whether a prefetch would be possible at all.
- in order to avoid that data loaded from the main memory in a prefetch must be written directly into the cache memory, it has proven expedient for the cache memory to comprise a register memory into which the prefetch data fetched by means of the prefetch means can be written. If prefetch data stored in the register memory of the cache memory are required, they are passed from the register to the processor and also stored in the cache memory for subsequent accesses.
- Fig. 1 shows a memory system 1 according to the invention, comprising a main memory 2 with a large storage volume but a relatively slow operating speed. Associated with this is a cache memory 3, which is a much faster buffer memory. Connected or associated with the cache memory 3 and the main memory 2 is a prefetch means 4, by means of which, on the one hand, it is determined whether a prefetch is to be performed at all and, on the other hand, the prefetch is carried out. Also assigned to the memory system 1 is a processor 5, which in the example shown communicates with the cache memory 3 and the prefetch means 4.
- the overall configuration shown in Fig. 1, which is preferably provided on a common chip, which is advantageous in terms of data transmission and power consumption, is designed as described for performing a prefetch. As part of this, on an access of the processor to a cache memory line, and there to a specific address within the line, it is checked whether a subsequent line containing sequential data is also already present in the cache memory 3. If this is not the case, the missing line is transferred from the main memory 2 into the cache memory 3 via the prefetch means 4, or such a transfer is initiated.
- Fig. 2 shows, in the form of a schematic diagram, a possible cache memory configuration and serves to explain the procedure for initiating a possible prefetch.
- the processor 5 provides an address 6 to the cache memory indicating what data the processor needs.
- the address 6 consists of a total of three address sections I, II, III.
- the first address section I indicates the page address of the main memory to which the data of a cache memory line is assigned.
- the cache memory 3 has a corresponding cache address directory 7 in which page addresses 8 corresponding to the main memory page addresses are stored. The cache address directory 7 is addressed with address section II, and address section I is compared with its contents. In the example shown, the page address is "0x01".
- address section II, representing a line address, indicates in which line of the cache memory 3 the data stored under address section I are present; this information thus represents the link between the cache address directory and the cache. In the example shown, this address information is "Z5". Finally, address section III indicates the address, within the line, of the respective data to be read out. As Fig. 2 shows, the cache memory 3 is divided into a plurality of lines Z1, Z2, ..., ZN as well as a plurality of columns S1, S2, ..., SM. The information in address section III is "S6" and thus designates the data in column S6 as the data to be read.
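The decomposition of address 6 into the sections I, II and III can be illustrated as follows; the field widths (4 column bits, 8 line bits) are assumptions chosen so that the example values fit, since the patent does not specify them.

```python
COLUMN_BITS = 4  # assumed width of address section III (column)
LINE_BITS = 8    # assumed width of address section II (line)

def split_address(address: int):
    """Return (page, line, column), i.e. address sections I, II and III."""
    column = address & ((1 << COLUMN_BITS) - 1)
    line = (address >> COLUMN_BITS) & ((1 << LINE_BITS) - 1)
    page = address >> (COLUMN_BITS + LINE_BITS)
    return page, line, column

# Example corresponding to Fig. 2: page 0x01, line 5 ("Z5"), column 6 ("S6").
addr = (0x01 << (COLUMN_BITS + LINE_BITS)) | (5 << COLUMN_BITS) | 6
```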
- when the cache memory receives the address 6 from the processor, it is automatically checked whether the following data, which are stored in the corresponding following line of the main memory, are already present in the cache memory 3. It is therefore checked whether the line following the address "0x01/Z5", namely the line "0x01/Z6", is likewise already present in the cache memory 3. This is done by checking the cache address directory 7 in step 9. If the line is found to be present, these data are available; if the line is missing, a prefetch may be required to load it from the main memory.
- for such a prefetch actually to be initiated, the processor must access an address in the cache memory 3 which lies within a predetermined line section.
- this line section is determined in the example shown using the line section address "X".
- the line section value X can ultimately be set arbitrarily, that is, the size of the respective line section can be chosen freely. If, for example, as in the prior art, a prefetch is to be possible for every processor access, the line section value X is set to S1; the line section then comprises all column sections. If a prefetch is never to be performed, the line section value X is set to 0, for example.
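Treating X as a column number makes the two boundary settings easy to see; reading X = 0 as a disable sentinel is one possible interpretation of the text above, and the column count of 8 is an assumption for the example.

```python
COLUMNS = 8  # assumed columns S1..S8, numbered 1..8

def prefetch_possible(column: int, x: int) -> bool:
    """column: 1-based column of the access; x: line section value X.
    X = S1 (i.e. 1) allows a prefetch on every access, X = 0 never."""
    if x == 0:
        return False       # sentinel: prefetching disabled entirely
    return column >= x     # access lies in the line section from column X on
```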
- Fig. 3 finally shows the configuration of a cache memory corresponding to Fig. 2.
- here the cache memory 3 is assigned a plurality of register memories 12', 12'', 12''', 12'''', into which data loaded from the main memory in the context of a prefetch are written.
- the write-in strategy may be such that new prefetch data always overwrite the register memory containing the oldest prefetch data. This takes into account the fact that, when processing programs with very frequent jumps, prefetch data which were loaded but not needed because of a jump are held for a certain time before being overwritten, since the program may return to the original line so that these data are still needed. Immediate overwriting by a subsequent prefetch is excluded, because those data would be written into another register memory.
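This oldest-first overwrite strategy behaves like a small FIFO over the four register memories; the following Python model is an assumption for illustration (the patent describes hardware registers, not a software queue).

```python
from collections import deque

class PrefetchRegisters:
    """Model of the register memories 12'..12'''': new prefetch data
    always overwrite the slot holding the oldest prefetch data."""

    def __init__(self, count: int = 4):
        self.slots = deque(maxlen=count)  # oldest entry is dropped first

    def store(self, line_tag: str, data: bytes) -> None:
        """Write newly prefetched data, evicting the oldest if full."""
        self.slots.append((line_tag, data))

    def lookup(self, line_tag: str):
        """Return the data for a line if it is still held, else None."""
        for tag, data in self.slots:
            if tag == line_tag:
                return data
        return None
```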
- the cache memory configurations shown are merely exemplary.
- the cache memory can be structured as desired.
- the memory may be implemented as a direct-mapped cache or as a two-way set-associative cache.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Description
The invention relates to a method for operating a memory system comprising a main memory and an associated cache memory structured in address-related lines, into which data can be loaded from the main memory and read out as needed by a processor, wherein, during an access of the processor to data at a certain address in the cache memory, under which data from a corresponding address of the main memory are stored, it is checked whether sequential data are stored in the cache memory at the next following address; if absent, these data can be loaded from the main memory into the cache memory in the context of a prefetch.
In known memory systems, a large number of the data stored in the main memory are loaded into a cache memory even before they are requested by an associated processor executing a corresponding program. A cache memory is a fast buffer memory from which the required data are available to the processor much faster than from the considerably slower main memory. To ensure that the data required by the processor are present in the cache memory in the vast majority of cases, the use of prefetch has proven effective. In a prefetch operation, data which the processor does not yet need at the moment, but which it is likely to need shortly given its current accesses, are transferred in advance from the main memory into the cache memory or a corresponding register, so that they are available there when the processor requires them. A cache memory is subdivided into a multiplicity of lines, which are usually also structured in columns, each memory line being assigned a specific line address. One line of the main memory is written into each line of the cache memory, the line address specified in the cache memory corresponding to that of the main memory. Each line thus has a certain length over which the data are written. The operation of such a memory system is such that, on a hit in cache line N, which contains the data of address A, it is simultaneously checked whether cache line N+1 contains the data of address A + line length. On a hit in cache line N it is thus always checked whether the subsequent line contains sequential data which the processor is expected to require shortly. If this is not the case, the data of address A + line length are loaded in a prefetch operation either into the cache memory or into a corresponding associated register.
Basically, the use of prefetch for a cache increases the hit rate achieved by the processor in the cache memory. A high hit rate in turn results in better system performance, because on a hit the memory access can be performed without delay, while a miss leads to an access to the slower main memory and slows the system down.
The use of prefetch, however, also brings disadvantages. On the one hand, data may be loaded from the main memory into the cache memory that are never needed at all; this risk exists in particular for large line lengths, since in that case a large amount of data is transferred into the cache memory during a prefetch operation. Since every data transfer contributes to power consumption, the power dissipation increases. On the other hand, a conflict arises if a cache miss occurs during a running prefetch operation, i.e. if the processor does not find the required data in the cache. The prefetch operation must then either be aborted or completed before the required data can be loaded. In both cases additional delays occur which reduce the performance of the system.
The invention is therefore based on the problem of providing a method for operating a memory system which, when a prefetch strategy is used, improves the performance of the system and reduces power consumption.
Particular and preferred aspects of the invention are set forth in the appended independent and dependent claims.
To solve this problem, it is provided according to the invention, in a method of the type mentioned at the outset, that a prefetch takes place only when the processor accesses a predetermined line section lying within a line.
In the method according to the invention, a prefetch operation is therefore advantageously initiated only if the processor accesses data lying within a predetermined line section of a line. Only in this case, if the check for the presence of the sequential data reveals that they are missing, is a prefetch performed. This is based on the insight that the probability that the processor will require the data of the line sequentially following the line currently being processed is the greater, the further back in line N the hit lies, i.e. the further back the processor reads out data. In that case it can be assumed that the processor, executing a particular program, will continue reading out the data of line N and will shortly need the sequential data of the subsequent line N+1 as stored in the main memory. The definition of a specific line section, which defines the part of a line that the processor must access in order for a prefetch operation to be possible at all, takes this into account. If the processor accesses an address in the line which lies outside the predetermined line section, a prefetch can not (yet) be initiated.
This increases the performance of the system, since a prefetch operation is initiated only when it is very likely that the data fetched from the main memory will actually be needed. A prefetch is thus not necessarily performed each time the absence of the sequential data is detected, but only in preferred cases. This reduces the number of prefetch operations performed during a program run, so that the power consumption caused by loading data into the cache memory or its associated register also drops significantly. Compared to a conventional prefetch strategy, power consumption for data transfer can be reduced by more than 30% with improved system performance. A particularly preferred application of this prefetch strategy is in the field of mobile radiotelephony, in particular when the memory system is used in a mobile radiotelephone terminal.
For easy definition of the line section, it can according to the invention be defined by means of a predetermined line section address and comprise all addresses of a line that are greater than or equal to the line section address. The address within the line that the processor accesses is compared with the line section address to determine from this comparison whether the processor is currently working outside or within the predetermined line section, so that it can be decided whether a prefetch may be initiated or not.
The final placement or selection of the line section address essentially depends on the performance parameters of the processor, in particular the processor clock and the data transfer rate from the main memory, as well as on the structure of the program the processor executes, since this determines whether the processor processes the data essentially continuously, address by address within a line or line by line, or whether very many line jumps occur during execution of the program. Basically, the line section address should be placed so that in the majority of cases the prefetch operation is already completed before the processor accesses the data.
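The comparison described above can be sketched in a few lines; the function name and the numeric values are illustrative only and not taken from the patent:

```python
def access_in_line_section(column: int, line_section_address: int) -> bool:
    """True if the processor's access offset within the cache line lies in
    the predetermined line section, i.e. at or beyond the line section
    address. Only then may a prefetch be initiated."""
    return column >= line_section_address

# With a line section address of 5, an access to column 6 may trigger a
# prefetch, while an access to column 3 may not.
```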
Each line of the cache memory can be divided into a plurality of line areas of a particular length, each of which is assigned its own line area address, the line area address being compared with the line section address. This takes account of the possibility of configuring a cache memory in various forms and essentially relates to the division of each cache line, or of the entire cache memory, into several columns, where the columns may be of different lengths depending on the cache configuration. A specific amount of data can be written into each line area and can be located by means of the line area address, which is compared with the line section address when checking the "working location" of the processor within the line. The line section address may be given in the form of a numerical value, a comparative numerical value being determined from the address within the line, or from the line area address, at which the processor is currently accessing.
According to an expedient development of the inventive concept, several line sections of different lengths may be determined, the line section used to decide whether a prefetch should be executed being selected depending on the program section, of a program to be executed by the processor, within which the processor is working at the time of the determination. The line section or the line section address is changed by the program itself during the runtime of a program executed by the processor. According to this embodiment, a dynamic change of the line section is thus realized, so that it is possible to define separate line sections for different program parts, which may be structured differently, and to define the trigger conditions for a prefetch separately for each program section. In this way, different program structures can be accommodated. If the decision on initiating a prefetch operation is implemented so that it is configurable by software, this parameter can be changed dynamically.
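The per-program-section selection and runtime adjustment of the line section address can be sketched as follows; the class and method names, and the idea of keying sections by a label, are illustrative assumptions, not part of the patent:

```python
class PrefetchConfig:
    """Software-configurable line section addresses, one per program
    section, with a default for sections the program has not adjusted."""

    def __init__(self, default_x: int):
        self.default_x = default_x
        self.x_per_section = {}  # program section -> line section address X

    def set_section_address(self, program_section: str, x: int) -> None:
        # Called by the running program itself: dynamic change of the
        # line section during runtime.
        self.x_per_section[program_section] = x

    def line_section_address(self, program_section: str) -> int:
        return self.x_per_section.get(program_section, self.default_x)

    def may_prefetch(self, program_section: str, column: int) -> bool:
        # The line section address used for the comparison depends on the
        # program section the processor is working in at this moment.
        return column >= self.line_section_address(program_section)
```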
In addition to the method, the invention further relates to a memory system for data, comprising a main memory and a cache memory structured in several lines, into which data read from the main memory can be written and, when needed, read out by a processor, as well as a prefetch means for determining whether a prefetch of data from the main memory for transfer to the cache memory should take place, and for carrying out a prefetch. This memory system is characterized in that at least one line section address related to the length of a line of the cache memory is determined or determinable in the prefetch means, by means of which a line section within the line of the cache memory is defined, the means being designed to compare the line section address with an address within a line accessed by the processor, to determine whether the access takes place within the line section, and to carry out a prefetch depending on the result of the comparison.
Each line of the cache memory may be divided into several line areas, each with a separate line area address comparable with the line section address, this line area address indicating the access location of the processor and being compared accordingly with the line section address by the prefetch means in order to determine whether a prefetch would be possible at all.
It has furthermore proven expedient if several line section addresses are determined or determinable for defining several line sections, the line section address used for the comparison in each case being selected depending on the program section, of a program executed by the processor, that is being processed at the time the processor accesses the cache memory.
To avoid having to write any data loaded from the main memory in a prefetch directly into the cache memory, it has proven expedient for the cache memory to comprise a register memory into which the prefetch data can be written by means of the prefetch means. If the prefetch data stored in the register memory of the cache memory is required, it is passed from there to the processor and also stored in the cache memory for subsequent accesses.
When only one register memory is used, the following situation may arise: after prefetch data has been written into the register memory, the program jumps to another cache line, where a new prefetch becomes necessary. The old prefetch data in the register memory has not yet been read out at that point, but may soon be needed again when the program jumps back. Since only one register memory is available, however, it is overwritten by the new prefetch. If the program then jumps back, another prefetch may be required. To remedy this, it has proven expedient to provide several register memories, into each of which the prefetch data of different prefetch operations can be written. In this case, prefetch data to be newly written should be written in such a way that the most recent prefetch data is available, the oldest being overwritten.
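The multi-register variant with oldest-first overwriting can be modelled with a small bounded buffer; the class name, register count and string addresses below are illustrative assumptions:

```python
from collections import deque


class PrefetchRegisters:
    """Several register memories holding the results of different prefetch
    operations. When all registers are in use, a new prefetch overwrites
    the register holding the oldest prefetch data."""

    def __init__(self, count: int):
        self.regs = deque(maxlen=count)  # oldest entry is evicted first

    def store(self, line_address, data) -> None:
        # Overwrites the oldest prefetch data once the buffer is full.
        self.regs.append((line_address, data))

    def lookup(self, line_address):
        # Return prefetched data for a line if it is still held.
        for addr, data in self.regs:
            if addr == line_address:
                return data
        return None
```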
Further advantages, features and details of the invention emerge from the embodiments described below and from the drawings, in which:
- Fig. 1 shows a schematic diagram of a memory system with an associated processor,
- Fig. 2 shows a schematic diagram of the structure of a cache memory of a first embodiment, illustrating the operations for initiating a prefetch, and
- Fig. 3 shows a cache memory structure corresponding to Fig. 1 with a plurality of register memories associated with the cache memory.
The address section II, which represents a line address, indicates in which line of the cache memory 3 the data filed under address section I is present. This information thus forms the link between the cache address directory and the cache memory. In the example shown, this address information is "Z5". Finally, the information in address section III indicates the address of the respective data to be read out within the line. As
In the example shown, access is therefore made to data located in the cache memory 3 in line Z5, and there in column block S6. The information to be read out in the example shown is "17".
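The three-part address "0x01/Z5/S6" can be illustrated with an assumed bit layout; the field widths below are hypothetical, since the patent does not specify them:

```python
# Hypothetical field widths for the three address sections: section I (tag,
# identifying the main-memory region), section II (cache line), section III
# (column within the line).
TAG_BITS, LINE_BITS, COL_BITS = 8, 4, 3


def split_address(addr: int):
    """Decompose a flat processor address into (tag, line, column)."""
    col = addr & ((1 << COL_BITS) - 1)
    line = (addr >> COL_BITS) & ((1 << LINE_BITS) - 1)
    tag = addr >> (COL_BITS + LINE_BITS)
    return tag, line, col


# Example corresponding to "0x01/Z5/S6": tag 0x01, line 5, column 6.
addr = (0x01 << (COL_BITS + LINE_BITS)) | (5 << COL_BITS) | 6
```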
If the cache memory now receives address 6 from the processor, it is automatically checked whether the subsequent data, which is filed in the correspondingly following line in the main memory, is already present in the cache memory 3. It is thus checked whether the data line following the address "0x01/Z5", namely the line "0x01/Z6", is also already present in the cache memory 3. This is done by checking the cache directory 7 in step 9. If the presence of the line is established, this data is available; if the line is missing, a prefetch may be required to load this line from the main memory.
In order actually to initiate a prefetch operation, a further condition must be fulfilled in addition to the absence of the sought line: the processor must access an address in the cache memory 3 that lies within a predetermined line section. In the example shown, this line section is determined by the line section address "X". The line section thus defined comprises all addresses of one of the lines Z1, Z2 ... ZN that are greater than or equal to the line section address. If, for example, the line section address were X = S5, the line section would comprise all columns S5, S6 ... SM. If the address S6 currently requested by the processor lies outside this line section, no prefetch takes place; if it lies within it, a prefetch is carried out. This is checked in step 10. If the conditions of steps 9 and 10 are both fulfilled, which is checked in step 11, a prefetch is carried out: the corresponding data is read from the main memory and transferred to the cache memory 3. Steps 9, 10 and 11 are carried out in the prefetch means 4, which has corresponding logic and software for this purpose. It should be pointed out that the line section value X can ultimately be set arbitrarily, that is, the size of the respective line section can be set as desired. If, for example, as previously in the prior art, the possibility of a prefetch is to exist at every processor access, the line section value X is set to S1, the line section then comprising all column sections. If no prefetch is ever to be carried out, the line section value X is set, for example, to 0.
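The combined check of steps 9, 10 and 11 can be sketched as follows; the set-based directory representation and all names are assumptions made for illustration:

```python
def should_prefetch(directory: set, tag: int, line: int,
                    column: int, x: int) -> bool:
    """Decide whether to initiate a prefetch of the sequentially following
    line (tag, line + 1)."""
    next_line_present = (tag, line + 1) in directory  # step 9: directory check
    in_line_section = column >= x                     # step 10: line section test
    # Step 11: prefetch only if the next line is absent AND the access
    # lies within the predetermined line section.
    return (not next_line_present) and in_line_section
```

In the example from the text, with X = S5 and the processor accessing column S6 of line Z5, the next line Z6 is absent from the directory and the access lies within the line section, so a prefetch is initiated.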
Claims (11)
- Method for operating a memory system (1) comprising a main memory (2) having a cache memory (3) assigned to it that is structured in address-based lines and into which data can be loaded from the main memory (2) and, if needed, read out by a processor (5), the method comprising:- the processor accessing data of a specific address in the cache memory (3) where specific data of a corresponding address from the main memory (2) is filed, and- checking during this access whether sequential data is filed at an address of the cache memory following the specific address,- if the sequential data is not filed in the cache memory (3), loading (11) the sequential data from the main memory into the cache memory (3) as part of a prefetch,
characterized in that the implementation of the prefetch further comprises: making a decision (10) that a prefetch will only be implemented when the processor (5) accesses a cache line section or a predetermined cache line section located within a cache line, wherein the predetermined cache line section or the cache line section is determined based on a line section address (X); dynamically adjusting the predetermined cache line section or cache line section by setting the line section address (X) based on a current status of operation of the processor (5). - Method according to claim 1,
characterized in that
the cache line section is defined by means of a predetermined cache line section address and comprises all the addresses of a line that are greater than or equal to the cache line section address, wherein the address within the line that the processor (5) accesses is compared with the cache line section address. - Method according to claim 2,
characterized in that
each line is divided into several line areas of a specific length, to each of which its own line area address is assigned, wherein the line area address is compared with the cache line section address. - Method according to claim 2 or 3,
characterized in that
the cache line section address is provided in the form of a numerical value, wherein a comparative numerical value is determined with the help of the address within the line or the line area address. - Method according to one of the previous claims,
characterized in that
several line sections of different lengths are determined, wherein the line section that is used to determine whether the implementation of a prefetch is indicated is selected based on the program section, of a program to be processed by the processor (5), in which the processor (5) is working at the time of the determination. - Memory system (1) for data, comprising:- a main memory (2);- a cache memory (3) structured in several lines,- a processor (5) configured to access, if necessary, data of a specific address of the cache memory (3), at which address specific data of a corresponding address was filed by the main memory (2), and- prefetch means (4) designed for testing, during the accessing of the data of the specific address of the cache memory (3), whether sequential data is filed at an address in the cache memory (3) following the specific address, wherein these prefetch means (4) are designed in such a way that, if the sequential data is not in the cache memory (3), they will load (11) the sequential data from the main memory into the cache memory (3) as part of a prefetch,
characterized in that
these prefetch means (4) are designed in such a way as to make a decision (10) that a prefetch will only be executed when the processor (5) accesses a cache line section or a predetermined cache line section located within a cache line, wherein the predetermined cache line section or the cache line section is determined based on a line section address (X),
these prefetch means (4) are furthermore designed to dynamically adjust the predetermined cache line section or cache line section, by setting the line section address (X) based on a current status of operation of the processor (5). - Memory system according to claim 6,
characterized in that
each line (Z1, ..., ZN)
of the cache memory (3) is divided into several line areas (S1, ..., SM) each having a separate line area address comparable with the line section address. - Memory system according to one of the claims 6 or 7,
characterized in that
several line section addresses are determined or can be determined for determining several line sections, wherein the line section address respectively used for the comparison is selected based on the program section, of a program being processed by the processor (5), at the time when the processor (5) is accessing the cache memory (3). - Memory system according to one of the claims 6 to 8,
characterized in that
the cache memory (3) comprises a register memory (12', 12", 12"', 12"") in which the prefetch data can be written by means of the prefetch means (4). - Memory system according to claim 9,
characterized in that
several register memories (12', 12", 12"', 12"") are provided, into which the prefetch data of different prefetch processes can respectively be written. - Memory system according to claim 10,
characterized in that
prefetch data to be newly written can be written into the register memory (12', 12", 12"', 12"") in such a way that the most recent prefetch data remains available, the oldest being overwritten.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE19939764A DE19939764A1 (en) | 1999-08-21 | 1999-08-21 | Method for operating a storage system and storage system |
DE19939764 | 1999-08-21 |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1079307A1 EP1079307A1 (en) | 2001-02-28 |
EP1079307B1 true EP1079307B1 (en) | 2014-10-01 |
Family
ID=7919203
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP00202908.0A Expired - Lifetime EP1079307B1 (en) | 1999-08-21 | 2000-08-14 | Method for operating a memory system as well as memory system |
Country Status (4)
Country | Link |
---|---|
US (1) | US6594731B1 (en) |
EP (1) | EP1079307B1 (en) |
JP (1) | JP4210024B2 (en) |
DE (1) | DE19939764A1 (en) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4093741B2 (en) * | 2001-10-03 | 2008-06-04 | シャープ株式会社 | External memory control device and data driven information processing device including the same |
US7260704B2 (en) * | 2001-11-30 | 2007-08-21 | Intel Corporation | Method and apparatus for reinforcing a prefetch chain |
US7162588B2 (en) * | 2002-08-23 | 2007-01-09 | Koninklijke Philips Electronics N.V. | Processor prefetch to match memory bus protocol characteristics |
JP2005322110A (en) * | 2004-05-11 | 2005-11-17 | Matsushita Electric Ind Co Ltd | Program converting device and processor |
US7634585B2 (en) * | 2005-11-04 | 2009-12-15 | Sandisk Corporation | In-line cache using nonvolatile memory between host and disk device |
DE102010027287A1 (en) * | 2010-07-16 | 2012-01-19 | Siemens Aktiengesellschaft | Method and device for checking a main memory of a processor |
US9135157B2 (en) * | 2010-11-22 | 2015-09-15 | Freescale Semiconductor, Inc. | Integrated circuit device, signal processing system and method for prefetching lines of data therefor |
CN104951340B (en) * | 2015-06-12 | 2018-07-06 | 联想(北京)有限公司 | A kind of information processing method and device |
US9542290B1 (en) | 2016-01-29 | 2017-01-10 | International Business Machines Corporation | Replicating test case data into a cache with non-naturally aligned data boundaries |
US10169180B2 (en) | 2016-05-11 | 2019-01-01 | International Business Machines Corporation | Replicating test code and test data into a cache with non-naturally aligned data boundaries |
US10055320B2 (en) * | 2016-07-12 | 2018-08-21 | International Business Machines Corporation | Replicating test case data into a cache and cache inhibited memory |
US10223225B2 (en) | 2016-11-07 | 2019-03-05 | International Business Machines Corporation | Testing speculative instruction execution with test cases placed in memory segments with non-naturally aligned data boundaries |
US10261878B2 (en) | 2017-03-14 | 2019-04-16 | International Business Machines Corporation | Stress testing a processor memory with a link stack |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4980823A (en) * | 1987-06-22 | 1990-12-25 | International Business Machines Corporation | Sequential prefetching with deconfirmation |
JPH06222990A (en) * | 1992-10-16 | 1994-08-12 | Fujitsu Ltd | Data processor |
1999
- 1999-08-21 DE DE19939764A patent/DE19939764A1/en not_active Withdrawn

2000
- 2000-08-14 EP EP00202908.0A patent/EP1079307B1/en not_active Expired - Lifetime
- 2000-08-16 JP JP2000246733A patent/JP4210024B2/en not_active Expired - Fee Related
- 2000-08-17 US US09/640,728 patent/US6594731B1/en not_active Expired - Lifetime
Also Published As
Publication number | Publication date |
---|---|
JP2001075866A (en) | 2001-03-23 |
EP1079307A1 (en) | 2001-02-28 |
DE19939764A1 (en) | 2001-02-22 |
US6594731B1 (en) | 2003-07-15 |
JP4210024B2 (en) | 2009-01-14 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): DE FR GB |
|
AX | Request for extension of the european patent |
Free format text: AL;LT;LV;MK;RO;SI |
|
17P | Request for examination filed |
Effective date: 20010828 |
|
AKX | Designation fees paid |
Free format text: DE FR GB |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: PHILIPS CORPORATE INTELLECTUAL PROPERTY GMBH Owner name: KONINKLIJKE PHILIPS ELECTRONICS N.V. |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: PHILIPS INTELLECTUAL PROPERTY & STANDARDS GMBH Owner name: KONINKLIJKE PHILIPS ELECTRONICS N.V. |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: NXP B.V. |
|
17Q | First examination report despatched |
Effective date: 20100706 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
INTG | Intention to grant announced |
Effective date: 20140606 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): DE FR GB |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D Free format text: NOT ENGLISH |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R081 Ref document number: 50016387 Country of ref document: DE Owner name: OCT CIRCUIT TECHNOLOGIES INTERNATIONAL LTD., IE Free format text: FORMER OWNERS: KONINKLIJKE PHILIPS ELECTRONICS N.V., EINDHOVEN, NL; PHILIPS CORPORATE INTELLECTUAL PROPERTY GMBH, 20099 HAMBURG, DE |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 50016387 Country of ref document: DE Effective date: 20141113 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 50016387 Country of ref document: DE |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed |
Effective date: 20150702 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 17 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R082 Ref document number: 50016387 Country of ref document: DE Representative=s name: GRUENECKER PATENT- UND RECHTSANWAELTE PARTG MB, DE Ref country code: DE Ref legal event code: R081 Ref document number: 50016387 Country of ref document: DE Owner name: OCT CIRCUIT TECHNOLOGIES INTERNATIONAL LTD., IE Free format text: FORMER OWNER: NXP B.V., EINDHOVEN, NL |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 18 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20170720 Year of fee payment: 18 Ref country code: GB Payment date: 20170719 Year of fee payment: 18 Ref country code: DE Payment date: 20170719 Year of fee payment: 18 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R119 Ref document number: 50016387 Country of ref document: DE |
|
GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 20180814 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20190301 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20180831 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20180814 |