CN85106711A - Memory management for microprocessor system - Google Patents

Memory management for microprocessor system

Info

Publication number
CN85106711A
CN85106711A, CN85106711.5A
Authority
CN
China
Prior art keywords
page
data
memory
address
microprocessor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
CN85106711.5A
Other languages
Chinese (zh)
Other versions
CN1008839B (en)
Inventor
John H. Crawford
Paul S. Ries
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corp
Publication of CN85106711A
Publication of CN1008839B
Legal status: Expired - Lifetime

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F7/00 - Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F12/14 - Protection against unauthorised use of memory or access to memory
    • G06F12/1416 - Protection against unauthorised use of memory or access to memory by checking the object accessibility, e.g. type of access defined by the memory independently of subject rights
    • G06F12/145 - Protection against unauthorised use of memory or access to memory by checking the object accessibility, the protection being virtual, e.g. for virtual blocks or segments before a translation mechanism
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 - Addressing or allocation; Relocation
    • G06F12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10 - Address translation
    • G06F12/1027 - Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A microprocessor architecture having an address translation unit that provides two levels of cache memory is described in the present invention. Segmentation registers and an associated segmentation table in main memory provide a first level of memory management that includes attribute bits for protection, priority and the like. A page cache memory, associated with a page directory and page tables in main memory, provides a second level of management with independent protection at the page level.

Description

Memory management for microprocessor system
The invention relates to the field of memory management, and in particular to address translation units used for memory management in microprocessor systems.
Many mechanisms for memory management are known. In some systems a larger (virtual) address is translated into a smaller physical address. In others, a smaller address is used to access a larger memory space, for example through bank switching. The present invention relates to the former class, in which a large virtual address is used to access a limited physical memory.
Memory management systems are also known to provide various protection mechanisms. For example, a system may prevent a user from writing into the operating system, or even from reading operating-system data or accessing external ports. As will be seen, the present invention provides a protection mechanism as part of a broader control scheme, one that assigns "attributes" to data at two distinct levels.
The closest prior art known to the applicant is described in United States Patent 4,442,484. That patent describes the memory management and protection mechanism implemented by the commercially available Intel 286 microprocessor. This microprocessor includes segmentation descriptor registers containing segment base addresses, limit information and attributes (for example, protection bits). Both the segment descriptor tables and the segment descriptor registers contain bits that determine privilege levels, the type of protection, and the like. These controls are described in detail in United States Patent 4,442,484.
One problem with the Intel 286 is that segment offsets are limited to 64K bytes. It also requires that a segment occupy a contiguous region of physical memory, which is not always easy to maintain. As will be seen, one advantage of the invented system is that the segment offset can be as large as the physical address space. The invented system nonetheless remains compatible with the earlier segmentation mechanism found in the Intel 286. Additional advantages, and the differences between the prior-art system discussed in the above patent and its commercial realization (the Intel 286 microprocessor), will be apparent from the detailed description of the present invention.
An improvement in a microprocessor system comprising a microprocessor and a data memory is described. The microprocessor includes a segmentation mechanism for translating a virtual memory address into a second (linear) memory address and attributes used to check and control data memory segments. The improvement of the present invention includes a page cache memory on the microprocessor which translates a first field of the linear address when a "hit" or match condition occurs. The data memory also stores page mapping data, in particular a page directory and a page table. If a hit does not occur in the page cache memory, the first field accesses the page directory and the page table. The output, whether from the page cache or from the page table, provides the physical base address of a page in memory. Another field of the linear address provides an offset within the page.
Both the page cache memory and the page mapping data in the data memory store signals representing attributes of the data in a particular page. These attributes include read and write protection, an indication of whether the page has previously been written, and other information. Importantly, this page-level protection provides a second level of control over data in memory, separate and distinct from the segment attributes.
Fig. 1 is a block diagram showing the overall architecture of the microprocessor in which the present invention is presently realized.
Fig. 2 is a block diagram illustrating the segmentation mechanism included in the microprocessor of Fig. 1.
Fig. 3 is a block diagram illustrating the page field mapping for a "hit" or match in the page cache memory.
Fig. 4 is a block diagram illustrating the page field mapping when no hit or match occurs in the page cache memory of Fig. 3. In this case the page directory and page table in main memory are used and are therefore shown in Fig. 4.
Fig. 5 is a diagram used to describe the attributes stored in the page directory, the page table and the page cache memory.
Fig. 6 is a block diagram illustrating the organization of the content addressable memory (CAM) and the data store included in the page cache memory.
Fig. 7 is an electrical schematic of a portion of the content addressable memory of Fig. 6.
Fig. 8 is an electrical schematic of the logic circuits associated with the detectors of Fig. 6.
A microprocessor system, and in particular a memory management mechanism for such a system, is described. In the following description numerous specific details, such as specific numbers of bits, are set forth in order to provide a thorough understanding of the present invention. It will be obvious to one skilled in the art, however, that the invention may be practiced without these specific details. In other instances, well-known structures are not shown in detail in order not to obscure the present invention unnecessarily.
In the presently preferred embodiment, the microprocessor system includes the microprocessor 10 of Fig. 1. This microprocessor is fabricated on a single silicon substrate using complementary metal-oxide-semiconductor (CMOS) processing. Any of many well-known CMOS processes may be used, and the invention may obviously be realized with other technologies, for example n-channel, bipolar, or silicon-on-sapphire (SOS) processing.
The memory management mechanism requires, under certain conditions, access to tables stored in main memory. A random-access memory (RAM) 13, which functions as the main memory of the system, is shown in Fig. 1. An ordinary RAM employing dynamic memories may be used.
As shown in Fig. 1, the microprocessor 10 provides 32-bit physical addresses, and the processor itself is a 32-bit machine. Other parts typically used in a microprocessor system, such as drivers, a math coprocessor, and the like, are not shown in Fig. 1.
The invented memory management uses both segmentation and paging. Segments are defined by a set of segment descriptor tables that are separate from the page tables used to describe the paging translation. The two mechanisms are completely separate and independent. A virtual address is translated into a physical address in two distinct steps using two different mapping mechanisms: a segmentation mechanism is used for the first translation step, and a paging mechanism for the second. The paging translation can be disabled, producing a one-step translation with segmentation only, which is compatible with the Intel 286.
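As an overview, the two-step flow just described can be sketched in C as follows. This is a minimal illustration, not the patent's own implementation; the names `segment_translate`, `page_translate` and the `paging_enabled` flag are assumptions, and the two stubs are detailed in later sketches.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative stubs; the two mechanisms are sketched in detail below. */
uint32_t segment_translate(uint16_t selector, uint32_t offset); /* step 1 */
uint32_t page_translate(uint32_t linear);                       /* step 2 */

/* Translate a 48-bit virtual address (16-bit selector + 32-bit offset).
 * When paging is disabled, the linear address is used directly as the
 * physical address, giving the single-step, 286-compatible behavior. */
uint32_t translate(uint16_t selector, uint32_t offset, bool paging_enabled)
{
    uint32_t linear = segment_translate(selector, offset);
    return paging_enabled ? page_translate(linear) : linear;
}
```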
(First translation) Segmentation maps a 48-bit virtual address into a 32-bit linear (intermediate) address. The 48-bit virtual address is composed of a 16-bit segment selector and a 32-bit offset within the segment. The 16-bit selector identifies the segment and is used to access an entry in the segment descriptor table. The segment descriptor entry contains the base address of the segment, the length (limit) of the segment and various segment attributes. The translation step adds the 32-bit offset in the virtual address to the segment base to obtain the 32-bit linear address. At the same time, the 32-bit offset in the virtual address is compared with the segment limit, and the access mode is checked against the segment attributes. If the 32-bit offset exceeds the segment limit, or if the access mode is not allowed by the segment attributes, a fault is generated and the addressing process is aborted.
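A minimal C sketch of this first (segmentation) step follows. The `struct segment_descriptor`, its field names, the `writable` attribute chosen as an example, and the boolean fault signalling are illustrative assumptions rather than the patent's actual structures.

```c
#include <stdint.h>
#include <stdbool.h>

struct segment_descriptor {
    uint32_t base;     /* segment base address           */
    uint32_t limit;    /* segment length (limit)         */
    bool     writable; /* example of a segment attribute */
};

/* Produces the 32-bit linear address, or reports a fault (modeled here as a
 * false return) when the offset exceeds the segment limit or the access
 * mode is not permitted by the segment attributes. */
bool segment_step(const struct segment_descriptor *d, uint32_t offset,
                  bool is_write, uint32_t *linear_out)
{
    if (offset > d->limit)           /* offset outside segment: fault  */
        return false;
    if (is_write && !d->writable)    /* access mode not allowed: fault */
        return false;
    *linear_out = d->base + offset;  /* base + offset = linear address */
    return true;
}
```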
(Second translation) Paging maps the 32-bit linear address into a 32-bit physical address using a two-level paging table, in a process described in detail below.
The two steps are completely independent. This allows a (large) segment to be composed of several pages, or a page to be composed of several (small) segments.
A segment may begin on any boundary and be of any length; it is not restricted to begin on a page boundary or to have a length that is an exact multiple of the page size. This allows each segment to describe a separately protected region of memory that begins at an arbitrary address and is of arbitrary length.
Segmentation can be used to gather several segments, each having its own protection attributes and length, into a single page. In that case, segmentation provides the protection attributes while paging provides a convenient method of mapping a group of related units (which must be protected separately) into physical memory.
Paging can be used to divide a very large segment into many small units for physical memory management. This allows a single identifier (the segment selector) and a single descriptor (the segment descriptor) to serve as a separately protected unit of memory, without requiring the use of many page descriptors. Within a segment, paging provides an extra level of mapping which allows a large segment to be mapped into separate pages that need not be contiguous in physical memory. In fact, paging allows a large segment to be mapped so that only a few of its pages reside in physical memory at any one time, with the remainder of the segment mapped onto disk. Paging also supports the definition of substructure within a large segment; for example, some pages of a large segment can be write-protected while other pages can be written.
Segmentation provides a very comprehensive protection model that operates on the "natural" units used by the programmer: pieces of arbitrary length in the linear address space. Paging provides the most convenient method of managing physical memory, including management of both main system memory and backing disk storage. The combination of the two methods in the present invention provides a very powerful and flexible memory protection model.
In Fig. 1 the microprocessor includes a bus interface unit 14. The bus unit includes buffers that permit the transmission of 32 address signals and the receiving and sending of 32 bits of data. Within the microprocessor, unit 14 communicates over the internal bus 19. The bus unit includes a pre-fetch unit with a pre-fetch queue for fetching instructions from the RAM 13; this unit communicates with the instruction decode unit 16. Queued instructions are processed in the execution unit 18, which includes an arithmetic and logic unit and a 32-bit register file. This unit and the decode unit communicate over the internal bus 19.
The present invention centers on the address translation unit 20. This unit provides two functions: one is associated with the segment descriptor registers and the other with a page descriptor cache memory. The segment registers are for the most part known in the prior art; even so, they are described in more detail in conjunction with Fig. 2. The page cache memory and its interaction with the page directory and page tables stored in main memory 13 are discussed in conjunction with Figs. 3-7; these elements form the basis of the present invention.
The segmentation unit of Fig. 1 receives a virtual address from the execution unit 18 and accesses the appropriate segmentation table information held in its registers. The registers contain the segment base address; this address, along with the offset from the virtual address, is coupled on lines 23 to the page unit.
Fig. 2 illustrates the access to the table in main memory that occurs when a segmentation register is loaded with the mapping information for a new segment. The segment field provides an index into the segment descriptor table in main memory 13. The table entry provides a base address and, in addition, attributes of the data in the segment. The offset is compared with the segment limit in comparator 27; the output of this comparator provides a fault signal. Adder 26, part of the microprocessor, combines the base and the offset to provide a "real" address on lines 31. This address can be used directly as a physical address or can be used by the paging unit of the microprocessor. This is done to provide compatibility with programs written for an earlier microprocessor (the Intel 286). For the Intel 286 the physical address space is 24 bits.
The segment attributes include details in the descriptor such as different privilege levels; their use is described in United States Patent 4,442,484.
The segmentation mechanism known in the prior art is indicated in Fig. 2 by the dotted line 28; the structure to the left of the dotted line is prior art.
The page field mapping block 30 comprises the page unit of Fig. 1 and its interaction with the page directory and page table stored in main memory; this block is shown in Figs. 3 through 7.
The segmentation mechanism uses shadow registers in the presently preferred embodiment; alternatively, a cache memory could be employed, as is done with the paging mechanism.
In Fig. 3 the page descriptor cache memory of the page unit 22 of Fig. 1 is shown within dotted line 22a. This memory comprises two arrays: a content addressable memory (CAM) 34 and a page data (base) store 35. Both memories are realized with static memory cells. The organization of memories 34 and 35 is described in conjunction with Fig. 6. The particular circuitry used for the CAM 34 has unique masking characteristics, described in conjunction with Figs. 7 and 8.
The linear address from the segmentation unit 21 is coupled to the page unit 22 of Fig. 1. As shown in Fig. 3, this linear address comprises two fields: a 20-bit page information field and a 12-bit displacement field. In addition, four page attribute bits are provided by microcode. The 20-bit page information field is compared with the contents of the CAM 34. Similarly, the four attribute bits ("dirty", "valid", "U/S" and "W/R") must also match those in the CAM before a hit occurs. (An exception occurs when "masking" is used, as will be discussed.)
For a hit condition, the store 35 provides a 20-bit base word. This word is combined with the 12-bit displacement field of the linear address, as represented by the summer 36 of Fig. 3, and the resultant physical address selects a 4K-byte page frame in main memory 13.
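A sketch of the hit path in C follows. It assumes, purely for illustration, that the 20-bit page base returned by store 35 is concatenated with the 12-bit displacement field; the helper names are not from the patent.

```c
#include <stdint.h>

/* Split the 32-bit linear address into its two fields. */
static inline uint32_t page_info(uint32_t linear)    { return linear >> 12;    } /* upper 20 bits */
static inline uint32_t displacement(uint32_t linear) { return linear & 0xFFFu; } /* lower 12 bits */

/* On a page-cache hit, store 35 supplies a 20-bit page base; combining it
 * with the 12-bit displacement selects a byte within a 4K-byte page frame. */
static inline uint32_t physical_on_hit(uint32_t page_base_20, uint32_t linear)
{
    return (page_base_20 << 12) | displacement(linear);
}
```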
A page directory 13a and a page table 13b are stored in main memory 13 (see Fig. 4). The base address of the page directory is provided by the microprocessor and is shown in Fig. 4 as the page directory base 38. Ten bits of the page information field index into the page directory (after scaling by a factor of four, since each entry is a 32-bit word), as indicated by the summer 40 in Fig. 4. The page directory provides a 32-bit word, 20 bits of which serve as the base of a page table. The other ten bits of the page information field similarly index into the page table (again scaled by a factor of four), as indicated by summer 41. The page table also provides a 32-bit word, 20 bits of which are the page base of the physical address. This page base address is combined with the 12-bit displacement field, as indicated by summer 42, to provide the 32-bit physical address.
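The miss path can be summarized by the two-level walk sketched below. Entry widths follow the text (32-bit entries, 20-bit bases); the flat `memory[]` model, the helper names, and the choice of which ten bits index the directory versus the table (high bits first) are illustrative assumptions.

```c
#include <stdint.h>

extern uint8_t memory[];   /* flat model of main memory 13 (illustrative) */

/* Read a 32-bit little-endian word from the memory model. */
static uint32_t read32(uint32_t addr)
{
    return (uint32_t)memory[addr]           |
           (uint32_t)memory[addr + 1] << 8  |
           (uint32_t)memory[addr + 2] << 16 |
           (uint32_t)memory[addr + 3] << 24;
}

/* Two-level translation: ten bits index the page directory, ten bits index
 * the page table (each scaled by 4 since entries are 32-bit words), and
 * the remaining 12 bits are the offset within the 4K page. */
uint32_t page_walk(uint32_t directory_base, uint32_t linear)
{
    uint32_t dir_index   = (linear >> 22) & 0x3FFu;         /* top 10 bits  */
    uint32_t table_index = (linear >> 12) & 0x3FFu;         /* next 10 bits */
    uint32_t offset      =  linear        & 0xFFFu;         /* low 12 bits  */

    uint32_t pde        = read32(directory_base + dir_index * 4);
    uint32_t table_base = pde & 0xFFFFF000u;                /* upper 20 bits */

    uint32_t pte       = read32(table_base + table_index * 4);
    uint32_t page_base = pte & 0xFFFFF000u;                 /* upper 20 bits */

    return page_base | offset;                              /* physical addr */
}
```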
Five of the low-order 12 bits of the page directory and page table entries are used for the attributes "dirty", "accessed", "U/S", "R/W" and "present". These are discussed in more detail in conjunction with Fig. 5. The remaining bits of this field are unassigned.
The stored attributes from the page directory and page table, together with information from the linear address, are coupled to the control logic circuit 75 of Fig. 4. Portions of this logic are shown in the later figures and are discussed in conjunction with those figures.
The page directory word, the page table word and the CAM word appear again in Fig. 5. The four protection/control attributes assigned to the page directory word are listed within bracket 43. The same four attributes plus one additional attribute are used for the page table word and are shown within bracket 44. The four attributes used for the CAM word are shown within bracket 45.
The attributes are used for the following purposes (an illustrative bit-layout sketch follows this list):
1. "Dirty". This bit indicates whether a page has been written; it is changed when the page is written. The bit is used, for example, to notify the operating system that an entire page is "clean". It is stored in the page table and in the CAM (not in the page directory). The processor sets this bit in the page table when a page is written.
2. "Accessed". This bit is stored only in the page directory and page table (not in the CAM) and is used to indicate that a page has been accessed. Once a page is accessed, the bit is changed in memory by the processor. Unlike the dirty bit, this bit indicates that a page has been accessed whether for writing or for reading.
3. U/S. This bit indicates whether the contents of the page are accessible to both user and supervisor programs (binary 1) or to supervisor programs only (binary 0).
4. R/W. This read/write protection bit must be a binary 1 to allow a user-level program to write into the page.
5. "Present". In the page table this bit indicates whether the associated page is present in physical memory. In the page directory it indicates whether the associated page table is present in physical memory.
6. "Valid". This bit is stored only in the CAM and is used to indicate whether the contents of the CAM entry are valid. The bit is set to a first state at initialization and is changed when a valid CAM word is loaded.
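For illustration only, the attributes just listed might be represented as single-bit flags as in the sketch below. The specific bit positions are assumptions; the text above does not assign them.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative bit assignments (not specified in the text above). */
#define PG_PRESENT  (1u << 0)  /* page (or page table) present in memory      */
#define PG_RW       (1u << 1)  /* 1 = user-level program may write            */
#define PG_US       (1u << 2)  /* 1 = user and supervisor, 0 = supervisor only */
#define PG_ACCESSED (1u << 3)  /* set when the page is read or written         */
#define PG_DIRTY    (1u << 4)  /* set when the page is written                 */
/* The "valid" bit exists only in the CAM entry, not in the table entries.     */

static inline bool page_present(uint32_t entry) { return (entry & PG_PRESENT) != 0; }
static inline bool page_dirty(uint32_t entry)   { return (entry & PG_DIRTY)   != 0; }
```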
The five bits from the page directory and page table are coupled to the control logic circuit 75 so that the appropriate fault signals are provided within the microprocessor.
The user-program/supervisor-program bits from the page directory and page table are logically ANDed, as represented by gate 46, to provide the U/S bit stored in the CAM 34 of Fig. 3. Similarly, the read/write bits from the page directory and page table are logically ANDed by gate 47 to provide the W/R bit stored in the CAM. The "dirty" bit from the page table is stored in the CAM. These gates are part of the control logic 75 of Fig. 4.
The attributes stored in the CAM are checked "automatically" because they are handled as part of the address and are compared with the four bits provided by microcode. A fault condition can occur even when a valid page base is stored in the CAM, for example if the linear address indicates a "user program" write cycle to a page with R/W = 0.
The ANDing of the U/S bits from the page directory and page table assures that the "worst case" is stored in the cache memory. Similarly, the ANDing of the R/W bits provides the worst case for the cache memory.
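A sketch of this "worst case" combination, using the illustrative flag positions introduced earlier (again an assumption, not the patent's encoding):

```c
#include <stdint.h>

#define PG_RW    (1u << 1)   /* illustrative positions, as in the earlier sketch */
#define PG_US    (1u << 2)
#define PG_DIRTY (1u << 4)

/* ANDing the U/S and R/W bits of the directory and table entries keeps the
 * more restrictive ("worst case") setting in the cached entry; the dirty
 * bit is taken from the page table alone. */
static inline uint32_t cached_attributes(uint32_t pde, uint32_t pte)
{
    return (pde & pte & (PG_US | PG_RW)) | (pte & PG_DIRTY);
}
```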
The CAM 34, shown in Fig. 6, is organized into 8 sets of 4 words each. Twenty-one bits (17 address bits and 4 attribute bits) are used to seek a match in this array. The four compare lines from the four memory words in each set are coupled to a detector. For example, the compare lines of the four words of set 1 are coupled to detector 53. Similarly, the compare lines of the four words of sets 2 through 8 are coupled to their respective detectors. The compare lines are sensed by the detectors to determine which word, if any, in the CAM array matches the 21-bit input. Each detector includes "hard-wired" logic which enables one of the detectors as a function of the states of 3 bits of the 20-bit page information field coupled to each detector. (Note that the other 17 bits of the page information field are coupled to the CAM array.)
For purposes of explanation, eight detectors are implied by Fig. 6. In the present implementation only a single detector is used, with the three bits selecting which group of four lines is coupled to the detector. The detector itself is shown in Fig. 8.
The data store section of the cache memory is organized into the four arrays shown as arrays 35a through 35d. The data words corresponding to each set of the CAM are distributed with one word in each of the four arrays. For example, the data word (base address) selected by a hit on word 1 of set 1 is in array 35a, the data word selected by a hit on word 2 of set 1 is in array 35b, and so on. The three bits used to select a detector are also used to select a word in each array. Thus a word is selected from each of the four arrays simultaneously. The final selection of one word from the arrays is made through the multiplexer 55, which is controlled by the four compare lines through the detector.
When the cache memory is accessed, the relatively slow compare process begins using the 21 bits. The additional three bits immediately select a set of four lines and ready the detector to sense the voltage drop on the compare lines. (As will be discussed, all the compare (row) lines are precharged; the selected (hit) line remains charged while non-matching lines discharge.) Simultaneously, the four words of the selected set are accessed in arrays 35a through 35d. If and when a match occurs, the detector identifies the word within the set, and this information is passed to the multiplexer 55 to allow selection of the data word. This architecture improves the access time of the cache memory.
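The set/way organization can be modeled in software as a small 8-set, 4-way associative lookup. The sketch below models the behavior described (3 bits of the page field select a set; the remaining 17 bits plus the 4 attribute bits form the tag that must match); it is an illustrative model, not the circuit, and the choice of the low-order bits as the set-select bits is an assumption.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_SETS 8
#define NUM_WAYS 4

struct tlb_entry {
    bool     valid;
    uint32_t tag;        /* 17 address bits plus 4 attribute bits */
    uint32_t page_base;  /* 20-bit page base from the data arrays */
};

static struct tlb_entry tlb[NUM_SETS][NUM_WAYS];

/* Look up a 20-bit page-information field plus 4 attribute bits.
 * Three bits of the page field pick the set; the other 17 bits and the
 * attribute bits must match the stored tag for a hit. */
bool tlb_lookup(uint32_t page_info20, uint32_t attr4, uint32_t *page_base_out)
{
    uint32_t set = page_info20 & 0x7u;                          /* 3 set-select bits (assumed low bits) */
    uint32_t tag = ((page_info20 >> 3) << 4) | (attr4 & 0xFu);  /* 17 + 4 bits */

    for (int way = 0; way < NUM_WAYS; way++) {
        const struct tlb_entry *e = &tlb[set][way];
        if (e->valid && e->tag == tag) {
            *page_base_out = e->page_base;                      /* hit */
            return true;
        }
    }
    return false;                                               /* miss: walk the tables */
}
```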
Fig. 7 shows the 21 bits coupled to the CAM array: 17 bits are coupled to the complement generator and override circuit 56, and the 4 attribute bits are coupled to the VUDW logic 57. The 3 detector-selection bits described in connection with Fig. 6 are not shown in Fig. 7.
Circuit 56 generates true and complement signals for each address signal and couples them onto pairs of parallel lines in the CAM array, for example lines 59 and 60. Similarly, the VUDW logic 57 generates true and complement binary signals for the attribute bits and couples them onto their pairs of parallel lines in the array. Lines such as 59 and 60 are duplicated for each true/complement pair (that is, there are 21 pairs of bit and bit-complement lines).
Each of the 32 rows in the CAM array includes a pair of parallel lines, for example lines 68 and 70. An ordinary static memory cell, for example cell 67, is coupled between each pair of bit lines (columns) and is coupled to a pair of row lines. In the presently preferred embodiment, the storage cells are ordinary flip-flop static cells employing p-channel transistors. One line of each row pair (line 70) is used to couple the cell to the bit lines when data is written into the array. Otherwise, the contents of the cell are compared with the data on the column lines, and the result of the comparison is coupled onto the hit line 68. The comparison is performed by a comparator associated with each cell. The comparator comprises n-channel transistors 61-64. Each pair of comparator transistors, for example transistors 61 and 62, is coupled between one side of the cell and the bit line on the opposite side.
Assume that the data stored in cell 67 is such that the node of the cell closest to bit line 59 is high. When the contents of the CAM are to be examined, the hit line 68 is first precharged through transistor 69. The signals coupled to the CAM are then placed on the column lines. Assume first that line 59 is high. Transistor 62 does not conduct because line 60 is low, and transistor 63 does not conduct because the side of the cell to which it is connected is low. Under these conditions line 68 does not discharge, indicating a match at that cell. The hit line 68 provides an ANDing of the comparisons that occur along it; if a match does not occur, one or more of the comparators will cause the hit line to discharge.
During precharge, circuits 56 and 57 generate an override signal which forces all of the column lines (both bit and bit-complement) low. This prevents the comparators from discharging the hit lines before the comparison begins.
It should be noted that the comparators test for a "binary one" condition and, in effect, ignore a "binary zero" condition. That is, if the gate of transistor 64 is high (line 59 high), transistors 63 and 64 control the comparison; similarly, if bit line 60 is high, transistors 61 and 62 control the comparison. This characteristic of the comparators allows a cell to be ignored. Thus, when a word is coupled to the CAM, some bits can be masked out of the comparison by forcing both the bit line and the bit-complement line low, which makes it appear that the contents of the cell match the condition on the column lines. The VUDW logic 57 exploits this characteristic.
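This masking behavior, in which forcing both the bit line and its complement low makes a cell match regardless of its contents, is equivalent to a per-bit "don't care" in software. The sketch below models the effect with a mask word; it illustrates the behavior, not the transistor-level circuit.

```c
#include <stdint.h>
#include <stdbool.h>

/* Compare a stored word with a search key, ignoring any bit whose mask bit
 * is 0.  Masked bits behave as if both the bit line and its complement were
 * forced low: the cell "matches" no matter what it stores. */
static inline bool cam_match(uint32_t stored, uint32_t key, uint32_t care_mask)
{
    return ((stored ^ key) & care_mask) == 0;
}

/* Example: in supervisor mode the U/S attribute can be dropped from the
 * comparison simply by clearing its bit in care_mask. */
```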
Microcode signals coupled to logic 57 cause selected attribute bit lines and their complement lines to be forced low as a function of the microcode bits. This causes the corresponding attribute to be ignored. This feature is used, for example, to ignore the U/S bit in supervisor mode; that is, supervisor mode may access user data. Similarly, the read/write bit can be ignored on reads or when supervisor mode is active, and the dirty bit is ignored on reads. (The feature is not used for the valid bit.)
When the attribute bits are stored in main memory they can be accessed and examined, and logic can be used to control access, for example according to the 1 or 0 state of the U/S bit. With the cache memory, however, such discrete logic is not employed. Forcing both the bit and bit-complement lines low in effect provides this additional logic by allowing a match (or preventing a fault) even when the bit patterns of the attribute bits do not match.
The detector of Fig. 6 is shown in Fig. 8 and comprises a plurality of NOR gates, for example gates 81, 82, 83 and 84. Three of the hit lines from the selected group of CAM lines are coupled to gate 81; these are shown as lines A, B and C. A different combination of the lines is coupled to each of the other NOR gates; for example, NOR gate 84 receives hit lines D, A and B. The output of each NOR gate provides one input to a corresponding NAND gate, for example NAND gate 86. One hit line is also coupled as an input to each NAND gate; this line is the one of the four lines (A, B, C, D) that is not coupled to the associated NOR gate, and it is the hit line of the set entry to be selected. For example, gate 86 selects the entry associated with hit line D since, for NOR gate 81, hit line D is coupled to NAND gate 86. Similarly, for NAND gate 90, hit line C and the output of gate 84 are inputs to the gate. An enable-read signal is also coupled to each NAND gate to disable this logic output during writes. The outputs of the NAND gates, for example line 87, are used to control the multiplexer 55 of Fig. 6; in practice, the signals from the NAND gates, such as the signal on line 87, control the multiplexer through p-channel transistors. For purposes of illustration, an additional inverter 88 is shown along with output line 89.
An advantage of this detector is that it allows precharged lines to be used in the multiplexer 55. A static array could be employed instead, but that would require more power. The arrangement shown in Fig. 8 keeps the outputs of the inverters in the same state until the voltage on one of the hit lines drops. When that occurs, only a single output line drops, allowing the multiplexer to select the correct word.
Thus, a unique address translation unit has been described which uses two levels of high-speed memory, one level for segmentation and one level for paging. Independent data attribute control is provided at each level.

Claims (25)

1. In a microprocessor system which includes a microprocessor and a data memory, where the microprocessor has a segmentation mechanism for translating a virtual memory address into a second memory address and for controlling data as a function of attributes, an improvement comprising:
a page cache memory integral with said microprocessor for receiving a first field of said second memory address and for comparing it with the contents of said page cache memory so as to provide a second field under certain conditions;
said data memory including memory for page mapping data, said first field of said second memory address being coupled to said data memory to select a third field from said page mapping data when said certain conditions are not satisfied in said page cache memory;
said microprocessor system including circuitry for combining one of said second and third fields with an offset field from said first address so as to provide a physical address for said data memory;
whereby the physical addressability of said data memory is improved.
2. The improvement defined by claim 1 wherein said page cache memory and said memory for page mapping data include information on attributes of respective memory pages.
3. The improvement defined by claim 2 wherein said memory for page mapping data comprises at least one page directory and at least one page table.
4. The improvement defined by claim 3 wherein each of said page directory and said page table stores attributes for respective ones of said memory pages.
5. The improvement defined by claim 4 wherein at least some of said attributes stored in said page directory and said page table are logically combined and stored in said page cache memory.
6. The improvement defined by claim 5 wherein said microprocessor provides a page directory base for said page directory.
7. The improvement defined by claim 6 wherein a first portion of said first field provides an index from said page directory base to a storage location in said page directory.
8. The improvement defined by claim 7 wherein each said storage location in said page directory stores a page table base, and wherein a second portion of said first field provides an index into said page table to a page table storage location in said data memory.
9. The improvement defined by claim 8 wherein each said storage location in said page table provides a base for a page in said data memory.
10. The improvement defined by claim 2 wherein said page cache memory comprises a content addressable memory (CAM) and a page base memory, the output of said CAM selecting a page base for said data memory from said page base memory.
11. The improvement defined by claim 10 wherein said CAM stores attributes for respective data memory pages.
12. The improvement defined by claim 11 wherein said CAM includes means for selectively masking at least one of said attributes during said comparison.
13. In a microprocessor system, an improved memory management arrangement comprising:
a microprocessor having a segmentation mechanism for translating a virtual memory address into a second memory address and attributes for checking data memory segments;
a data memory coupled to said microprocessor;
said microprocessor including a page cache memory integral with said microprocessor for receiving a first field of said second memory address and for comparing it with the contents of said page cache memory so as to provide a second field under certain conditions;
said data memory including memory for page mapping data, said first field of said second memory address being coupled to said data memory to select a third field from said page mapping data when said certain conditions are not satisfied in said page cache memory;
said microprocessor system including circuitry for combining one of said second and third fields with an offset field from said first address so as to provide a physical address for said data memory;
whereby the physical addressability of said data memory is improved.
14. The improvement defined by claim 13 wherein said segmentation mechanism comprises:
a segment descriptor register integral with said microprocessor for providing a segment base; and
said data memory including a segment descriptor table accessed by a segment field of said first address.
15. The improvement defined by claim 14 wherein said page cache memory and said memory for said page mapping data include information on attributes of respective memory pages.
16. The improvement defined by claim 15 wherein said memory for said page mapping data comprises a page directory and a page table.
17. The improvement defined by claim 16 wherein each of said page directory and said page table stores said attributes for respective ones of said memory pages.
18. The improvement defined by claim 17 wherein at least some of said attributes stored in said page directory and said page table are logically combined and stored in said page cache memory.
19. An address translation unit forming part of a microprocessor for operation with a data memory, said unit comprising:
a segment descriptor store for receiving a virtual address and for providing a segment base;
said microprocessor providing an address for said data memory so as to permit the addressing of a segment descriptor table in said data memory, said segment descriptor table providing said segment base address;
said microprocessor using said segment base address and a portion of said virtual address to provide a second memory address;
a page cache memory for receiving a first field of said second memory address and for comparing it with the contents of said page cache memory so as to provide a second field under certain second conditions;
said microprocessor, when said second conditions are not satisfied, providing said first field to a page data table in said data memory so as to provide said second field;
said second field providing a page base for said data memory;
whereby the physical addressability of said data memory is improved.
20. The unit defined by claim 19 wherein said segment descriptor store stores segment data attributes and wherein said page cache memory stores page data attributes.
21. A content addressable memory (CAM) comprising:
a plurality of buffers, each receiving a respective first signal and providing said respective first signal and a respective second signal, each said second signal being the complement of its respective first signal;
a plurality of generally parallel line pairs, each pair coupled so as to receive one of said respective first and second signals;
a plurality of storage cells coupled between the lines of respective ones of said pairs, said cells being arranged in rows generally perpendicular to said line pairs;
a plurality of row compare lines, one coupled to each said row of cells;
a plurality of comparators, one associated with each of said storage cells and coupled between its respective line pair and one of said compare lines, each said comparator comparing a binary state stored in the respective storage cell with said respective first and second signals;
loading means for loading data from said line pairs into said cells;
each said comparator being disabled when both lines of its respective pair remain at a predetermined binary state;
whereby, by causing at least some of said buffers to provide said predetermined binary state for their respective first and second signals, selected ones of said cells can be ignored for purposes of said comparison.
22. The CAM defined by claim 21 wherein each said row compare line is a precharged line.
23. The CAM defined by claim 22 including a storage memory comprising a plurality of sections, wherein data is accessed simultaneously in all of said sections through said lines and an output from one of said sections is selected.
24. The CAM defined by claim 23 including detectors each coupled to a predetermined number of said compare lines, each said detector detecting which of said predetermined number of lines remains charged.
25. The CAM defined by claim 24 wherein the selection of the output from one of said sections is made by said detectors.
CN85106711A 1985-06-13 1985-09-06 Memory management for microprocessor system Expired CN1008839B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US74438985A 1985-06-13 1985-06-13
USUSSN06/744,389 1985-06-13
US744,389 1985-06-13

Publications (2)

Publication Number Publication Date
CN85106711A true CN85106711A (en) 1987-02-04
CN1008839B CN1008839B (en) 1990-07-18

Family

ID=24992533

Family Applications (1)

Application Number Title Priority Date Filing Date
CN85106711A Expired CN1008839B (en) 1985-06-13 1985-09-06 Storage management of microprocessing system

Country Status (8)

Country Link
JP (1) JPH0622000B2 (en)
KR (1) KR900005897B1 (en)
CN (1) CN1008839B (en)
DE (1) DE3618163C2 (en)
FR (1) FR2583540B1 (en)
GB (2) GB2176918B (en)
HK (1) HK53590A (en)
SG (1) SG34090G (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1068687C (en) * 1993-01-20 2001-07-18 联华电子股份有限公司 Dynamic allocation method storage with stored multi-stage pronunciation
CN100367242C (en) * 2004-10-22 2008-02-06 富士通株式会社 System and method for providing a way memoization in a processing environment
CN100390756C (en) * 2001-08-15 2008-05-28 智慧第一公司 Virtual set high speed buffer storage for reorientation of stored data
CN100445964C (en) * 2002-12-27 2008-12-24 英特尔公司 Mechanism for post remapping virtual machine storage page
CN102789429A (en) * 2007-06-01 2012-11-21 英特尔公司 Virtual to physical address translation instruction returning page attributes
CN101663644B (en) * 2007-04-19 2013-03-20 国际商业机器公司 Apparatus and method for handling exception signals in a computing system
CN110537192A (en) * 2017-04-28 2019-12-03 阿诺特尔布莱恩公司 The automatic method and associated apparatus of non-volatile memories, retrieval and management are carried out to message/label association and label/message relating using maximum likelihood
CN111354406A (en) * 2018-12-20 2020-06-30 爱思开海力士有限公司 Memory device, operating method thereof, and memory system including the same

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1988007721A1 (en) * 1987-04-02 1988-10-06 Unisys Corporation Associative address translator for computer memory systems
US5251308A (en) * 1987-12-22 1993-10-05 Kendall Square Research Corporation Shared memory multiprocessor with data hiding and post-store
US5055999A (en) * 1987-12-22 1991-10-08 Kendall Square Research Corporation Multiprocessor digital data processing system
US5226039A (en) * 1987-12-22 1993-07-06 Kendall Square Research Corporation Packet routing switch
US5341483A (en) * 1987-12-22 1994-08-23 Kendall Square Research Corporation Dynamic hierarchial associative memory
US5761413A (en) 1987-12-22 1998-06-02 Sun Microsystems, Inc. Fault containment system for multiprocessor with shared memory
US5313647A (en) * 1991-09-20 1994-05-17 Kendall Square Research Corporation Digital data processor with improved checkpointing and forking
CA2078312A1 (en) 1991-09-20 1993-03-21 Mark A. Kaufman Digital data processor with improved paging
CA2078315A1 (en) * 1991-09-20 1993-03-21 Christopher L. Reeve Parallel processing apparatus and method for utilizing tiling
US5895489A (en) * 1991-10-16 1999-04-20 Intel Corporation Memory management system including an inclusion bit for maintaining cache coherency
GB2260629B (en) * 1991-10-16 1995-07-26 Intel Corp A segment descriptor cache for a microprocessor
EP0613090A1 (en) * 1993-02-26 1994-08-31 Siemens Nixdorf Informationssysteme Aktiengesellschaft Method for checking the admissibility of direct memory accesses in a data processing systems
US5548746A (en) * 1993-11-12 1996-08-20 International Business Machines Corporation Non-contiguous mapping of I/O addresses to use page protection of a process
US5590297A (en) * 1994-01-04 1996-12-31 Intel Corporation Address generation unit with segmented addresses in a mircroprocessor
KR100406924B1 (en) * 2001-10-12 2003-11-21 삼성전자주식회사 Content addressable memory cell
US7689485B2 (en) 2002-08-10 2010-03-30 Cisco Technology, Inc. Generating accounting data based on access control list entries
US7149862B2 (en) 2002-11-18 2006-12-12 Arm Limited Access control in a data processing apparatus
GB2396930B (en) 2002-11-18 2005-09-07 Advanced Risc Mach Ltd Apparatus and method for managing access to a memory
GB2396034B (en) 2002-11-18 2006-03-08 Advanced Risc Mach Ltd Technique for accessing memory in a data processing apparatus
WO2004046934A2 (en) 2002-11-18 2004-06-03 Arm Limited Secure memory for protecting against malicious programs
US7171539B2 (en) 2002-11-18 2007-01-30 Arm Limited Apparatus and method for controlling access to a memory
EP1654657A4 (en) * 2003-07-29 2008-08-13 Cisco Tech Inc Force no-hit indications for cam entries based on policy maps
KR101671494B1 (en) 2010-10-08 2016-11-02 삼성전자주식회사 Multi Processor based on shared virtual memory and Method for generating address translation table

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA784373A (en) * 1963-04-01 1968-04-30 W. Bremer John Content addressed memory system
GB1281387A (en) * 1969-11-22 1972-07-12 Ibm Associative store
US3761902A (en) * 1971-12-30 1973-09-25 Ibm Functional memory using multi-state associative cells
GB1457423A (en) * 1973-01-17 1976-12-01 Nat Res Dev Associative memories
GB1543736A (en) * 1976-06-21 1979-04-04 Nat Res Dev Associative processors
US4376297A (en) * 1978-04-10 1983-03-08 Signetics Corporation Virtual memory addressing device
GB1595740A (en) * 1978-05-25 1981-08-19 Fujitsu Ltd Data processing apparatus
US4377855A (en) * 1980-11-06 1983-03-22 National Semiconductor Corporation Content-addressable memory
GB2127994B (en) * 1982-09-29 1987-01-21 Apple Computer Memory management unit for digital computer
US4442482A (en) * 1982-09-30 1984-04-10 Venus Scientific Inc. Dual output H.V. rectifier power supply driven by common transformer winding
USRE37305E1 (en) * 1982-12-30 2001-07-31 International Business Machines Corporation Virtual memory address translation mechanism with controlled data persistence

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1068687C (en) * 1993-01-20 2001-07-18 联华电子股份有限公司 Dynamic allocation method storage with stored multi-stage pronunciation
CN100390756C (en) * 2001-08-15 2008-05-28 智慧第一公司 Virtual set high speed buffer storage for reorientation of stored data
CN100445964C (en) * 2002-12-27 2008-12-24 英特尔公司 Mechanism for post remapping virtual machine storage page
CN100367242C (en) * 2004-10-22 2008-02-06 富士通株式会社 System and method for providing a way memoization in a processing environment
CN101663644B (en) * 2007-04-19 2013-03-20 国际商业机器公司 Apparatus and method for handling exception signals in a computing system
US9164917B2 (en) 2007-06-01 2015-10-20 Intel Corporation Linear to physical address translation with support for page attributes
US8799620B2 (en) 2007-06-01 2014-08-05 Intel Corporation Linear to physical address translation with support for page attributes
US9158703B2 (en) 2007-06-01 2015-10-13 Intel Corporation Linear to physical address translation with support for page attributes
CN102789429A (en) * 2007-06-01 2012-11-21 英特尔公司 Virtual to physical address translation instruction returning page attributes
US9164916B2 (en) 2007-06-01 2015-10-20 Intel Corporation Linear to physical address translation with support for page attributes
CN102789429B (en) * 2007-06-01 2016-06-22 英特尔公司 The virtual address of page attributes is to the conversion of physical address
US11074191B2 (en) 2007-06-01 2021-07-27 Intel Corporation Linear to physical address translation with support for page attributes
CN110537192A (en) * 2017-04-28 2019-12-03 阿诺特尔布莱恩公司 The automatic method and associated apparatus of non-volatile memories, retrieval and management are carried out to message/label association and label/message relating using maximum likelihood
CN110537192B (en) * 2017-04-28 2023-05-26 阿诺特尔布莱恩公司 Associative memory storage unit, device and method
CN111354406A (en) * 2018-12-20 2020-06-30 爱思开海力士有限公司 Memory device, operating method thereof, and memory system including the same
CN111354406B (en) * 2018-12-20 2023-08-29 爱思开海力士有限公司 Memory device, method of operating the same, and memory system including the same

Also Published As

Publication number Publication date
GB2176920A (en) 1987-01-07
GB2176918B (en) 1989-11-01
JPH0622000B2 (en) 1994-03-23
SG34090G (en) 1990-08-03
KR870003427A (en) 1987-04-17
FR2583540A1 (en) 1986-12-19
GB8519991D0 (en) 1985-09-18
KR900005897B1 (en) 1990-08-13
DE3618163C2 (en) 1995-04-27
CN1008839B (en) 1990-07-18
GB2176918A (en) 1987-01-07
HK53590A (en) 1990-07-27
JPS61286946A (en) 1986-12-17
GB2176920B (en) 1989-11-22
FR2583540B1 (en) 1991-09-06
GB8612679D0 (en) 1986-07-02
DE3618163A1 (en) 1986-12-18

Similar Documents

Publication Publication Date Title
CN85106711A (en) Memory management for microprocessor system
EP0650124B1 (en) Virtual memory computer system address translation mechanism that supports multiple page sizes
US4972338A (en) Memory management for microprocessor system
CN102792285B (en) For the treatment of the apparatus and method of data
KR920005280B1 (en) High speed cache system
CN1153145C (en) Method and apparatus for preloading different default address translation attributes
US5123101A (en) Multiple address space mapping technique for shared memory wherein a processor operates a fault handling routine upon a translator miss
US3761881A (en) Translation storage scheme for virtual memory system
CN1118027C (en) Memory access protection
US4589092A (en) Data buffer having separate lock bit storage array
EP0095033A2 (en) Set associative sector cache
JPH04319747A (en) Address converting mechanism
JP2002073412A (en) Access method to memory and memory
US4254463A (en) Data processing system with address translation
GB2365167A (en) Virtual address aliasing architecture
US5173872A (en) Content addressable memory for microprocessor system
US4821171A (en) System of selective purging of address translation in computer memories
EP0229932A2 (en) High-capacity memory for multiprocessor systems
JPH07120312B2 (en) Buffer memory controller
US4580217A (en) High speed memory management system and method
US11755211B2 (en) Overhead reduction in data transfer protocol for NAND memory
US5408674A (en) System for checking the validity of two byte operation code by mapping two byte operation codes into control memory in order to reduce memory size
EP0055579A2 (en) Cache memories with double word access
US6226731B1 (en) Method and system for accessing a cache memory within a data-processing system utilizing a pre-calculated comparison array
JP2000284996A (en) Memory managing device and its method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C13 Decision
C14 Grant of patent or utility model
C17 Cessation of patent right