US9183150B2 - Memory sharing by processors - Google Patents

Memory sharing by processors

Info

Publication number
US9183150B2
Authority
US
United States
Prior art keywords
processor
interfaces
request
memory
control unit
Prior art date
Legal status
Expired - Fee Related, expires
Application number
US13/707,801
Other versions
US20130159632A1 (en)
Inventor
Victoria Caparros Cabezas
Rik Jongerius
Martin L. Schmatz
Phillip Stanley-Marbell
Current Assignee
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: STANLEY-MARBELL, PHILLIP; CAPARROS CABEZAS, VICTORIA; JONGERIUS, RIK; SCHMATZ, MARTIN L.
Publication of US20130159632A1 publication Critical patent/US20130159632A1/en
Application granted granted Critical
Publication of US9183150B2 publication Critical patent/US9183150B2/en
Expired - Fee Related legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0815Cache consistency protocols
    • G06F12/0831Cache consistency protocols using a bus scheme, e.g. with bus monitoring or watching means
    • G06F12/0835Cache consistency protocols using a bus scheme, e.g. with bus monitoring or watching means for main memory peripheral accesses (e.g. I/O or DMA)
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0804Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with main memory updating
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/14Protection against unauthorised use of memory or access to memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • G06F13/16Handling requests for interconnection or transfer for access to memory bus
    • G06F13/1605Handling requests for interconnection or transfer for access to memory bus based on arbitration
    • G06F13/1652Handling requests for interconnection or transfer for access to memory bus based on arbitration in a multiprocessor architecture
    • G06F13/1663Access to shared memory

Definitions

  • the invention relates to the field of computer science, and more specifically, to a method implemented by a logic of a computer memory control unit for memory sharing by processors, to a computer memory control unit comprising such logic, to a computer program comprising instructions for configuring such logic and to a data storage medium having recorded thereon such program.
  • Shared-memory architectures enable several processes to share portions of their address spaces.
  • Existing shared-memory hardware architectures and their corresponding protocols for sharing memory assume a set of cooperative processors.
  • One existing possibility is that all the processors implement the same memory-access interface hardware, which is not standard but is adapted for cooperation between the processors so that they can access the shared memory smoothly.
  • Another existing possibility is that all processors have specific software components installed thereon that allow them to communicate with one another or with central hardware in order to cooperate to emulate a virtual shared memory.
  • Such existing possibilities require a specific component on each processor sharing the memory: a specific hardware interface adapted for cooperation in one case, or specific software and a virtual shared memory emulated using the unshared memories of the individual processors in the other. This makes such architectures costly and complicated to build in the former case, or lagging behind the performance of physically shared memories in the latter.
  • a method of memory sharing implemented by logic of a computer memory control unit, the control unit comprising at least one first interface and second interfaces and adapted to be connected with a main physical memory via the first interface, and a set of N≥2 non-cooperative processors via the second interfaces, the logic operatively coupled to the first and second interfaces.
  • the method includes receiving, via the second interfaces, a request to access data of the main physical memory from a first processor of the set; evaluating if a second processor has previously accessed the data requested by the first processor; and deferring the request from the first processor when the evaluation is positive, or, granting the request from the first processor when the evaluation is negative.
  • in another embodiment, a system includes a computer memory control unit having at least one first interface and second interfaces and adapted to be connected with a main physical memory via the first interface, and a set of N≥2 non-cooperative processors via the second interfaces; logic operatively coupled to the first and second interfaces, the logic configured to: receive, via the second interfaces, a request to access data of the main physical memory from a first processor of the set; evaluate if a second processor has previously accessed the data requested by the first processor; and defer the request from the first processor when the evaluation is positive, or grant the request from the first processor when the evaluation is negative.
  • a computer readable storage medium having computer readable instructions stored thereon that, when executed by a computer, implement a method of memory sharing implemented by logic of a computer memory control unit, the control unit comprising at least one first interface and second interfaces and adapted to be connected with a main physical memory via the first interface, and a set of N≥2 non-cooperative processors via the second interfaces, the logic operatively coupled to the first and second interfaces.
  • the method includes receiving, via the second interfaces, a request to access data of the main physical memory from a first processor of the set; evaluating if a second processor has previously accessed the data requested by the first processor; and deferring the request from the first processor when the evaluation is positive, or, granting the request from the first processor when the evaluation is negative.
  • FIG. 1 is a flowchart of a method of memory sharing according to an exemplary embodiment.
  • FIG. 2 shows a graphical representation of a computing system having a computer memory control unit suitable for implementing the method of FIG. 1 .
  • Proposed is a method implemented by logic of a computer memory control unit.
  • the control unit comprises at least one first interface and (several) second interfaces and is adapted to be connected with a main physical memory via the first interface, and a set of N≥2 non-cooperative processors via the second interfaces.
  • the logic is operatively coupled to the first and second interfaces.
  • the method comprises receiving, via the second interfaces, a request to access data of the main physical memory from a first processor of the set.
  • the method also comprises evaluating if a second processor has previously accessed the data requested by the first processor.
  • the method further comprises deferring the request from the first processor when the evaluation is positive, or, granting the request from the first processor when the evaluation is negative.
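  • The evaluate/defer/grant decision can be sketched in a few lines (a hypothetical Python model; the `ppidt` dictionary, the `handle_request` name, and the string results are illustrative assumptions, not part of the patent):

```python
# Hypothetical model of the control-unit decision logic (S 10/S 20/S 30/S 41).
# The PPIDT maps a data identifier (e.g., a physical page) to the processor
# that last accessed it; all names here are illustrative assumptions.

def handle_request(ppidt, page_id, requester):
    """Return 'defer' if another processor owns the page, else grant it."""
    owner = ppidt.get(page_id)
    if owner is not None and owner != requester:
        return "defer"          # evaluation positive: put the request on hold
    ppidt[page_id] = requester  # record ownership for later evaluations
    return "grant"              # evaluation negative: pass the request to DRAM
```

  • In this sketch, a first access by one processor is granted and recorded, while a subsequent access to the same page by a different processor is deferred until the ownership conflict is resolved.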
  • Such a method allows N processors connected to the control unit via the second interfaces to share a memory connected to the control unit via the first interface in an improved way.
  • the (computer memory) control unit is hardware suitable for controlling access to a physical memory by a plurality of processors, and may possibly be integrated into a system, such as a computer.
  • the control unit comprises at least one first interface and second interfaces.
  • the control unit comprises a first set of at least one interface (i.e., the “first interface”) and a second set of at least two interfaces (the “second interfaces”).
  • the first interface, on the one side, and the second interfaces, on the other side, are all tools for connecting with hardware, comprising hardware elements (such as connections) and software elements (such as a program for interpreting signals received by the control unit through a given interface and/or for transmitting signals via a given interface), the first interface and the second interfaces differing in that they are suited to connect the control unit with different devices.
  • the control unit is adapted to be connected with a main physical memory (of the computer, should it be the case) via the first interface, which is suited for that purpose.
  • the control unit is adapted to be connected with processors via the second interfaces. There are at least two second interfaces, such that there are at least three interfaces in total (at least one first interface and at least two second interfaces). This way, the control unit may be connected with a set of N≥2 non-cooperative processors.
  • the main memory may be one unit with a corresponding interface matching the first interface.
  • the main memory may have several interfaces, the control unit having possibly several first interfaces in this case, or the main memory may consist of several units each having one or several interfaces, the control unit having in this case several first interfaces, e.g., at least one per unit.
  • the processors are said to be non-cooperative because they may be standard processors.
  • the processors may lack any hardware component specifically designed for cooperation between the processors.
  • the processors also need not have recorded thereon complex software designed to allow communication between the processors for the purpose of cooperation.
  • the method allows sharing memory by several processors in a cheap and easy manner.
  • the processors may have different microarchitectures and virtual memory interface definitions.
  • the second interfaces may be double-data-rate interfaces, which is a widely known standard and thus cheap and easy to implement. Also, at least two of the second interfaces may be different.
  • the method covers a hardware architecture and corresponding access protocol that allows processors that possibly define different architectures and virtual memory interfaces, to communicate through a shared region of memory.
  • the method may thus work at the level of standard DRAM electrical and protocol interfaces.
  • an industry standard double-data-rate (DDR) interface is extended with a signal indicating the availability of data from DRAM arrays.
  • Existing DDR interfaces already have analogous (but optional) DQS and RDQS signals. As these signals are optional, many implementations currently assume data is available after a fixed delay.
  • the method relies on the fact that commodity processors, irrespective of their architecture and virtual memory interface definition, access main memory through physical, standardized memory interfaces, such as the JEDEC standardized DDR memory interface.
  • the control unit (which may also be referred to as the coherence control unit or CCU) is thus connected to every processor and to the main memory, both connections possibly using standardized memory interfaces, for example the JEDEC standard DDR memory interfaces. Therefore, all the memory addresses mentioned hereafter may refer to physical DRAM addresses. Any other memory interface that is widely supported across processor architectures and memory modules could however also be used.
  • the control unit further comprises the logic.
  • the logic comprises hardware that has processing capabilities and follows predetermined schemes, e.g., thanks to instructions stored in a memory.
  • the logic is operatively coupled to the first and second interfaces.
  • the logic has access to the interfaces and may thus process information passing via the interfaces.
  • the logic may thus receive and process information received by the control unit from outside the control unit via the interfaces, and/or the logic may order the control unit to send information outside the control unit via the interfaces.
  • FIG. 1 represents a flowchart of an example of the method.
  • the method of the example comprises staying in an idle state (S 5 ).
  • the logic waits for the first active event of the method to occur before performing actions. Also, the logic may return to the idle state when it has finished its actions.
  • the method comprises receiving (S 10 ), via the second interfaces, a request to access data of the main physical memory from a first processor of the set.
  • a processor of the set, the "first" processor, requests access to the main physical memory. This request is performed as if the processor were connected directly to the main memory.
  • the processor does not "see" the control unit, and acts as if connected directly to the main memory, as is usually the case.
  • the method then comprises evaluating (S 20 ) if a second processor has previously accessed the data requested by the first processor.
  • the logic verifies if another processor, the “second” processor, has already accessed the same data that are requested by the first processor.
  • the evaluation (S 20 ) comprises checking in a database of the control unit if a second processor is associated to the data requested by the first processor.
  • the control unit may thus include such a database, and thus a dedicated memory for storing it (e.g., an internal memory, or a part of the main memory).
  • Such a database may consist of a lookup table associating a (e.g., an identifier of a) processor to (e.g., an identifier of) data of the main memory. This allows the control unit to avoid collisions between processors requesting substantially simultaneous access to the same data of the main memory.
  • the method comprises deferring (S 30 ) (i.e., putting on hold) the request from the first processor in such a case. This provides time for implementing actions that ensure there will be no collision. Otherwise (i.e., if the evaluation (S 20 ) yields the result that no processor has requested the data yet), the method comprises granting (S 41 ) the request from the first processor.
  • the method comprises, in parallel, sending (S 35 ) a request to the second processor to write back cache lines to the main physical memory.
  • the cache lines that are required to be written back may be related to the data requested by the first processor.
  • the logic requests the second processor, i.e. the one that has accessed the same data that is requested by the first processor, to send all the data (or the data related to the requested data only) stored on its cache(s) to the control unit. This may be performed via an interrupt pin of the second processor.
  • the logic monitors reception of this data (not represented in the flowchart) and orders (S 36 ) the control unit to transmit the requested cache lines, that are received by the control unit from the second processor, to the main physical memory. In other words, the logic ensures the transfer of the data to the main memory. Thus, the work on the data which was being performed by the second processor is committed to the main memory.
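  • A toy model may clarify this deferred path (hypothetical Python; the `Processor` and `MainMemory` classes stand in for hardware, and the interrupt-pin mechanism is reduced to a method call):

```python
# Toy model of the deferred-request path (S 30/S 35/S 36/S 38/S 42).
# All names are illustrative assumptions, not hardware from the patent.

class Processor:
    def __init__(self, name, cached):
        self.name, self.cached = name, dict(cached)

    def write_back(self, page_id):
        # In hardware, the write-back would be triggered via the interrupt pin.
        return {page_id: self.cached.pop(page_id, None)}

class MainMemory:
    def __init__(self):
        self.pages = {}

    def update(self, lines):
        # S 36: commit cache lines received from the second processor.
        self.pages.update({k: v for k, v in lines.items() if v is not None})

    def read(self, page_id):
        return self.pages.get(page_id)

def resolve_deferred(ppidt, page_id, requester, owner, memory):
    """Flush the owner's cached lines for the page, commit them, then grant."""
    memory.update(owner.write_back(page_id))  # S 35/S 36: write back and commit
    ppidt[page_id] = requester.name           # S 38: requester replaces the owner
    return memory.read(page_id)               # S 39/S 40: forward the data
```

  • The key property the sketch preserves is ordering: the owner's dirty lines reach main memory before the requester reads, so the requester observes the committed data.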
  • the method may comprise granting (S 42 ) the request from the first processor.
  • the granting (S 41 ) and the granting (S 42 ) implement the same actions, but they have a different reference due to the fact that they are preceded by different steps of the method.
  • the method further comprises associating (S 38 ) the first processor to the data requested by the first processor in the database. This ensures that subsequent executions of the method behave correctly.
  • the first processor may replace the second processor in the database.
  • granting (S 41 , S 42 ) the request of the first processor comprises ordering (S 39 ) the control unit to transmit the request to the main physical memory and then ordering (S 40 ) the control unit to forward returned data from the main physical memory to the first processor.
  • the processors including the first processor act as if they were directly connected to the main memory, but the control unit intercepts all signals and manages them. This will be made clearer when describing the control unit with reference to FIG. 2 .
  • the addresses of data items in memory may be grouped into blocks referred to within the CCU as CCU physical pages, and all management of data ownership performed at such granularity. This may, for example, reduce the number of entries the CCU will have to maintain in its database, but may require more data to be written back to memory in step S 35 .
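  • The grouping of addresses into CCU physical pages amounts to a shift by the page size (a sketch assuming a 4 KB CCU page; the size is an illustrative choice, not fixed by the patent):

```python
# Mapping DRAM byte addresses to CCU physical page identifiers (PPIs).
# A 4 KB CCU page is assumed here purely for illustration.

CCU_PAGE_SHIFT = 12  # log2(4096)

def ppi_of(address):
    """CCU physical page identifier (PPI) of a DRAM byte address."""
    return address >> CCU_PAGE_SHIFT
```

  • Tracking ownership per page instead of per address shrinks the table the CCU must maintain, at the cost of writing back every cached line of the page in step S 35 .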
  • the method offers a shared-memory protocol and is better performed with a homogeneous memory organization.
  • the control unit may thus define a common address space, and the physical memory may be divided into shared, physical pages—independent of page sizes used in the individual processors.
  • Each processor may implement a different virtual memory interface, each with its own number of pages and page size.
  • for some processors, the physical page size may be 4 KB, whereas for DEC/Compaq Alpha it may be 8 KB (one of the processors may present such features). This is explained, for example, in a paper by B. Jacob and T. Mudge entitled "Virtual memory in contemporary microprocessors," IEEE Micro, 18:60-75, July 1998.
  • the control unit may translate the physical addresses requested by each processor to the corresponding control unit physical page number using a database (e.g., a unified memory address space), such as a physical page ID lookup table (PPIDT).
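  • Since each processor may use a different page size (4 KB versus 8 KB in the example above), the translation into a unified CCU page number might look as follows (hypothetical Python; the page-size table, the CCU page size, and the function name are assumptions for illustration):

```python
# Hypothetical translation from per-processor page numbers to CCU physical
# page identifiers; the sizes below are only the examples from the text.

PROCESSOR_PAGE_SIZE = {"proc_4k": 4096, "alpha": 8192}
CCU_PAGE_SIZE = 4096  # assumed CCU granularity

def ccu_ppi(processor, page_number):
    """CCU physical page number of the first byte of the given processor page."""
    address = page_number * PROCESSOR_PAGE_SIZE[processor]
    return address // CCU_PAGE_SIZE
```

  • With these assumed sizes, one 8 KB Alpha page spans two 4 KB CCU pages, so a single processor page may correspond to several PPIDT entries.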
  • Shared data is thus tracked in this case at the control unit's physical page granularity; i.e., a data access to a shared memory location is identified by the physical page identifier (PPI) associated with that location.
  • the CCU monitors every DRAM memory access. Using a mapping table, it records the identity of the processor that initiated the request upon occurrence of a memory access to a shared location, and the PPI associated with the accessed, shared page. This information is used to guarantee consistency when a different processor attempts to access the same physical location.
  • the CCU is in an idle state.
  • a processor initiates a read from memory
  • the request leaving the processor going out to DRAM is intercepted by the CCU.
  • the CCU then performs a lookup in the processor-to-physical-page ID lookup table (PPIDT).
  • This lookup is used to determine whether the physical page where the requested data is located has been previously requested (and, therefore, potentially modified) by a different processor. If the PPI corresponding to the address is not in the table, the processor accesses DRAM, and the CCU updates the PPIDT, adding a new entry that maps the processor to the corresponding common physical page. The data movement from DRAM to processor is then initiated by the CCU.
  • the CCU sends a request to the processor to write back all cache lines corresponding to the physical page listed alongside that processor in the PPIDT. This request is implemented using an interrupt signal through a generic processor interrupt pin. The CCU must wait until the processor signals back that the memory write has been completed, which can be done through a general-purpose input-output pin or a write to a non-cacheable address.
  • the CCU updates the PPIDT, removing the old entry and creating a new one for the requesting processor containing the PPI of the corresponding CCU physical memory page. Finally, the CCU initiates the request to main memory so that the data can be transferred to the corresponding processor.
  • the PPIDT is updated whenever a processor initiates a new transaction.
  • Processor caches may be flushed in the interim, or the data that caused a new entry to be added to the table may be evicted to make room for more recent data.
  • the CCU may not be able to detect the change in the processor cache.
  • the CCU must, therefore, internally model the cache replacement policies of the different processor caches so that it can predict whether a specific data item has been evicted from a cache. In situations where such modeling within the CCU is not possible, all lines in the respective caches might have to be evicted.
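  • Such an internal cache model could be as simple as an LRU approximation per processor (a hypothetical Python sketch; the patent only requires some model, and real replacement policies vary by processor):

```python
# Hypothetical per-processor LRU cache model the CCU might keep to predict
# whether a line is still cached (and thus whether a write-back is needed).

from collections import OrderedDict

class CacheModel:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # iteration order = least to most recent

    def access(self, line):
        self.lines.pop(line, None)  # refresh recency if already present
        self.lines[line] = True
        if len(self.lines) > self.capacity:
            self.lines.popitem(last=False)  # evict the least recently used line

    def holds(self, line):
        """Predict whether the line is still cached."""
        return line in self.lines
```

  • If `holds` returns false, the CCU could skip the write-back request for that line; when no such prediction is possible, the conservative fallback is evicting all lines, as the text notes.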
  • An alternative implementation could consist of having one control unit per processor, and synchronization of the different instances of the control unit, possibly using an interconnect network between the CCUs.
  • the logic may be part of a computer memory control unit.
  • the control unit comprises at least one first interface and second interfaces and is adapted to be connected with a main physical memory via the first interface, and a set of N≥2 non-cooperative processors via the second interfaces.
  • the control unit further comprises the logic, which, again, is operatively coupled to the first and second interfaces and is configured to perform the method.
  • Such a computer memory control unit allows a plurality of non-cooperative processors to share a same main physical memory in an efficient, cheap, and simple manner.
  • the computer memory control unit may itself be part of a system comprising the computer memory control unit connected with a main physical memory via the first interface and with a set of N≥2 non-cooperative processors via the second interfaces.
  • FIG. 2 is a block diagram of hardware of an example of the system, which may be a computer system with multiple processors sharing a main physical memory, or a computer network comprising multiple computers (and thus multiple processors) sharing a main physical memory (possibly a virtual memory).
  • system 200 comprises computer memory control unit 100 .
  • Computer memory control unit 100 is connected with main physical memory 210 (also part of system 200 ) via first interfaces 110 .
  • Computer memory control unit 100 is further connected with three processors 220 via second interfaces 120 .
  • Control unit 100 thus comprises at least one first interface 110 and at least two second interfaces 120 (eight in the example) and is adapted to be connected with main physical memory 210 via first interfaces 110 , and the non-cooperative processors 220 via the second interfaces 120 .
  • Control unit 100 further comprises (control) logic 130 , which is operatively coupled to first interfaces 110 and second interfaces 120 via datapath 115 , and is configured to perform the method.
  • Second interfaces 120 each comprise interruptor 125 , which is adapted to send a signal through an interrupt pin of processors 220 in order to send a request to the processors 220 to write back cache lines.
  • Datapath 115 is adapted to centralize information sent by control logic 130 or received from any interface ( 110 , 120 ).
  • Datapath 115 may comprise, to that end, any one or a combination of means for directing data, such as redirection multiplexers, write-back buffers, and/or a bus.
  • Control unit 100 also comprises request queue 140 , which is adapted to queue requests received from processors 220 .
  • Control unit 100 also comprises configuration registers 150 and internal memory 160 , which stores PPIDT 165 . This way, control unit 100 is adapted to interpret a request received from a processor 220 as a request for accessing a given page of main memory 210 , and may, thanks to lookup table 165 , evaluate whether another processor 220 has previously accessed the given page.
  • aspects of the present invention may be embodied as a computerized system, method for using or configuring the system, or computer program product for performing the method. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) (i.e., data storage medium(s)) having computer readable program code recorded thereon.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • a computer readable storage medium i.e., data storage medium, may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • the program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • the invention is embodied as a method implemented by a logic of a computer memory control unit.
  • the control unit comprises at least one first interface and second interfaces and is adapted to be connected with a main physical memory via the first interface, and a set of N≥2 non-cooperative processors via the second interfaces.
  • the logic is operatively coupled to the first and second interfaces.
  • the method comprises receiving, via the second interfaces, a request to access data of the main physical memory from a first processor of the set.
  • the method also comprises evaluating if a second processor has previously accessed the data requested by the first processor.
  • the method further comprises deferring the request from the first processor when the evaluation is positive, or, granting the request from the first processor when the evaluation is negative.
  • the method may comprise one or more of the following features: the method comprises, while deferring the request from the first processor, sending a request to the second processor to write back cache lines, that are related to the data requested by the first processor, to the main physical memory; sending the request to the second processor is performed via an interrupt pin of the second processor; the method comprises, while deferring the request from the first processor, and after sending the request to the second processor, ordering the control unit to transmit the requested cache lines, that are received by the control unit from the second processor, to the main physical memory; the method comprises, once the second processor has written back all requested cache lines to the main physical memory, granting the request from the first processor; the evaluation comprises checking in a database of the control unit if a second processor is associated to the data requested by the first processor; the method further comprises associating the first processor to the data requested by the first processor in the database; the second interfaces are double-data-rate dynamic random access memory (DDR DRAM) interfaces; and/or the granularity of management of access is by ranges of addresses.
  • the invention is embodied as a computer memory control unit.
  • the control unit comprises at least one first interface and second interfaces and is adapted to be connected with a main physical memory via the first interface, and a set of N≥2 non-cooperative processors via the second interfaces.
  • the control unit comprises a logic operatively coupled to the first and second interfaces and configured to perform the above method.
  • the invention is embodied as a system comprising the above computer memory control unit connected with a main physical memory via the first interface and with a set of N ⁇ 2 non-cooperative processors via the second interfaces.
  • the invention is embodied as a computer program comprising instructions for configuring a logic, that is adapted to be operatively coupled to a first interface and second interfaces of a computer memory control unit comprising the logic, the control unit being adapted to be connected with a main physical memory via the first interface and with a set of N ⁇ 2 non-cooperative processors via the second interfaces the processors, the instructions being for configuring the logic to perform the above method.
  • the invention is embodied as a data storage medium having recorded thereon the above computer program.


Abstract

A method of memory sharing implemented by logic of a computer memory control unit, the control unit comprising at least one first interface and second interfaces and being adapted to be connected with a main physical memory via the first interface, and a set of N≧2 non-cooperative processors via the second interfaces, the logic operatively coupled to the first and second interfaces. The method includes receiving, via the second interfaces, a request to access data of the main physical memory from a first processor of the set; evaluating if a second processor has previously accessed the data requested by the first processor; and deferring the request from the first processor when the evaluation is positive, or granting the request from the first processor when the evaluation is negative.

Description

PRIORITY
This application claims priority to European Patent Application No. 11194116.7, filed 16 Dec. 2011, and all the benefits accruing therefrom under 35 U.S.C. §119, the contents of which are herein incorporated by reference in their entirety.
BACKGROUND
The invention relates to the field of computer science, and more specifically, to a method implemented by a logic of a computer memory control unit for memory sharing by processors, to a computer memory control unit comprising such logic, to a computer program comprising instructions for configuring such logic and to a data storage medium having recorded thereon such program.
Shared-memory architectures enable several processes to share portions of their address spaces. Existing shared-memory hardware architectures and their corresponding protocols for sharing memory assume a set of cooperative processors. One existing possibility is that all the processors implement the same memory access interface hardware, which is not standard, but is adapted for cooperation between the processors in order for them to access the shared memory in a smooth manner. Another existing possibility is that all processors have specific software components installed thereon that allow them to communicate together or with a central hardware in order to cooperate to emulate a virtual shared memory. Such existing possibilities require a specific component installed on each processor sharing the memory: a specific hardware interface adapted for cooperation in one case, or specific software and a virtual shared memory emulated using the unshared memories of the individual processors in the other case. This makes such architectures costly and complicated to achieve in the former case, or lagging behind the performance of physically shared memories in the latter.
With the growing popularity of heterogeneous architectures, there is an increased interest in the implementation of mechanisms that allow non-homogeneous architectures to execute processes that may communicate through a shared region of memory, even though the processors in question may not implement the same (or any) shared-memory protocol interface.
There is thus a need for an improved solution for memory sharing.
SUMMARY
In one embodiment, a method of memory sharing is implemented by logic of a computer memory control unit, the control unit comprising at least one first interface and second interfaces and being adapted to be connected with a main physical memory via the first interface, and a set of N≧2 non-cooperative processors via the second interfaces, the logic operatively coupled to the first and second interfaces. The method includes receiving, via the second interfaces, a request to access data of the main physical memory from a first processor of the set; evaluating if a second processor has previously accessed the data requested by the first processor; and deferring the request from the first processor when the evaluation is positive, or granting the request from the first processor when the evaluation is negative.
In another embodiment, a system includes a computer memory control unit having at least one first interface and second interfaces and being adapted to be connected with a main physical memory via the first interface, and a set of N≧2 non-cooperative processors via the second interfaces; and logic operatively coupled to the first and second interfaces, the logic configured to: receive, via the second interfaces, a request to access data of the main physical memory from a first processor of the set; evaluate if a second processor has previously accessed the data requested by the first processor; and defer the request from the first processor when the evaluation is positive, or grant the request from the first processor when the evaluation is negative.
In another embodiment, a computer readable storage medium has computer readable instructions stored thereon that, when executed by a computer, implement a method of memory sharing implemented by logic of a computer memory control unit, the control unit comprising at least one first interface and second interfaces and being adapted to be connected with a main physical memory via the first interface, and a set of N≧2 non-cooperative processors via the second interfaces, the logic operatively coupled to the first and second interfaces. The method includes receiving, via the second interfaces, a request to access data of the main physical memory from a first processor of the set; evaluating if a second processor has previously accessed the data requested by the first processor; and deferring the request from the first processor when the evaluation is positive, or granting the request from the first processor when the evaluation is negative.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
A system and a process embodying the invention will now be described, by way of non-limiting example, and in reference to the accompanying drawings, where:
FIG. 1 is a flowchart of a method of memory sharing according to an exemplary embodiment; and
FIG. 2 shows a graphical representation of a computing system having computer memory control unit suitable for implementing the method of FIG. 1.
DETAILED DESCRIPTION
Proposed is a method implemented by logic of a computer memory control unit. The control unit comprises at least one first interface and (several) second interfaces and is adapted to be connected with a main physical memory via the first interface, and a set of N≧2 non-cooperative processors via the second interfaces. The logic is operatively coupled to the first and second interfaces. The method comprises receiving, via the second interfaces, a request to access data of the main physical memory from a first processor of the set. The method also comprises evaluating if a second processor has previously accessed the data requested by the first processor. The method further comprises deferring the request from the first processor when the evaluation is positive, or, granting the request from the first processor when the evaluation is negative. Such a method allows N processors connected to the control unit via the second interfaces to share a memory connected to the control unit via the first interface in an improved way.
The (computer memory) control unit is a hardware component suitable for controlling access to a physical memory by a plurality of processors, and may possibly be integrated into a system, such as a computer.
The control unit comprises at least one first interface and second interfaces. In other words, the control unit comprises a first set of at least one interface (i.e., the “first interface”) and a second set of at least two interfaces (the “second interfaces”). The first interface, on the one side, and the second interfaces, on the other side, are all tools for connecting with hardware, comprising hardware elements (such as connections) and software elements (such as a program for interpreting signals received by the control unit through a given interface and/or for transmitting signals via a given interface), the first interface and the second interfaces differing in that they are suited to connect the control unit with different hardware. Indeed, on the one hand, the control unit is adapted to be connected with a main physical memory (of the computer, should it be the case) via the first interface, which is suited accordingly. On the other hand, the control unit is adapted to be connected with processors via the second interfaces. There are at least two second interfaces, such that there are at least three interfaces in total (at least one first interface and at least two second interfaces). This way, the control unit may be connected with a set of N≧2 non-cooperative processors.
The main memory may be one unit with a corresponding interface matching the first interface. Alternatively, the main memory may have several interfaces, the control unit having possibly several first interfaces in this case, or the main memory may consist of several units each having one or several interfaces, the control unit having in this case several first interfaces, e.g., at least one per unit.
The processors are said to be non-cooperative because they may be standard processors. For example, the processors may exclude any hardware component specifically designed for cooperation between the processors. Also, the processors need not have recorded thereon complex software designed to allow communication between them for the purpose of cooperation. Thus, the method allows memory to be shared by several processors in a cheap and easy manner. In an example, the processors may have different microarchitectures and virtual memory interface definitions. The method thus allows memory sharing by several processors even though these processors may be different. The second interfaces may be double-data-rate interfaces, which is a widely known standard and thus cheap and easy to implement. Also, at least two of the second interfaces may be different.
The method covers a hardware architecture and corresponding access protocol that allows processors that possibly define different architectures and virtual memory interfaces, to communicate through a shared region of memory. The method may thus work at the level of standard DRAM electrical and protocol interfaces. In one embodiment, an industry standard double-data-rate (DDR) interface is extended with a signal indicating the availability of data from DRAM arrays. Existing DDR interfaces already have analogous (but optional) DQS and RDQS signals. As these signals are optional, many implementations currently assume data is available after a fixed delay. The method relies on the fact that commodity processors, irrespective of their architecture and virtual memory interface definition, access main memory through physical, standardized memory interfaces, such as the JEDEC standardized DDR memory interface.
The control unit (which may also be referred to as the coherence control unit or CCU) is thus connected to every processor and to the main memory, both connections possibly using standardized memory interfaces, for example the JEDEC standard DDR memory interfaces. Therefore, all the memory addresses mentioned hereafter may refer to physical DRAM addresses. Any other memory interface that is widely supported across processor architectures and memory modules could however also be used.
The control unit further comprises the logic. Logic comprises hardware having processing capabilities and following predetermined schemes, e.g. thanks to instructions stored on a memory. The logic is operatively coupled to the first and second interfaces. In other words, the logic has access to the interfaces and may thus process information passing via the interfaces. The logic may thus receive and process information received by the control unit from outside the control unit via the interfaces, and/or the logic may order the control unit to send information outside the control unit via the interfaces.
The method performed by the logic of the control unit will now be described with reference to FIG. 1, which represents a flowchart of an example of the method.
The method of the example comprises staying in an idle state (S5). In such a case, the logic waits for the first active event of the method to occur before performing actions. Also, the logic may return to the idle state when it has finished its actions.
The method comprises receiving (S10), via the second interfaces, a request to access data of the main physical memory from a first processor of the set. In other words, a processor of the set, the "first" processor, requests access to the main physical memory. This request is performed as if the processor were connected directly to the main memory. In other words, the processor does not "see" the control unit, and acts as if connected directly to the main memory, as is usually the case.
The method then comprises evaluating (S20) if a second processor has previously accessed the data requested by the first processor. In other words, the logic verifies if another processor, the "second" processor, has already accessed the same data that are requested by the first processor. In the example, the evaluation (S20) comprises checking in a database of the control unit if a second processor is associated to the data requested by the first processor. The control unit may thus include such a database, and thus a dedicated memory for storing it (e.g., an internal memory, or a part of the main memory). Such a database may consist of a lookup table associating data of the main memory (e.g., an identifier of the data) with a processor (e.g., an identifier of the processor). This allows the control unit to avoid collisions between processors requesting substantially simultaneous access to the same data of the main memory.
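As a purely illustrative sketch (the table name and identifiers below are assumptions for illustration, not taken from this description), the evaluation step (S20) against such a lookup table can be modeled in a few lines:

```python
# Illustrative sketch of the evaluation step (S20): the control unit's
# database is modeled as a lookup table mapping a data identifier (here,
# a page identifier) to the processor that last accessed it.
ppidt = {}  # page_id -> processor_id (names are assumptions)

def previously_accessed_by_other(page_id, requesting_processor):
    """Return the processor already associated with the page if it is a
    different one (evaluation positive); otherwise return None."""
    owner = ppidt.get(page_id)
    if owner is not None and owner != requesting_processor:
        return owner
    return None
```

A positive evaluation (a non-None return) would lead to deferring (S30); a negative one to granting (S41).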
Indeed, when the evaluation (S20) is positive (i.e., a second processor has previously requested access to the same data that is now requested by the first processor), there may be a collision (i.e., the two processors may be working on the same data and possibly modifying it without sharing the modifications with each other, or reading it while it is modified by the other without knowing that it is modified, and thus not reading the modifications). Thus, the method comprises deferring (S30) (i.e., putting on hold) the request from the first processor in such a case. This provides time for implementing actions that ensure no collision occurs. Otherwise (i.e., if the evaluation (S20) yields the result that no processor has requested the data yet), the method comprises granting (S41) the request from the first processor.
In the example, while deferring (S30) the request from the first processor (i.e., while the request from the first processor is put on hold), the method comprises, in parallel, sending (S35) a request to the second processor to write back cache lines to the main physical memory. The cache lines that are required to be written back may be related to the data requested by the first processor. In other words, the logic requests the second processor, i.e. the one that has accessed the same data that is requested by the first processor, to send all the data (or the data related to the requested data only) stored on its cache(s) to the control unit. This may be performed via an interrupt pin of the second processor. If there is such data (i.e., the second processor has not written back its cache lines yet), the logic monitors reception of this data (not represented in the flowchart) and orders (S36) the control unit to transmit the requested cache lines, that are received by the control unit from the second processor, to the main physical memory. In other words, the logic ensures the transfer of the data to the main memory. Thus, the work on the data which was being performed by the second processor is committed to the main memory.
Once the second processor has written back all requested cache lines to the main physical memory, which may be verified by receiving (S37) (by the logic) a completion signal from the second processor, the method may comprise granting (S42) the request from the first processor.
The granting (S41) and the granting (S42) implement the same actions, but they have a different reference due to the fact that they are preceded by different steps of the method.
In the example, before granting (S41, S42) the request of the first processor, and upon such granting (S41, S42), the method further comprises associating (S38) the first processor to the data requested by the first processor in the database. This way, it is ensured that further executions of the method will work. In the case where there is a second processor and the result of the evaluation (S20) was positive, the first processor may replace the second processor in the database.
In the example, granting (S41, S42) the request of the first processor comprises ordering (S39) the control unit to transmit the request to the main physical memory and then ordering (S40) the control unit to forward returned data from the main physical memory to the first processor. Indeed, the processors (including the first processor) act as if they were directly connected to the main memory, but the control unit intercepts all signals and manages them. This will be made clearer when describing the control unit with reference to FIG. 2.
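The overall flow described above, receiving a request (S10), evaluating ownership (S20), deferring with a write-back when needed (S30, S35, S36, S37), and then granting (S38 to S42), can be sketched in software as follows. This is an illustrative model under stated assumptions, not the described hardware: the class and callback names are invented, the callbacks stand in for the interrupt-driven write-back request, the transfer of cache lines to memory, and the memory access itself, and the completion signal (S37) is collapsed into the write-back callback returning.

```python
class CCUModel:
    """Illustrative software model of the control-unit logic of FIG. 1.

    The callbacks stand in for hardware actions: interrupting the second
    processor to write back its cache lines (S35), forwarding those lines
    to main memory (S36), and accessing main memory on a grant (S39, S40).
    """

    def __init__(self, request_write_back, forward_to_memory, access_memory):
        self.ppidt = {}  # page_id -> processor last associated with the page
        self.request_write_back = request_write_back
        self.forward_to_memory = forward_to_memory
        self.access_memory = access_memory

    def handle_request(self, processor, page_id):
        owner = self.ppidt.get(page_id)          # S20: evaluate
        if owner is not None and owner != processor:
            # S30: defer; S35: ask the owning processor to write back
            lines = self.request_write_back(owner, page_id)
            # S36: transmit the written-back cache lines to main memory
            self.forward_to_memory(page_id, lines)
            # S37 is modeled by request_write_back returning only when done
        # S38: associate the first processor with the requested data
        self.ppidt[page_id] = processor
        # S39/S40 (grant, S41/S42): access memory and forward the data
        return self.access_memory(page_id)
```

A usage sketch: with stub callbacks, a first request for a page is granted directly, while a request from a different processor for the same page first triggers the write-back path before being granted.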
As an efficiency optimization, the addresses of data items in memory may be grouped into blocks referred to within the CCU as CCU physical pages, and all management of data ownership performed at such granularity. This may, for example, reduce the number of entries the CCU will have to maintain in its database, but may require more data to be written back to memory in step S35.
The method offers a shared-memory protocol and is better performed with a homogeneous memory organization. The control unit may thus define a common address space, and the physical memory may be divided into shared, physical pages—independent of page sizes used in the individual processors. Each processor may implement a different virtual memory interface, each with its own number of pages and page size. For example, for the 32-bit PowerPC and x86 architectures (one of the processors may present such features), the physical page size may be 4 KB, whereas for DEC/Compaq Alpha, it may be 8 KB (one of the processors may present such features). This is explained for example in a paper by B. Jacob and T. Mudge entitled “Virtual memory in contemporary microprocessors” IEEE Micro, 18:60-75, July 1998.
The control unit may translate the physical address requests by each processor to the corresponding control unit physical page number in a database (e.g. a unified memory address space), such as physical page ID lookup table (PPIDT). Several criteria can be used to choose the common control unit page size. Larger control unit page sizes will lead to smaller numbers of entries in the PPIDT, but will require more items to be written-back to memory when the control unit notifies a processor of a write-back request and thus degrade performance.
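Under such a scheme, translating a physical address to its control-unit physical page identifier is a simple integer division by the chosen control-unit page size. A sketch, assuming an illustrative 4 KB control-unit page size (the constant and function names are assumptions):

```python
# Illustrative only: derive a control-unit physical page identifier from
# a DRAM physical address. The 4 KB page size is an assumed example; as
# discussed above, larger pages mean fewer PPIDT entries but larger
# write-backs on a conflicting access.
CCU_PAGE_SIZE = 4 * 1024

def physical_page_id(address):
    """Map a DRAM physical address to its CCU physical page identifier."""
    return address // CCU_PAGE_SIZE
```

Two addresses map to the same identifier exactly when they fall in the same control-unit page, which is what makes the page the unit of ownership tracking.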
Shared data is thus tracked in this case at the control unit's physical page granularity; i.e., a data access to a shared memory location is identified by the physical page identifier (PPI) associated with that location. The CCU monitors every DRAM memory access. Using a mapping table, it records the identity of the processor that initiated the request upon occurrence of a memory access to a shared location, and the PPI associated with the accessed, shared page. This information is used to guarantee consistency when a different processor attempts to access to the same physical location.
Another example of carrying out the method, similar to the one provided with reference to FIG. 1, is now discussed. The features of this example may be integrated to the example provided with reference to FIG. 1.
In this example, initially, the CCU is in an idle state. When a processor initiates a read from memory, the request leaving the processor for DRAM is intercepted by the CCU. The CCU then performs a lookup in the processor-to-physical-page-ID lookup table (PPIDT). This lookup is used to determine whether the physical page, where the requested data is located, has been previously requested (and, therefore, potentially modified) by a different processor. If the PPID corresponding to the address is not in the table, the processor accesses DRAM, and the CCU updates the PPIDT, adding a new entry that maps the processor to the corresponding common physical page. The data movement from DRAM to processor is then initiated by the CCU.
If a different processor attempts to access data in the same physical memory page, the PPIDT lookup will reveal that the page has previously been read by a different processor. In that case, to guarantee data consistency, the CCU sends a request to that processor to write back all cache lines corresponding to the physical page which is listed alongside the processor in the PPIDT. This request is implemented using an interrupt signal through a generic processor interrupt pin. The CCU must wait until the processor signals back that the memory write has been completed, which can be done through a general-purpose input-output pin, or a write to a non-cacheable address.
Once it is guaranteed that the data has been written back to memory, the CCU updates the PPIDT, removing the old entry and creating a new one for the requesting processor containing the PPID of the corresponding CCU physical memory page. Finally, the CCU initiates the request to main memory so that the data can be transferred to the corresponding processor.
It must be ensured that the processor does not begin to clock in data words after a fixed delay, as would be the case if it were connected directly to DRAM (via its memory controller). One approach to ensure this, in the embodiment where the standard interface is DDR, is to ensure that processors using a CCU implement (and faithfully monitor) the optional DQS DDR interface signal on each read request; this enables the processors to know when valid data is available. Possible alternative approaches might include overriding the existing memory timing setup information, which is stored in serial presence detect (SPD) EEPROMs on memory DIMMs, and read by the processors at startup. When the data from DRAM is ready, it is forwarded by the CCU to the corresponding processor.
According to the foregoing description, the PPIDT is updated whenever a processor initiates a new transaction. Processor caches, however, may be flushed in the interim, or the data that caused a new entry to be added into the table may be evicted to allocate more recent data. In this situation, the CCU may not be able to detect the change in the processor cache. The CCU must, therefore, internally model the cache replacement for the different processor caches so that it can predict whether a specific data item has been evicted from memory. In situations where modeling within the CCU is not possible, all lines in the respective caches might have to be evicted.
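One possible realization of such an internal cache model (an assumption for illustration; the description does not prescribe a particular replacement model) is a bounded least-recently-used set per processor cache, letting the CCU predict whether a line has likely been evicted and so needs no write-back request:

```python
from collections import OrderedDict

class CacheModel:
    """Toy LRU model of one processor's cache, as the CCU might maintain
    internally to predict evictions. Illustrative only: real caches are
    set-associative and their replacement policy may differ."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # line address -> True, in LRU order

    def access(self, line):
        """Record an access, evicting the least recently used line if full."""
        if line in self.lines:
            self.lines.move_to_end(line)
        else:
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)  # evict LRU line
            self.lines[line] = True

    def may_hold(self, line):
        """Predict whether the modeled cache may still hold the line."""
        return line in self.lines
```

When `may_hold` returns False, the CCU could skip the write-back request for that line; when modeling is not possible, the conservative fallback described above (evicting all lines) applies.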
An alternative implementation could consist of having one control unit per processor, and synchronizing the different instances of the control unit, possibly using an interconnect network between the CCUs.
Now, the logic may be part of a computer memory control unit. The control unit comprises at least one first interface and second interfaces and is adapted to be connected with a main physical memory via the first interface, and a set of N≧2 non-cooperative processors via the second interfaces. The control unit further comprises the logic, which, again, is operatively coupled to the first and second interfaces and is configured to perform the method. Such a computer memory control unit allows a plurality of non-cooperative processors to share a same main physical memory in an efficient, cheap, and simple manner.
The computer memory control unit may itself be part of a system comprising the computer memory control unit connected with a main physical memory via the first interface and with a set of N≧2 non-cooperative processors via the second interfaces.
FIG. 2 is a block diagram of hardware of an example of the system, which may be a computer system with multiple processors sharing a main physical memory, or a computer network comprising multiple computers (and thus multiple processors) sharing a main physical memory (possibly a virtual memory).
In the example, system 200 comprises computer memory control unit 100. Computer memory control unit 100 is connected with main physical memory 210 (also part of system 200) via first interfaces 110. Computer memory control unit 100 is further connected with three processors 220 via second interfaces 120.
Control unit 100 thus comprises at least one first interface 110 and at least two second interfaces 120 (eight in the example) and is adapted to be connected with main physical memory 210 via first interfaces 110, and the non-cooperative processors 220 via the second interfaces 120. Control unit 100 further comprises (control) logic 130, which is operatively coupled to first interfaces 110 and second interfaces 120 via datapath 115, and is configured to perform the method. Second interfaces 120 each comprise interruptor 125, which is adapted to send a signal through an interrupt pin of processors 220 in order to send a request to the processors 220 to write back cache lines.
Datapath 115 is adapted to centralize information sent by control logic 130 or received from any interface (110, 120). For that purpose, datapath 115 may comprise any one or a combination of means for directing data, such as redirection multiplexers, write-back buffers, and/or a bus.
Control unit 100 also comprises request queue 140, which is adapted to queue requests received from processors 220. Control unit 100 also comprises configuration registers 150 and internal memory 160, which stores PPIDT 165. This way, control unit 100 is adapted to interpret a request received from a processor 220 as a request for accessing a given page of main memory 210, and may, thanks to lookup table 165, evaluate whether another processor 220 has previously accessed the given page.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a computerized system, a method for using or configuring the system, or a computer program product for performing the method. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) (i.e., data storage medium(s)) having computer readable program code recorded thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium, i.e., data storage medium, may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
According to one aspect, the invention is embodied as a method implemented by a logic of a computer memory control unit. The control unit comprises at least one first interface and second interfaces and is adapted to be connected with a main physical memory via the first interface, and a set of N≧2 non-cooperative processors via the second interfaces. The logic is operatively coupled to the first and second interfaces. The method comprises receiving, via the second interfaces, a request to access data of the main physical memory from a first processor of the set. The method also comprises evaluating if a second processor has previously accessed the data requested by the first processor. The method further comprises deferring the request from the first processor when the evaluation is positive, or, granting the request from the first processor when the evaluation is negative.
In examples, the method may comprise one or more of the following features: the method comprises, while deferring the request from the first processor, sending a request to the second processor to write back cache lines, that are related to the data requested by the first processor, to the main physical memory; sending the request to the second processor is performed via an interrupt pin of the second processor; the method comprises, while deferring the request from the first processor, and after sending the request to the second processor, ordering the control unit to transmit the requested cache lines, that are received by the control unit from the second processor, to the main physical memory; the method comprises, once the second processor has written back all requested cache lines to the main physical memory, granting the request from the first processor; the evaluation comprises checking in a database of the control unit if a second processor is associated to the data requested by the first processor; the method further comprises associating the first processor to the data requested by the first processor in the database; the second interfaces are double-data-rate dynamic random access memory (DDR DRAM) interfaces; and/or the granularity of management of access is by ranges (blocks or pages) of physical memory addresses.
According to another aspect, the invention is embodied as a computer memory control unit. The control unit comprises at least one first interface and second interfaces and is adapted to be connected with a main physical memory via the first interface, and a set of N≧2 non-cooperative processors via the second interfaces. The control unit comprises a logic operatively coupled to the first and second interfaces and configured to perform the above method.
According to another aspect, the invention is embodied as a system comprising the above computer memory control unit connected with a main physical memory via the first interface and with a set of N≧2 non-cooperative processors via the second interfaces.
According to another aspect, the invention is embodied as a computer program comprising instructions for configuring a logic that is adapted to be operatively coupled to a first interface and second interfaces of a computer memory control unit comprising the logic, the control unit being adapted to be connected with a main physical memory via the first interface and with a set of N≧2 non-cooperative processors via the second interfaces, the instructions being for configuring the logic to perform the above method.
According to another aspect, the invention is embodied as a data storage medium having recorded thereon the above computer program.

Claims (20)

The invention claimed is:
1. A method of memory sharing implemented by logic of a computer memory control unit, the control unit comprising at least one first interface and at least two second interfaces and being adapted to be connected with a main physical memory via the first interface, and a set of N≧2 non-cooperative processors via a corresponding one of the at least two second interfaces, the logic operatively coupled to the first and second interfaces, the method comprising:
receiving, via the second interfaces, a request to access data of the main physical memory from a first processor of the set;
evaluating if a second processor has previously accessed the data requested by the first processor; and
deferring the request from the first processor when the evaluation is positive, or, granting the request from the first processor when the evaluation is negative;
wherein at least two of the N≧2 non-cooperative processors are different from one another with respect to a virtual memory interface having different physical page sizes with respect to one another;
and wherein the interfaces comprise both hardware connections and software to interpret control signals.
2. The method of claim 1, further comprising: while deferring the request from the first processor, sending a request to the second processor to write back cache lines, that are related to the data requested by the first processor, to the main physical memory.
3. The method of claim 2, wherein sending the request to the second processor is performed via an interrupt pin of the second processor.
4. The method of claim 3, further comprising: while deferring the request from the first processor, and after sending the request to the second processor, ordering the control unit to transmit the requested cache lines, that are received by the control unit from the second processor, to the main physical memory.
5. The method of claim 4, further comprising: once the second processor has written back all requested cache lines to the main physical memory, granting the request from the first processor.
6. The method of claim 1, wherein the evaluation comprises checking in a database of the control unit if a second processor is associated to the data requested by the first processor.
7. The method of claim 6, wherein the method further comprises associating the first processor to the data requested by the first processor in the database.
8. The method of claim 1, wherein the second interfaces are double-data-rate dynamic random access memory interfaces.
9. The method of claim 1, wherein the granularity of management of access is by ranges of physical memory addresses.
10. A system, comprising:
a computer memory control unit having at least one first interface and at least two second interfaces and adapted to be connected with a main physical memory via the first interface, and a set of N≧2 non-cooperative processors via a corresponding one of the at least two second interfaces;
logic operatively coupled to the first and second interfaces, the logic configured to:
receive, via the second interfaces, a request to access data of the main physical memory from a first processor of the set;
evaluate if a second processor has previously accessed the data requested by the first processor; and
defer the request from the first processor when the evaluation is positive, or grant the request from the first processor when the evaluation is negative;
wherein at least two of the N≧2 non-cooperative processors are different from one another with respect to a virtual memory interface having different physical page sizes with respect to one another;
and wherein the interfaces comprise both hardware connections and software to interpret control signals.
11. The system of claim 10, wherein the computer memory control unit is connected with the main physical memory via the first interface and with the set of N≧2 non-cooperative processors via the second interfaces.
12. A non-transitory computer readable storage medium having computer readable instructions stored thereon that, when executed by a computer, implement a method of memory sharing implemented by logic of a computer memory control unit, the control unit comprising at least one first interface and at least two second interfaces and being adapted to be connected with a main physical memory via the first interface, and a set of N≧2 non-cooperative processors via a corresponding one of the at least two second interfaces, the logic operatively coupled to the first and second interfaces, the method comprising:
receiving, via the second interfaces, a request to access data of the main physical memory from a first processor of the set;
evaluating if a second processor has previously accessed the data requested by the first processor; and
deferring the request from the first processor when the evaluation is positive, or, granting the request from the first processor when the evaluation is negative;
wherein at least two of the N≧2 non-cooperative processors are different from one another with respect to a virtual memory interface having different physical page sizes with respect to one another;
and wherein the interfaces comprise both hardware connections and software to interpret control signals.
13. The computer readable storage medium of claim 12, wherein the method further comprises: while deferring the request from the first processor, sending a request to the second processor to write back cache lines, that are related to the data requested by the first processor, to the main physical memory.
14. The computer readable storage medium of claim 13, wherein sending the request to the second processor is performed via an interrupt pin of the second processor.
15. The computer readable storage medium of claim 14, wherein the method further comprises: while deferring the request from the first processor, and after sending the request to the second processor, ordering the control unit to transmit the requested cache lines, that are received by the control unit from the second processor, to the main physical memory.
16. The computer readable storage medium of claim 15, wherein the method further comprises: once the second processor has written back all requested cache lines to the main physical memory, granting the request from the first processor.
17. The computer readable storage medium of claim 12, wherein the evaluation comprises checking in a database of the control unit if a second processor is associated to the data requested by the first processor.
18. The computer readable storage medium of claim 17, wherein the method further comprises associating the first processor to the data requested by the first processor in the database.
19. The computer readable storage medium of claim 12, wherein the second interfaces are double-data-rate dynamic random access memory interfaces.
20. The computer readable storage medium of claim 12, wherein the granularity of management of access is by ranges of physical memory addresses.
US13/707,801 2011-12-16 2012-12-07 Memory sharing by processors Expired - Fee Related US9183150B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
EP11194116.7 2011-12-16
EP11194116 2011-12-16
EP11194116 2011-12-16

Publications (2)

Publication Number Publication Date
US20130159632A1 US20130159632A1 (en) 2013-06-20
US9183150B2 true US9183150B2 (en) 2015-11-10

Family

ID=47520195

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/707,801 Expired - Fee Related US9183150B2 (en) 2011-12-16 2012-12-07 Memory sharing by processors

Country Status (6)

Country Link
US (1) US9183150B2 (en)
JP (1) JP6083714B2 (en)
CN (1) CN103999063B (en)
DE (1) DE112012004926B4 (en)
GB (1) GB2511446B (en)
WO (1) WO2013088283A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11409673B2 (en) * 2019-02-14 2022-08-09 Intel Corporation Triggered operations for collective communication

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016018421A1 (en) * 2014-07-31 2016-02-04 Hewlett-Packard Development Company, L.P. Cache management for nonvolatile main memory
US9781225B1 (en) * 2014-12-09 2017-10-03 Parallel Machines Ltd. Systems and methods for cache streams
US9535606B2 (en) 2014-12-22 2017-01-03 Intel Corporation Virtual serial presence detect for pooled memory
CN105868134B * 2016-04-14 2018-12-28 烽火通信科技股份有限公司 High-performance multi-port DDR controller and implementation method thereof
CN106484521A (en) * 2016-10-21 2017-03-08 郑州云海信息技术有限公司 A kind of data request processing method and device
US11409655B2 (en) * 2019-03-01 2022-08-09 Canon Kabushiki Kaisha Interface apparatus, data processing apparatus, cache control method, and medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5950228A (en) 1997-02-03 1999-09-07 Digital Equipment Corporation Variable-grained memory sharing for clusters of symmetric multi-processors using private and shared state tables
US6397306B2 (en) 1998-10-23 2002-05-28 Alcatel Internetworking, Inc. Per memory atomic access for distributed memory multiprocessor architecture
US6418515B1 (en) 1998-04-22 2002-07-09 Kabushiki Kaisha Toshiba Cache flush unit
US6438660B1 (en) * 1997-12-09 2002-08-20 Intel Corporation Method and apparatus for collapsing writebacks to a memory for resource efficiency
US6829683B1 (en) 2000-07-20 2004-12-07 Silicon Graphics, Inc. System and method for transferring ownership of data in a distributed shared memory system
US20050138277A1 (en) * 2003-12-18 2005-06-23 Samsung Electronics Co., Ltd. Data control circuit for DDR SDRAM controller
US7392352B2 (en) 1998-12-17 2008-06-24 Massachusetts Institute Of Technology Computer architecture for shared memory access
US7509457B2 (en) 2001-03-22 2009-03-24 International Business Machines Corporation Non-homogeneous multi-processor system with shared memory
US7882310B2 (en) 2005-01-07 2011-02-01 Sony Computer Entertainment Inc. Methods and apparatus for managing a shared memory in a multi-processor system
US20110125974A1 (en) 2009-11-13 2011-05-26 Anderson Richard S Distributed symmetric multiprocessing computing architecture
US8386750B2 (en) * 2008-10-31 2013-02-26 Cray Inc. Multiprocessor system having processors with different address widths and method for operating the same

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7380085B2 (en) * 2001-11-14 2008-05-27 Intel Corporation Memory adapted to provide dedicated and or shared memory to multiple processors and method therefor
US7032079B1 (en) 2002-12-13 2006-04-18 Unisys Corporation System and method for accelerating read requests within a multiprocessor system
US20060112226A1 (en) * 2004-11-19 2006-05-25 Hady Frank T Heterogeneous processors sharing a common cache
JP5021978B2 (en) * 2006-08-11 2012-09-12 エヌイーシーコンピュータテクノ株式会社 Multiprocessor system and operation method thereof
JP2008176612A (en) * 2007-01-19 2008-07-31 Nec Electronics Corp Multiprocessor system


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Bennett, et al.; Munin: Distributed Shared Memory Based on Type-Specific Memory Coherence, Department of Electrical and Computer Engineering, Department of Computer Science, Rice University, Houston, TX, 1990, pp. 168-176.
Bruce Jacob, et al., "Virtual Memory in Contemporary Microprocessors," IEEE Micro, Jul. 18, 1998, pp. 60-75.
PCT International Search Report and Written Opinion; International Application No: PCT/IB2012/056562; International Filing Date: Nov. 20, 2012; Date of Mailing: Aug. 5, 2013; pp. 1-8.


Also Published As

Publication number Publication date
JP6083714B2 (en) 2017-02-22
JP2015504205A (en) 2015-02-05
CN103999063B (en) 2016-10-05
US20130159632A1 (en) 2013-06-20
GB201408707D0 (en) 2014-07-02
DE112012004926T5 (en) 2014-08-14
WO2013088283A3 (en) 2013-11-07
WO2013088283A2 (en) 2013-06-20
DE112012004926B4 (en) 2023-12-07
CN103999063A (en) 2014-08-20
GB2511446A (en) 2014-09-03
GB2511446B (en) 2016-08-10

Similar Documents

Publication Publication Date Title
US9183150B2 (en) Memory sharing by processors
US11822786B2 (en) Delayed snoop for improved multi-process false sharing parallel thread performance
US7613882B1 (en) Fast invalidation for cache coherency in distributed shared memory system
EP3639146B1 (en) Low power multi-core coherency
US10613999B2 (en) Device, system and method to access a shared memory with field-programmable gate array circuitry without first storing data to computer node
CN103927277A (en) CPU (central processing unit) and GPU (graphic processing unit) on-chip cache sharing method and device
TW200534110A (en) A method for supporting improved burst transfers on a coherent bus
US9213656B2 (en) Flexible arbitration scheme for multi endpoint atomic accesses in multicore systems
US9361230B2 (en) Three channel cache-coherency socket protocol
US20220114098A1 (en) System, apparatus and methods for performing shared memory operations
KR20140098096A (en) Integrated circuits with cache-coherency
US9304925B2 (en) Distributed data return buffer for coherence system with speculative address support
CN116057514A (en) Scalable cache coherency protocol
WO2023121766A1 (en) System, apparatus and methods for direct data reads from memory
US9372796B2 (en) Optimum cache access scheme for multi endpoint atomic access in a multicore system
US9372795B2 (en) Apparatus and method for maintaining cache coherency, and multiprocessor apparatus using the method
KR100978082B1 Asynchronous remote procedure calling method in shared memory multiprocessor and computer-readable recording medium recorded asynchronous remote procedure calling program
US8627016B2 (en) Maintaining data coherence by using data domains
KR100978083B1 Procedure calling method in shared memory multiprocessor and computer-readable recording medium recorded procedure calling program
EP4453733A1 (en) System, apparatus and methods for direct data reads from memory
EP4453736A1 (en) System, apparatus and methods for performing shared memory operations
JP2023507293A (en) Offloading of the system direct memory access engine
CN118633075A (en) Method, device and system for processing request

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CAPARROS CABEZAS, VICTORIA;JONGERIUS, RIK;SCHMATZ, MARTIN L.;AND OTHERS;SIGNING DATES FROM 20121205 TO 20121207;REEL/FRAME:029424/0779

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20191110