EP2186093B1 - Shared memory - Google Patents
- Publication number
- EP2186093B1 (application EP08787215.6A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- memory
- shared memory
- input
- data
- address
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Not-in-force
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C7/00—Arrangements for writing information into, or reading information out from, a digital store
- G11C7/10—Input/output [I/O] data interface arrangements, e.g. I/O data control circuits, I/O data buffers
- G11C7/1075—Input/output [I/O] data interface arrangements, e.g. I/O data control circuits, I/O data buffers for multiport memories each having random access ports and serial ports, e.g. video RAM
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C5/00—Details of stores covered by group G11C11/00
- G11C5/02—Disposition of storage elements, e.g. in the form of a matrix array
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C7/00—Arrangements for writing information into, or reading information out from, a digital store
- G11C7/10—Input/output [I/O] data interface arrangements, e.g. I/O data control circuits, I/O data buffers
- G11C7/1006—Data managing, e.g. manipulating data before writing or reading out, data bus switches or control circuits therefor
- G11C7/1012—Data reordering during input/output, e.g. crossbars, layers of multiplexers, shifting or rotating
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C7/00—Arrangements for writing information into, or reading information out from, a digital store
- G11C7/10—Input/output [I/O] data interface arrangements, e.g. I/O data control circuits, I/O data buffers
- G11C7/1048—Data bus control circuits, e.g. precharging, presetting, equalising
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C2207/00—Indexing scheme relating to arrangements for writing information into, or reading information out from, a digital store
- G11C2207/10—Aspects relating to interfaces of memory device to external buses
- G11C2207/108—Wide data ports
Definitions
- The present invention relates to a memory shared, in particular, by several processors integrated in the same semiconductor-based circuit.
- Such a shared memory finds a particular application in the field of microelectronics.
- The processors can work in parallel on the same application. The processors must therefore be able to exchange large amounts of data between them. In order to reduce the data transfer time between the processors, it is advantageous for all the processors to have access to the same memory.
- a first processor can thus work on a first task and then update the first data in the memory according to the results it has obtained. It can then signal to a second processor that the first data in the memory is ready for further processing. The second processor can then use the first data of the memory in a second task. Meanwhile, the first processor can process other data.
- a known method is to divide the memory into several independent memory banks. In this way, as long as the processors are each working with a different memory bank, memory sharing does not slow down their processing.
- An existing solution uses multiplexers connected at their output to an input of the memory banks.
- the inputs of the multiplexers are connected to the outputs of the different processors to have access to the memory.
- An input data bus of each processor is connected to an output of a multiplexer receiving as input the output data buses of all memory banks.
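The crossbar arrangement described above can be sketched as a small illustrative model (the class name, method names and the 32-bit bus width are assumptions, not from the patent); it makes explicit why the routed connections grow as the product of banks and processors:

```python
# Illustrative model of the prior-art crossbar: multiplexers let every
# processor reach every bank, so the routed connections grow as
# n_banks * n_processors * bus_width.

class CrossbarSharedMemory:
    def __init__(self, n_banks, bank_words, n_processors):
        self.banks = [[0] * bank_words for _ in range(n_banks)]
        self.n_processors = n_processors

    def write(self, processor, bank, addr, value):
        # The memory input multiplexer of `bank` selects `processor`'s bus.
        self.banks[bank][addr] = value

    def read(self, processor, bank, addr):
        # The processor input multiplexer selects the output bus of `bank`.
        return self.banks[bank][addr]

    def wire_count(self, bus_width=32):
        # Every bank output bus is routed to every processor input
        # multiplexer: this is the connection count the text criticises.
        return len(self.banks) * self.n_processors * bus_width

mem = CrossbarSharedMemory(n_banks=3, bank_words=256, n_processors=3)
mem.write(processor=0, bank=1, addr=10, value=42)
assert mem.read(processor=2, bank=1, addr=10) == 42
assert mem.wire_count() == 288  # 3 banks * 3 processors * 32 bits
```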
- a problem encountered with this implementation is the large number of necessary connections.
- The connections linking the output data buses of the memory banks to the multiplexers located on the input data buses of the processors are numerous.
- The area of these connections may be greater than the area of the memories themselves, and the total area of the shared memory can thus be doubled.
- Such an increase in the area of a shared memory generates significant production costs.
- The large length of these connections induces in particular a large parasitic capacitance. This parasitic capacitance can slow down data transfer and cause significant energy consumption.
- a multi-port memory is a memory having a number k of inputs / outputs.
- the multi-port memory comprises for example several memory cells, each memory cell having a number k of switches.
- the k switches of each memory cell make it possible to connect each memory cell to a memory output data bus, among k memory output data buses.
- Such memory allows k processors to have access to said memory.
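The multi-port cell described above can be sketched as follows (an illustrative model; the names are assumptions, not the patent's terminology):

```python
# Illustrative model of a k-port memory cell: the cell carries k switches,
# each able to connect it to one of the k output data buses.

class MultiPortCell:
    def __init__(self, n_ports):
        self.value = 0
        # One switch per port; the word line decoders close at most one
        # switch per port at a time.
        self.switch_closed = [False] * n_ports

    def select(self, port):
        self.switch_closed[port] = True

    def deselect(self, port):
        self.switch_closed[port] = False

    def read(self, port):
        # Data reaches the port's data bus only through a closed switch.
        return self.value if self.switch_closed[port] else None

cell = MultiPortCell(n_ports=2)
cell.value = 7
cell.select(0)
assert cell.read(0) == 7      # port 0 sees the cell
assert cell.read(1) is None   # port 1's switch is still open
```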
- this other solution also requires a large number of connections. This implies that a component integrating such a memory is of relatively large size.
- The patent US5875470 describes in particular a memory providing multiple processors with simultaneous read-write access to the data it contains.
- An object of the invention is in particular to overcome the aforementioned drawbacks.
- the subject of the invention is a shared memory made on a first semiconductor-based integrated circuit as described in the claims.
- The main advantages of the invention include allowing the production, at low cost, of a shared memory having high data-access speed as well as reduced power consumption.
- Figure 1a schematically represents the architecture of a first memory 1 shared by a set of several elementary processors 2, according to the prior art.
- the first shared memory 1 may include a number b of memory banks.
- three memory banks are represented for the example: a first memory bank 15, a second memory bank 16 and a third memory bank 17.
- Each memory bank 15, 16, 17 comprises in particular an input E and an output S.
- Each input E of each memory bank 15, 16, 17 is connected to a multiplexer 6, 7, 8, called the memory input multiplexer 6, 7, 8.
- Each output S of each memory bank 15, 16, 17 is connected to a first memory data bus 12, 13, 14 specific to each memory bank 15, 16, 17.
- the shared memory 1 can be shared by a number n of elementary processors 2.
- three elementary processors are represented: a first elementary processor PE1, a second elementary processor PE2, and a third elementary processor PE3.
- Each elementary processor PE1, PE2, PE3 comprises a processor data and address bus 3, 4, 5 connected to its output SP.
- a data and address bus makes it possible to convey signals comprising data, addresses and also control signals.
- the control signals make it possible to control the memory.
- the control signals may in particular control: an activation of the memory, a reading and writing of data in memory and possibly a sequencing of previously named operations.
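The control signals just listed can be represented, purely for illustration, as a simple command record (the field names are assumptions, not the patent's terminology):

```python
# Purely illustrative encoding of the control signals named above
# (field names are assumptions).
from dataclasses import dataclass
from typing import Optional

@dataclass
class MemoryCommand:
    activate: bool              # activation of the memory
    write: bool                 # True = write access, False = read access
    address: int
    data: Optional[int] = None  # payload, used for writes only
    sequence: int = 0           # sequencing of successive operations

cmd = MemoryCommand(activate=True, write=True, address=0x10, data=99, sequence=1)
assert cmd.write and cmd.data == 99
```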
- Each elementary processor PE1, PE2, PE3 comprises an input EP connected to a processor input multiplexer 9, 10, 11.
- Each data and address bus of each processor 3, 4, 5 is connected to the memory input multiplexers 6, 7, 8.
- The processor data and address bus 3, 4, 5 can be used, for example, to write into memory one or more data items calculated by an elementary processor PE1, PE2, PE3.
- The processor data and address bus 3, 4, 5 also enables the elementary processor PE1, PE2, PE3 to request data stored in the shared memory 1.
- The memory input multiplexers 6, 7, 8 make it possible, for example, to select which data coming from the different elementary processors PE1, PE2, PE3 must arrive in each of the memory banks 15, 16, 17.
- Each output S of each memory bank 15, 16, 17 is connected to each processor input multiplexer 9, 10, 11 via the memory data buses 12, 13, 14.
- Each processor input multiplexer 9, 10, 11 is therefore connected to all the first memory data buses 12, 13, 14 of each memory bank 15, 16, 17.
- This type of shared memory architecture therefore requires a large number of connections occupying a large area of the shared memory.
- the length of the connections implies a large parasitic capacitance.
- Figure 1b represents, in a simplified way, a multi-port memory 100, according to the prior art, made on a semiconductor-based chip.
- The multi-port memory 100 comprises in particular several input/output ports to which several processors, not shown in figure 1b, can be connected.
- the multi-port memory 100 has several memory cells 101, 102, 103. On the figure 1b three memory cells 101, 102, 103 are shown. Each memory cell 101, 102, 103 is connected to an integer number k of switches. On the figure 1b each memory cell 101, 102, 103 is connected to two switches 1010, 1011, 1020, 1021, 1030, 1031. For example an input / output of a first memory cell 101 is connected to a first switch 1010, and a second switch 1011.
- the first switch 1010 is connected on the one hand to a first word line 107, and on the other hand to a first wire 105 of a first data bus 106.
- The first data bus 106 is for example connected to a pin of an input/output port of the multi-port memory 100.
- the second switch 1011 is connected on the one hand to a second word line 104, and on the other hand to a first wire 108 of a second data bus 109.
- a second memory cell 102 is connected to two switches: a third switch 1020 and a fourth switch 1021.
- the third switch 1020 is connected on the one hand to the first word line 107 and on the other hand a second wire 110 of the first data bus 106.
- the fourth switch 1021 is connected firstly to the second word line 104, and secondly to a second wire 111 of the second data bus 109.
- a third memory cell 103 is likewise connected to a fifth switch 1030 and to a sixth switch 1031.
- the fifth switch 1030 is connected on the one hand to a third word line 112 and on the other hand to the first wire 105 of the first data bus 106.
- the sixth switch 1031 is connected on the one hand to a fourth word line 113 and on the other hand to the first wire 108 of the second data bus 109.
- each memory cell 101, 102, 103 is connected to each data bus 109, 106.
- Each word line 104, 107, 112, 113 is connected to a word line decoder 114, 115.
- The first word line 107 is connected to a first word line decoder 114; the second word line 104 is, in turn, connected to a second word line decoder 115.
- the third word line 112 is connected to the second word line decoder 115 and the fourth word line 113 is connected to the second word line decoder 115.
- Each word line decoder 114, 115 therefore takes several word lines 104, 107, 112, 113 as input.
- An output of each word line decoder 114, 115 is connected to an address bus 116, 117.
- The first word line decoder 114 is connected to a first address bus 116 and the second word line decoder 115 is connected to a second address bus 117.
- the two address buses 116, 117 are connected to an input / output port of the multi-port memory 100.
- A multi-port memory, while it provides memory access to several processors in parallel, has a large area, in particular due to the large number of connections to be made.
- the figure 2 schematically represents an example of a structure of a shared memory 20 according to the invention.
- The memories 210, 220, 230 of the memory banks 21, 22, 23 may be single-port memories, for example of the SRAM (Static Random Access Memory) type. SRAM-type memories notably allow quick access to the data they store.
- The shared memory 20 is shared by a number m of processors 25, m being an integer, for example greater than one. In figure 2, eight of the m processors 25 are represented, for example.
- the m processors 25 may be elementary processors.
- The shared memory 20 is described hereinafter more particularly for a first memory bank 21 and a first processor 26, this same description applying to all of the memory banks 21, 22, 23 and to all the m processors 25.
- the structure of the shared memory 20 further includes a set of bidirectional parallel data buses 24, for example.
- the parallel data buses 24 can be located physically, above the memory banks 21, 22, 23.
- The memory banks 21, 22, 23 can, for example, be in the same plane. Placing the parallel data buses 24 physically above the memory banks 21, 22, 23 saves silicon area used to make the shared memory 20 according to the invention. The silicon saving advantageously reduces the production cost of a shared memory 20 according to the invention.
- The parallel data buses 24 may be, for example, thirty-two-bit, sixty-four-bit or even one-hundred-and-twenty-eight-bit buses.
- the parallel data buses 24 may be differential buses.
- Each of the parallel data buses 24 can receive data coming from one of the memory banks 21, 22, 23, or transmit data to these memory banks, via the buses 213, 223, 233. These transfers can take place simultaneously from or to the m processors 25, with as many simultaneous transfers as there are data buses 24 (assuming a number of memory banks greater than or equal to the number of data buses 24).
- the parallel data buses 24 may represent EP / SP data inputs / outputs of the shared memory 20 or be physically connected to EP / SP data inputs / outputs of the shared memory 20.
- The EP/SP data inputs/outputs of the shared memory 20 are part of the input/output interfaces of the shared memory 20 with electronic components external to the shared memory 20.
- each input / output interface of the shared memory 20 is connected with one of the m processors 25.
- Each bit of an input / output of data EP / SP of the shared memory 20 can be made on the same wire or on two separated wires, thus realizing a differential connection.
- Each of the parallel data buses 24 corresponds to one of the m processors 25.
- each of the processors 25 is connected to one of the parallel buses 24.
- The data entering and leaving each of the m processors 25 flow on the corresponding one of the parallel data buses 24.
- the first processor 26 is connected to a first parallel data bus 27 via a first input / output EP / SP of the shared memory 20.
- The first parallel data bus 27 is itself connected, through a first switch 215, to a first memory data bus 213.
- the first switch 215 is part of a first set 214 of switches of the first memory bank 21.
- the first memory data bus 213 is in particular connected to an I / O data input / output of a first memory 210 forming part of the first memory bank 21.
- the memory data buses 213, 223, 233 may be differential buses.
- the first switch 215 makes it possible in particular to select data coming in particular from the first memory bank 21 to be circulated on the first parallel data bus 27 in the reading phase of the shared memory 20 by the first processor 26.
- Each of the parallel data buses 24 is connected to an I/O data input/output of each of the memories 210, 220, 230 of the memory banks 21, 22, 23 through one switch of each set 214, 224, 234 of switches of each of the memory banks 21, 22, 23.
- Each of the memory banks 21, 22, 23 has a number m of switches enabling it to distribute data from its memory 210, 220, 230 over all the parallel data buses 24. This amounts to distributing the function of the multiplexers connected to the parallel data buses 24 over each of the memory banks 21, 22, 23.
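The structure just described can be sketched as an illustrative model (names are assumptions, not from the patent): each processor owns one parallel data bus, and each bank's m switches route its single-port memory onto any one of those buses, so simultaneous accesses succeed whenever they target distinct banks:

```python
# Illustrative model of the structure of figure 2: one parallel data bus
# per processor, m switches per bank routing the bank's single-port
# memory onto any one bus.

class BankedSharedMemory:
    def __init__(self, n_banks, bank_words, n_processors):
        # Single-port memories: one access per bank per cycle.
        self.banks = [[0] * bank_words for _ in range(n_banks)]
        self.n_processors = n_processors

    def access(self, requests):
        """requests: {processor: (bank, addr, value)}; value None = read.

        All requests are served in the same cycle; a bank's switch set
        can drive only one parallel bus at a time, so two processors must
        not target the same bank simultaneously.
        """
        banks_used = [bank for (bank, _, _) in requests.values()]
        if len(set(banks_used)) != len(banks_used):
            raise RuntimeError("bank conflict: accesses must be serialised")
        results = {}
        for proc, (bank, addr, value) in requests.items():
            if value is None:
                results[proc] = self.banks[bank][addr]  # read phase
            else:
                self.banks[bank][addr] = value          # write phase
        return results

mem = BankedSharedMemory(n_banks=3, bank_words=256, n_processors=8)
mem.access({0: (0, 5, 11), 1: (1, 5, 22)})            # two parallel writes
out = mem.access({0: (0, 5, None), 1: (1, 5, None)})  # two parallel reads
assert out == {0: 11, 1: 22}
```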
- The address and control buses 200 are connected to m address and control inputs SP' of the shared memory 20.
- The first address and control bus 204, forming part of the address and control buses 200, is connected to an address and control input SP' of the shared memory 20.
- The address and control input SP' of the shared memory 20 is connected to an address and control bus of the first processor 26.
- The address and control inputs SP' are part of the input/output interfaces EP/SP, SP' of the shared memory 20.
- Each address and control bus 200 is connected to each of the memory banks 21, 22, 23.
- Each address and control bus 200 is connected to an input of each multiplexer 211, 221, 231 of each memory bank 21, 22, 23.
- the address and control bus 204 is connected to an input of a first multiplexer 211 forming part of the first memory bank 21.
- The address and control bus 204 is also connected to an input of a second multiplexer 221 forming part of a second memory bank 22, and to an input of a third multiplexer 231 forming part of a third memory bank 23.
- Each multiplexer 211, 221, 231 is connected to a command and control input E of a memory 210, 220, 230.
- a command and control input E of the first memory 210 is connected to the output of the first multiplexer 211.
- the parallel address and control buses 200 make it possible to convey to the p memory banks 21, 22, 23 requests from each of the m processors 25.
- The connections between the memory banks 21, 22, 23 and the processors 25 are therefore made via the parallel data buses 24 and the parallel address and control buses 200.
- Each processor 25 is therefore connected to one of the parallel data buses 24 and to one of the parallel address and control buses 200, which allows data access over short connections.
- The connections between the memory banks 21, 22, 23 and the processors 25 thus generate little or no additional surface with respect to the surface occupied by the memory banks 21, 22, 23, since these connections can, for example, be made physically above the memory banks 21, 22, 23.
- the shared memory 20 as well as the m processors 25 may be integrated on a first semiconductor-based chip for example.
- the parallel data buses 24, the address and control buses 200 can be made in layers of metals located physically above the layers forming the p memory banks 21, 22, 23.
- the memory banks 21, 22, 23 are indeed made using, inter alia, several layers of different metals.
- the parallel data buses 24, the address and control buses 200 can be made in layers of metals superimposed on the layers making up the p memory banks 21, 22, 23.
- The realization of the memories 210, 220, 230 is also known to those skilled in the art.
- Connections can be made differentially. This amounts to doubling each connection and coupling each pair of connections through a differential amplifier, thus allowing a small voltage excursion of the signal carried by the connections to be used. Such an embodiment is described more precisely later.
- a similar device is commonly used on bit lines of memories for example.
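The differential technique can be illustrated with a minimal numerical sketch (voltages in arbitrary units; the VDD and swing values are assumptions): both lines of a pair are pre-charged to VDD, a driver discharges one line slightly, and a differential amplifier resolves the small difference into a full-swing logic level:

```python
# Minimal numerical sketch of a differential transfer (voltages in
# arbitrary units; VDD and the swing value are assumptions).

VDD = 1.0
SWING = 0.1  # a small excursion suffices; no full swing on the bus

def precharge():
    return [VDD, VDD]          # [true line, complementary line]

def drive(pair, bit):
    # To send a 1, discharge the complementary line; to send a 0,
    # discharge the true line.
    if bit:
        pair[1] -= SWING
    else:
        pair[0] -= SWING
    return pair

def sense(pair):
    # The differential amplifier only compares the two lines and
    # restores the full excursion at its output.
    return 1 if pair[0] > pair[1] else 0

assert sense(drive(precharge(), 1)) == 1
assert sense(drive(precharge(), 0)) == 0
```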
- the figure 3 represents a detailed example of an embodiment of an interconnection between a fourth memory bank 30, part of the shared memory 20 according to the invention, and a second processor 31.
- the figure also shows an interconnection between the fourth memory bank 30 and a third processor 32.
- The second processor 31 and the third processor 32 are, for example, part of the m processors 25 shown in figure 2.
- A read request from the processor 31, 32 passes notably through the address and control buses 200 represented in figure 2.
- The address and control buses 200 comprise in particular a wire 313, 323, represented in figure 3, carrying a sequencing signal of the processor 31, 32 for controlling the drivers of the second memory data bus 203 at the appropriate time.
- The wires 313, 323 are called sequencing signals 313, 323.
- the control module 311, 321 of the processor 31, 32 activates the sequencing signal 313, 323.
- The sequencing signal is intended to connect the output S of the second memory 300 to a second memory data bus 203.
- the sequencing signal 313, 323 is connected to a control component 33.
- the control component 33 controls the operation of one or more drivers 34.
- A driver 34 is an electronic driver component. The drivers 34 are responsible for connecting the output S of the memory 300 to the parallel data buses 24.
- The parallel data buses 24 are represented by two pairs of wires 314, 324, each pair of wires 314, 324 being connected to one of the processors 31, 32.
- the two pairs of wires 314, 324 are called data buses 314, 324.
- Each of the two pairs of wires 314, 324 represents, for example, thirty-two pairs of wires in the case of a thirty-two-bit data bus.
- the control component 33 can be realized by means of an "OR" logic gate.
- The control component 33 may be an element of a multiplexer such as the first multiplexer 211 shown in figure 2 for the first memory bank 21.
- the first driver or drivers 34 are, for example, three-state drivers.
- the function of the first driver 34 can be performed by a first differential reading amplifier of the second memory 300 for example.
- A first differential amplifier having one or two stages may be used, depending on the capacitive load of the parallel data buses 24, shown in figure 2.
- a second memory data bus 203 is connected to the output of the driver 34.
- The control module 311, 321, or an arbiter, can activate a selection command of the data bus 314, 324 of a processor 31, 32 having previously requested data.
- The selection command is performed by means of switches 315, 325, such as the switches of the sets 214, 224, 234 shown in figure 2.
- each data bus 314, 324 being a differential data bus 314, 324.
- Each selection command uses in particular two switching elements, for example transistors, per switch 315, 325, i.e. one switching element per differential bus line.
- the selection command makes it possible to connect the output of the driver 34 to a data bus 314, 324 itself connected to a processor 31, 32.
- A pre-charge module 312, 322 of the data bus 314, 324 then cuts off the pre-charge.
- The pre-charge module 312, 322 has previously charged the parasitic capacitance of the lines of the data bus 314, 324 to a voltage VDD, or supply voltage. The voltage VDD is then the same on each of the lines of the data bus 314, 324.
- When the output signal of the memory is sufficiently strong, the driver or drivers 34 are activated. The drivers 34 then discharge one of the complementary lines of the second memory data bus 203. The voltage difference is transmitted on the data bus 314, 324. The voltage difference is then detected by a second differential amplifier 316, 326, connected on the one hand to an input data bus Din of the processor 31, 32 and on the other hand to the data bus 314, 324. The second differential amplifier 316, 326 can act as a driver for the data bus Din of the processor 31, 32. As soon as the output of the second differential amplifier 316, 326 has switched, the processor 31, 32 can record the data, the signal having recovered its full excursion at the output of the second differential amplifier 316, 326. The data bus 314, 324 can then be pre-charged again by the pre-charge module 312, 322. The pre-charge module 312, 322 can be controlled by the control module 311, 321.
- The writing of a data item into memory by the processor 31, 32 can be done in the same way.
- the data conveyed by the data bus 314, 324 is in this case controlled by the processor 31, 32.
- data to be stored in memory is encoded on a signal.
- the signal leaves the processor 31, 32 by an output Dout of the processor 31, 32 and is then transmitted in differential mode by a third differential amplifier 317, 327.
- The pre-charge of the data bus 314, 324 being cut off, one of the complementary lines of the data bus 314, 324 is charged by the output of the third differential amplifier 317, 327.
- the voltage difference of the data bus 314, 324 propagates to the second memory data bus 203.
- The voltage difference is then detected by the first differential amplifier 34 of the fourth memory bank 30. Once the signal detected by the first differential amplifier 34 has switched at the output of the first differential amplifier 34, the data contained in the signal is recorded in the fourth memory bank 30.
- Realising the shared memory using differential links increases efficiency. Indeed, a full signal excursion is not needed to propagate data from one bus to another, for example. This saves both time and energy during data transfers between the shared memory 20 and the processors 31, 32.
- the figure 4 represents an example of different phases of the reading of a data by a processor 31, 32, for example, on the fourth memory bank 30 of the shared memory 20 according to the invention.
- The different phases are presented in the form of several timing diagrams 40, 43, 45, 48, 49, 403, 404 showing the variations of the different signals during each phase of the read.
- A first timing diagram 40 represents a signal transiting on the first address and control bus 200 represented in figure 2.
- a data read request 41 modifies the signal of the first address and control bus 200.
- The processor 31, 32 sends a read request 41 to the fourth memory bank 30; this triggers the reading of a value in the fourth memory bank 30 and thus the appearance of a first signal 44 on a bit line of the fourth memory bank 30.
- the appearance of the first signal 44 on the bit line of the fourth memory bank 30 is represented on the second timing diagram 45.
- The pre-charge of the data bus 314, 324 is cut off.
- The cut-off 42 of the pre-charge, performed by the pre-charge module 312, 322, is represented on a third timing diagram 43.
- a first command 47 of the drivers 34 connected to the second memory data bus 203 is activated.
- the first command 47 of the drivers 34 is represented on a fourth timing diagram 48.
- A fifth timing diagram 49 represents the appearance of a second signal 400 on the lines of the data bus 314, 324.
- A second command 402 is sent by the control module 311, 321 to the driver 316, 326 of the data bus Din of the processor 31, 32.
- The second command 402 is shown on a sixth timing diagram 403.
- the execution of the second command 402 by the driver 316, 326 causes a positioning 406 of the Din input of the processor 31, 32 to the value read in the fourth memory bank 30.
- The positioning 406 of the Din input of the processor 31, 32 is represented on a seventh timing diagram 404 illustrating two possible state changes: a first change of state from zero to one and a second change of state from one to zero.
- The control module 311, 321 then sends a charging command to the pre-charge module 312, 322 in order to charge the lines of the data bus 314, 324 as well as the lines of the second memory data bus 203.
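The read phases above can be summarised, purely for illustration, as an ordered sequence of steps mirroring the timing diagrams of figure 4:

```python
# Illustrative summary of the read phases of figure 4 as an ordered list
# of steps; the wording paraphrases the timing diagrams described above.

def read_sequence():
    return [
        "read request 41 on the address and control bus (timing 40)",
        "signal 44 develops on a bit line of the bank (timing 45)",
        "pre-charge of the data bus cut off (timing 43)",
        "drivers 34 of the memory data bus enabled (timing 48)",
        "signal 400 appears on the processor data bus (timing 49)",
        "command 402 sent to the Din driver (timing 403)",
        "Din input set to the value read (timing 404)",
        "buses pre-charged again for the next access",
    ]

steps = read_sequence()
assert len(steps) == 8
assert steps[0].startswith("read request")
```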
- the figure 5 represents another mode of use of the shared memory 20 according to the invention.
- The invention can, in fact, apply to a shared memory 20 forming a first circuit in its own right.
- The first circuit including the shared memory 20 can be realized on a second semiconductor-based chip.
- Figure 5 presents a schematic representation of such an embodiment.
- The m processors 25 shown in figure 2 are replaced, in figure 5, by a series of m input/output ports 50.
- The m input/output ports 50 may be connected to one or more circuits external to the first circuit comprising the shared memory 20. This connection can be made conventionally via transfer ports connected to pins of the circuit comprising the shared memory 20.
- an input / output port is an element for connecting a chip to external electronic components.
- a port can therefore be a system implementing an exchange protocol and having input / output pins or for example a module capable of transforming signals entering or leaving the chip so that they can be transmitted for example by an inductive or capacitive coupling.
- the m input / output ports 50 are connected to the shared memory 20 via the data inputs / outputs EP / SP of the shared memory 20 and via the address and control inputs SP ' of the shared memory 20.
- the shared memory 20 shown on the figure 5 is the same as the one shown on the figure 2 .
- the input / output ports 50 may, in a first example of use of the shared memory 20 according to the invention, be connected to a processing chip.
- The processing chip, comprising for example m processors such as the processors 25 represented in figure 2, can be connected to the input/output ports 50 by a three-dimensional interconnection.
- the second chip having the shared memory 20 and the processing chip can be physically glued one above the other.
- the connections between the two chips can be made either via vias through the chip above, for example the second memory chip, or by inductive links, or by capacitive links.
- Vias are contacts between two metal levels; vias can in particular cross the entire chip.
- The inductive links can be made from small coils, for example.
- The capacitive connections can be made by metal surfaces facing each other. These connections can in particular be used to connect the data and address buses of the processors of the processing chip to the parallel data buses 24 and the parallel address and control buses 200 of the shared memory 20.
- Such a memory chip is particularly relevant with the types of interconnection as described above.
- Such links notably make it possible to connect the shared memory 20 with the processing chip through multiple links.
- The interconnection links between the two chips make it possible to ensure a bandwidth much greater than that of current memories. Indeed, current memories are limited in particular by the number of pins of the package integrating them and have only one input/output port.
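As a rough illustration of this bandwidth argument (every figure below is an assumption for the sake of the arithmetic, not a value from the patent), the attainable bandwidth scales with the number of parallel links, which pin-limited packages cap at a single port:

```python
# Rough illustration of the bandwidth argument; all figures are
# assumptions chosen only to make the scaling visible.

def bandwidth_bits_per_s(links, bits_per_link, clock_hz):
    # Aggregate bandwidth scales with the number of parallel links.
    return links * bits_per_link * clock_hz

# A packaged memory limited to a single 32-bit port:
pin_limited = bandwidth_bits_per_s(links=1, bits_per_link=32, clock_hz=200e6)
# A 3D-stacked memory reached by one link per processor (m = 8):
via_limited = bandwidth_bits_per_s(links=8, bits_per_link=32, clock_hz=200e6)
assert via_limited == 8 * pin_limited
```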
- Figure 6 schematically represents a second example of use of the shared memory 20 according to the invention.
- Several memory chips 60, each having a shared memory 20 according to the invention, can be physically stacked one above the other.
- The data buses 24 and the address and control buses 200 shown in figure 2 are connected to data and address buses linking the shared memories 20 of the different memory chips 60.
- The data and address buses linking the shared memories 20 can traverse the memory chips 60 transversely through vias 61, capacitive links or inductive links. Connections with processors 63, located for example on another chip 64 physically placed above the memory chips 60, are made, for example, through the vias 61.
- Such an embodiment has the advantage of increasing the amount of memory available to the processors without significantly increasing the interconnection surfaces, in particular between the chips and the processors.
- The shared memory 20 advantageously allows the m processors 25 to access all the memory banks 21, 22, 23 of the shared memory 20. All m processors 25 can in particular access the shared memory 20 simultaneously, each of the m processors 25 then accessing a different memory bank 21, 22, 23.
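This simultaneous-access property can be illustrated with a small, hypothetical Python sketch (the function and variable names are illustrative, not from the patent): m processors each issue one access per cycle, and all accesses proceed in parallel as long as each targets a different single-port bank.

```python
# Illustrative model only: one access per processor per cycle, and each
# single-port bank can serve at most one access per cycle, mirroring how
# all m processors 25 can simultaneously reach distinct banks 21, 22, 23.

def schedule_cycle(requests, p_banks):
    """requests: list of (processor_id, bank_id). Returns (granted, stalled)."""
    granted, stalled, busy = [], [], set()
    for proc, bank in requests:
        if bank < 0 or bank >= p_banks:
            raise ValueError("bank out of range")
        if bank in busy:          # bank already claimed this cycle
            stalled.append((proc, bank))
        else:                     # single-port bank: one access per cycle
            busy.add(bank)
            granted.append((proc, bank))
    return granted, stalled

# Three processors, three distinct banks: no stalls.
g, s = schedule_cycle([(0, 0), (1, 1), (2, 2)], p_banks=3)
assert len(g) == 3 and not s
# Two processors hit bank 0: one of them must wait.
g, s = schedule_cycle([(0, 0), (1, 0)], p_banks=3)
assert len(g) == 1 and len(s) == 1
```

The sketch only captures the scheduling constraint, not the electrical behaviour of the banks.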
- The shared memory 20 considerably reduces the interconnection surface between the memories and the processors of a multiprocessor system, by making it possible to route this interconnection, for example, over the memories.
- The use of single-port memories 210, 220, 230 advantageously optimizes the surface area of the required connections compared with a solution using multi-port memories.
- The shared memory according to the invention also provides a gain in data transfer speed and in energy consumption. Indeed, reducing the length of the connections reduces their parasitic capacitance and therefore the switching time of the signals as well as the energy consumed. Switching time and power consumption are further reduced by the use of a differential structure in which the voltage swing is reduced.
- An efficient design, in terms of the silicon layers used in particular for making the connections in a chip comprising a shared memory 20 according to the invention, advantageously reduces the production costs of such a shared memory.
- The shared memory 20 has a MIMD (Multiple Instruction, Multiple Data) mode of operation, allowing different programs executing on the m processors 25 to simultaneously access different data contained in the shared memory 20.
Description
The present invention relates to a memory shared in particular by several processors integrated in the same semiconductor-based circuit. The shared memory finds its application in particular in the field of microelectronics.
In a multiprocessor system, the processors can work in parallel on the same application. It is therefore necessary for the processors to be able to exchange a large amount of data with one another. In order to reduce the data transfer time between the processors, it is advantageous for all the processors to have access to the same memory. A first processor can thus work on a first task and then update first data in the memory according to the results it has obtained. It can then signal to a second processor that the first data in the memory are ready for further processing. The second processor can then use the first data of the memory in a second task. Meanwhile, the first processor can process other data.
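The exchange described above can be sketched, purely as an illustration, with two software threads standing in for the two processors and a flag standing in for the ready signal; all names below are hypothetical and not from the patent.

```python
import threading

# Illustrative sketch: a first "processor" (thread) updates data in a
# shared buffer, then signals a second "processor" that the data is
# ready for further processing, as in the scenario described above.

shared_memory = {}
ready = threading.Event()

def first_processor():
    shared_memory["first_data"] = [x * x for x in range(4)]  # first task
    ready.set()  # signal: first data are ready

def second_processor(result):
    ready.wait()  # wait for the signal from the first processor
    result.append(sum(shared_memory["first_data"]))  # second task

out = []
t1 = threading.Thread(target=first_processor)
t2 = threading.Thread(target=second_processor, args=(out,))
t2.start(); t1.start()
t1.join(); t2.join()
assert out == [14]  # 0 + 1 + 4 + 9
```

The point is only the ordering: the second task starts from the updated data once the signal is raised.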
So that the sharing of the memory by several processors does not slow down the processors' processing, a known method is to divide the memory into several independent memory banks. In this way, as long as the processors each work with a different memory bank, memory sharing does not slow down their processing.
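One common way to divide an address space over independent banks is low-order interleaving, so that consecutive addresses land in different banks. The following sketch is a hypothetical illustration of such a mapping, not a mapping taken from the patent.

```python
# Illustrative low-order interleaving over b independent banks: the low
# address bits select the bank, the remaining bits select the word
# inside that bank.

def bank_of(address, b_banks):
    return address % b_banks          # which independent bank holds it

def offset_in_bank(address, b_banks):
    return address // b_banks         # word index inside that bank

b = 4
assert [bank_of(a, b) for a in range(8)] == [0, 1, 2, 3, 0, 1, 2, 3]
assert offset_in_bank(6, b) == 1
```

With such a mapping, processors streaming through disjoint address regions tend to spread their accesses over different banks.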
It is appropriate for such a shared memory to use a design that allows both a reduction in production costs and an optimization of data transfer speeds. Various shared-memory designs have been proposed, for example in
An existing solution uses multiplexers whose outputs are connected to an input of the memory banks. The inputs of the multiplexers are connected to the outputs of the various processors that must have access to the memory. An input data bus of each processor is connected to the output of a multiplexer receiving as inputs the output data buses of all the memory banks.
A problem encountered with this implementation is the large number of connections required. In particular, the connections linking the output data buses of the memory banks to the multiplexers located on the input data buses of the processors are numerous. The surface of these connections can exceed the surface of the memories, and the total surface of the shared memory can thus be doubled. Such an increase in the surface of a shared memory generates significant production costs. Moreover, the considerable length of these connections induces in particular a large parasitic capacitance. This parasitic capacitance can slow down data transfer and cause significant energy consumption.
Another existing solution is to connect several processors to the same multi-port memory. A multi-port memory is a memory having a number k of inputs/outputs. The multi-port memory comprises, for example, several memory cells, each memory cell having a number k of switches. The k switches of each memory cell make it possible to connect the cell to one of k memory output data buses. Such a memory allows k processors to have access to it. However, this other solution also requires a large number of connections, which implies that a component integrating such a memory is relatively large.
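The growth in switch count with the number of ports k can be made concrete with a trivial, illustrative calculation (the helper name is hypothetical): with k switches per cell, the switch count grows linearly in k for every cell of the array.

```python
# Illustrative count only: a k-port memory as described above needs k
# switches per memory cell, one per output data bus.

def multiport_switches(n_cells, k_ports):
    return n_cells * k_ports

assert multiport_switches(1024, 2) == 2048   # dual-port: twice the switches
assert multiport_switches(1024, 1) == 1024   # single-port baseline
```

This linear growth per cell, on top of the extra word lines and data buses, is why a multi-port array is much larger than a single-port one of the same capacity.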
The patent
An object of the invention is in particular to overcome the aforementioned drawbacks. To this end, the subject of the invention is a shared memory made on a first semiconductor-based integrated circuit as described in the claims.
The main advantages of the invention are in particular that it allows the production, at low cost, of a shared memory having high data access speeds and reduced energy consumption.
Other features and advantages of the invention will become apparent from the following description, given by way of non-limiting illustration, with reference to the appended drawings, which represent:
- figure 1a: an example of a memory architecture shared by several processors according to the prior art;
- figure 1b: an example architecture of a multi-port memory according to the prior art;
- figure 2: a first example of a memory architecture shared by several processors according to the invention;
- figure 3: a detailed example of an interconnection according to the invention between a memory bank and a processor;
- figure 4: a timing diagram of a data read operation on a bank of the shared memory according to the invention;
- figure 5: a second example architecture of a shared memory according to the invention;
- figure 6: a third, schematic, example architecture of a shared memory according to the invention.
Figure 1a schematically represents an example of a memory architecture shared by several processors according to the prior art. For example, the first shared memory 1 can comprise a number b of memory banks. The shared memory 1 can be shared by a number n of elementary processors 2. Each elementary processor PE1, PE2, PE3 comprises an input EP connected to a processor input multiplexer 9, 10, 11.
Each data and address bus of each processor 3, 4, 5 is connected to the memory input multiplexers 6, 7, 8. The processor data and address bus 3, 4, 5 makes it possible, for example, to write into memory one or more data items computed by an elementary processor PE1, PE2, PE3. The processor data and address bus 3, 4, 5 also allows the elementary processor PE1, PE2, PE3 to issue requests for data stored in the shared memory 1. The memory input multiplexers 6, 7, 8 make it possible, for example, to select which data coming from the various elementary processors PE1, PE2, PE3 must arrive in each of the memory banks 15, 16, 17.

Each output S of each memory bank 15, 16, 17 is connected to each processor input multiplexer 9, 10, 11 via the memory data buses 12, 13, 14. Each processor input multiplexer 9, 10, 11 is therefore connected to all the first memory data buses 12, 13, 14 of each memory bank 15, 16, 17.

This type of shared-memory architecture therefore requires a large number of connections occupying a large area of the shared memory. In addition, the length of the connections implies a large parasitic capacitance.
Figure 1b represents an example architecture of a multi-port memory 100 according to the prior art. A first memory cell 101 is connected to two switches: a first switch 1010 and a second switch 1011.
The first switch 1010 is connected on the one hand to a first word line 107, and on the other hand to a first wire 105 of a first data bus 106. The first data bus 106 is, for example, connected to a pin of an input/output port of the multi-port memory 100.

Similarly, the second switch 1011 is connected on the one hand to a second word line 104, and on the other hand to a first wire 108 of a second data bus 109.

Like the first memory cell 101, a second memory cell 102 is connected to two switches: a third switch 1020 and a fourth switch 1021. The third switch 1020 is connected on the one hand to the first word line 107 and on the other hand to a second wire 110 of the first data bus 106. The fourth switch 1021 is connected on the one hand to the second word line 104, and on the other hand to a second wire 111 of the second data bus 109.

A third memory cell 103 is likewise connected to a fifth switch 1030 and a sixth switch 1031. The fifth switch 1030 is connected on the one hand to a third word line 112 and on the other hand to the first wire 105 of the first data bus 106. The sixth switch 1031 is connected on the one hand to a fourth word line 113 and on the other hand to the first wire 108 of the second data bus 109.

In general, each memory cell 101, 102, 103 is connected to each data bus 106, 109.

Each word line 104, 107, 112, 113 is connected to a word-line decoder 114, 115. The first word line 107 is connected to a first word-line decoder 114, while the second word line 104 is connected to a second word-line decoder 115. Likewise, the third word line 112 is connected to the second word-line decoder 115 and the fourth word line 113 is connected to the first word-line decoder 114.

Each word-line decoder 114, 115 therefore takes several word lines 104, 107, 112, 113 as input. An output of each word-line decoder 114, 115 is connected to an address bus 116, 117. For example, the first word-line decoder 114 is connected to a first address bus 116 and the second word-line decoder 115 is connected to a second address bus 117. The two address buses 116, 117 are connected to an input/output port of the multi-port memory 100.
Such a multi-port memory 100 comprises a large number of connections:
- switches 1010, 1011, 1020, 1021, 1030, 1031 connected to each memory cell;
- word lines;
- data buses.
A multi-port memory, while it provides memory access to several processors in parallel, has a large area, due in particular to the large number of connections to be made.
Figure 2 represents a first example of a memory architecture shared by several processors according to the invention.

The structure of the shared memory 20 according to the invention comprises, for example, a number p of memory banks 21, 22, 23, p being an integer, for example greater than two. Each memory bank 21, 22, 23 comprises:
- a memory 210, 220, 230;
- a multiplexer 211, 221, 231;
- a memory data bus 213, 223, 233;
- a switch block 214, 224, 234.
The memories 210, 220, 230 can be single-port memories, for example of the SRAM (Static Random Access Memory) type. SRAM-type memories allow in particular fast access to the data they store.

The shared memory 20 is shared by a number m of processors 25, m being an integer, for example greater than one.

The shared memory 20 is described below more particularly with reference to a first memory bank 21 and a first processor 26; the same description can apply to all of the p memory banks 21, 22, 23 and to all of the m processors 25.
The structure of the shared memory 20 further comprises a set of m bidirectional parallel data buses 24, for example. In one embodiment of a shared memory 20 according to the invention, the parallel data buses 24 can be located physically above the p memory banks 21, 22, 23. The p memory banks 21, 22, 23 can, for example, lie in the same plane. Placing the parallel data buses 24 physically above the p memory banks 21, 22, 23 saves the silicon area used to produce the shared memory 20 according to the invention. The silicon saving advantageously improves the production cost of a shared memory 20 according to the invention. The parallel data buses 24 can be, for example, thirty-two-bit, sixty-four-bit or one-hundred-and-twenty-eight-bit buses. The parallel data buses 24 can be differential buses.

Each of the parallel data buses 24 can receive a data item coming from one of the p memory banks 21, 22, 23, or transmit a data item to these memory banks, via the buses 213, 223, 233. These transfers can take place simultaneously from or to the m processors 25, with as many simultaneous transfers as there are data buses 24 (assuming a number of memory banks greater than or equal to the number of data buses 24).

The parallel data buses 24 can constitute data inputs/outputs EP/SP of the shared memory 20 or be physically connected to data inputs/outputs EP/SP of the shared memory 20.
The data inputs/outputs EP/SP of the shared memory 20 form part of the input/output interfaces of the shared memory 20 with electronic components external to the shared memory 20.

Each of the m parallel data buses 24 corresponds to one of the m processors 25.
Each of the m parallel data buses 24 is connected to a data input/output I/O of each of the memories 210, 220, 230 of the p memory banks 21, 22, 23 through one switch of each switch block 214, 224, 234 of each of the p memory banks 21, 22, 23. Each of the p memory banks 21, 22, 23 thus has a number m of switches enabling it to distribute data from the memories 210, 220, 230 over all m parallel data buses 24. This amounts to distributing the function of multiplexers connected to the m parallel data buses 24 over each of the p memory banks 21, 22, 23.
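This distributed multiplexing can be modelled, as a hypothetical sketch only, by letting each closed switch (bank, bus) drive one of the m data buses, with the constraint that a bus is driven by at most one bank at a time; the names below are illustrative, not from the patent.

```python
# Illustrative model: each of the p banks owns m switches, one per
# parallel data bus, so the multiplexing function is spread over the
# banks instead of centralised in front of the processors.

def drive_buses(connections, p_banks, m_buses, bank_values):
    """connections: set of (bank, bus) switch closures. Returns bus values."""
    buses = [None] * m_buses
    for bank, bus in connections:
        assert 0 <= bank < p_banks and 0 <= bus < m_buses
        assert buses[bus] is None, "two banks must not drive one bus"
        buses[bus] = bank_values[bank]
    return buses

# Bank 2 serves processor 0's bus, bank 0 serves processor 1's bus.
vals = drive_buses({(0, 1), (2, 0)}, p_banks=3, m_buses=2,
                   bank_values=["d0", "d1", "d2"])
assert vals == ["d2", "d0"]
```

Closing one switch per bus is the software analogue of the tri-state drivers connecting a bank output to its selected data bus.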
The shared memory 20 also comprises a set of m parallel address and control buses 200, gathering the address and control buses of each of the m processors 25. One of the address and control buses 200, such as a first address and control bus 204, makes it possible in particular to transmit address signals but also control signals. The control signals make it possible in particular to command the memories 210, 220, 230. A command of a memory 210, 220, 230 can be a command for:
- activating the memory 210, 220, 230;
- read and write operations of the memory 210, 220, 230;
- and, optionally, sequencing the various operations performed by the memory 210, 220, 230.
The address and control buses 200 are connected to m address and control inputs SP' of the shared memory 20. For example, the first address and control bus 204, forming part of the address and control buses 200, is connected to an address and control input SP' of the shared memory 20. The address and control input SP' of the shared memory 20 is connected to an address and control bus of the first processor 26. The address and control inputs SP' form part of the input/output interfaces EP/SP, SP' of the shared memory 20.

Each address and control bus 200 is connected to each of the p memory banks 21, 22, 23. In particular, each address and control bus 200 is connected to an input of each multiplexer 211, 221, 231 of each of the p memory banks 21, 22, 23. The address and control bus 204 is connected to an input of a first multiplexer 211 forming part of the first memory bank 21. The address and control bus 204 is also connected to an input of a second multiplexer 221 forming part of a second memory bank 22, as well as to an input of a third multiplexer 231 forming part of a third memory bank 23.

Each multiplexer 211, 221, 231 is connected to a command and control input E of a memory 210, 220, 230. For example, a command and control input E of the first memory 210 is connected to the output of the first multiplexer 211.

The parallel address and control buses 200 make it possible to convey requests from each of the m processors 25 to the p memory banks 21, 22, 23.

The connections between the p memory banks 21, 22, 23 and the m processors 25 are therefore made via the m parallel data buses 24 and the parallel address and control buses 200. Each processor 25 is thus connected to one of the m parallel data buses 24 and to one of the m parallel address and control buses 200, which allows access to short data segments. The connections between the p memory banks 21, 22, 23 and the m processors 25 therefore generate little or no additional surface compared with the surface occupied by the p memory banks 21, 22, 23, these connections possibly being made physically above the p memory banks 21, 22, 23.

The shared memory 20 as well as the m processors 25 can be integrated on a first semiconductor-based chip, for example. The parallel data buses 24 and the address and control buses 200 can be made in metal layers located physically above the layers forming the p memory banks 21, 22, 23; the p memory banks 21, 22, 23 are indeed made using, among other things, several different metal layers. In other words, the parallel data buses 24 and the address and control buses 200 can be made in metal layers superimposed on the layers forming the p memory banks 21, 22, 23. The production of the p memories 210, 220, 230 is otherwise known to those skilled in the art.

In addition, in order to reduce the impact of the connection length on the data transfer speed and on the power consumption of the chip comprising these memories, the connections can be made differentially. This amounts to doubling each connection and joining each pair of connections with a differential amplifier, thus making it possible to use a small voltage swing for the signal carried by the connections. Such an embodiment is described in more detail below. A similar device is commonly used on the bit lines of memories, for example.
Figure 3 shows a detailed example of an interconnection according to the invention between a memory bank and a processor. When the processor 31, 32 wants to read the content of a word stored in a second memory 300 of the fourth memory bank 30, a control module 311, 321 associated with the processor 31, 32 sends a read request to the fourth memory bank 30. It is possible, for example, to implement:
- either one control module 311, 321 per processor 31, 32, as is the case in figure 3,
- or one control module per memory bank, in another embodiment.
The read request passes in particular through the address and control buses 200.
An arbiter, not shown in the figure, can for example be implemented as:
- a first embodiment, known as a rotating-priority arbiter, which can consider that each processor 31, 32 has priority in turn;
- a second embodiment, known as a fixed-priority arbiter, which can assign a fixed priority to each processor 31, 32, for example.
When a read request is sent to the second memory 300, the latter processes the request and returns the collected data on a data input/output I/O of the second memory 300. Once the data read has been performed by the memory 300, the control module 311, 321 of the processor 31, 32 activates the sequencing signal 313, 323. The sequencing signal is intended to connect the output S of the second memory 300 to a second memory data bus 203. The sequencing signal 313, 323 is connected to a command component 33. The command component 33 controls the operation of one or more drivers 34. A driver 34 is a driving electronic component. The drivers 34 are responsible for connecting the output S of the memory 300 to the parallel data buses 24. The parallel data buses 24 are represented by two pairs of wires 314, 324, each pair of wires 314, 324 being connected to one of the processors 31, 32. By extension, in the following, the two pairs of wires 314, 324 are called data buses 314, 324. Each of the two pairs of wires 314, 324 represents, for example, thirty-two pairs of wires in the case of a thirty-two-bit data bus.
The command component 33 can be made by means of an "OR" logic gate. The command component 33 can be an element of a multiplexer such as the first multiplexer 211.

The first driver(s) 34 are, for example, three-state drivers. The function of the first driver 34 can be performed by a first differential read amplifier of the second memory 300, for example. A first differential amplifier comprising one or two stages can be used, depending on the capacitive load of the parallel data buses 24.
The writing of a data item into memory by the processor 31, 32 can take place in the same way. The data conveyed by the data bus 314, 324 are in this case controlled by the processor 31, 32. For example, a data item to be stored in memory is encoded on a signal. The signal leaves the processor 31, 32 through an output Dout of the processor 31, 32 and is then transmitted in differential mode by a third differential amplifier 317, 327. The precharge of the data bus 314, 324 being cut off, one of the complementary lines of the data bus 314, 324 is charged by the output of the third differential amplifier 317, 327. The voltage difference on the data bus 314, 324 propagates to the second memory data bus 203. The voltage difference is then detected by the first differential amplifier 34 of the fourth memory bank 30. Once the signal detected by the first differential amplifier 34 has switched an output of the first differential amplifier 34, the data item contained in the signal is stored in the fourth memory bank 30.

Producing the shared memory 20 using differential means enables increased efficiency. Indeed, it is not necessary to obtain a full signal swing to propagate a data item from one bus to another, for example. This saves time as well as energy during the transfer of data from the shared memory 20 to the processors 31, 32.
Figure 4 represents a timing diagram of a data read operation on a bank of the shared memory according to the invention.
Un premier chronogramme 40 représente un signal transitant sur le premier bus d'adresses et de contrôle 200 représenté sur la
Lorsque le processeur 31, 32 envoie une requête de lecture 41 au quatrième banc mémoire 30, ceci provoque le déclenchement de lecture d'une valeur sur le quatrième banc mémoire 30 et donc l'apparition d'un premier signal 44 sur une ligne de bit du quatrième banc mémoire 30. L'apparition du premier signal 44 sur la ligne de bit du quatrième banc mémoire 30 est représentée sur le deuxième chronogramme 45.When the
Un peu après l'envoi de la requête de lecture 41, la pré-charge du bus de données 314, 324 est coupée. La coupure 42 de la pré-charge, effectuée par le module de pré-charge 312, 322, est représentée sur un troisième chronogramme 43.A little after sending the read
Lorsque le premier signal 44 sur les lignes de bit du quatrième banc mémoire 30 a atteint un premier niveau 46 suffisant pour permettre le bon fonctionnement du premier amplificateur différentiel 34 alors une première commande 47 des drivers 34 reliés au deuxième bus de données mémoire 203 est activée. La première commande 47 des drivers 34 est représentée sur un quatrième chronogramme 48.When the
Once the first command 47 has been executed by the drivers 34, the lines of the data bus 314, 324 of the processor 31, 32 that are to carry the information read from the fourth memory bank 30 begin to discharge. A fifth timing diagram 49 shows the appearance of a second signal 400 on the lines of the data bus 314, 324.
When the first command 47 has been executed and the second signal 400 reaches a second level 401 sufficient for the operation of the second differential amplifier 316, 326, a second command 402 is sent by the control module 311, 321 to the driver 316, 326 of the data bus Din of the processor 31, 32. The second command 402 is shown on a sixth timing diagram 403.
The execution of the second command 402 by the driver 316, 326 sets 406 the input Din of the processor 31, 32 to the value read from the fourth memory bank 30. The setting 406 of the input Din of the processor 31, 32 is shown on a seventh timing diagram 404 illustrating two possible state changes: a first state change from zero to one and a second state change from one to zero.
Once the read value has been recovered by the processor 31, 32, the control module 311, 321 sends a charge-up command to the precharge module 312, 322 in order to perform a charging 405 of the lines of the data buses 314, 324 as well as of the lines of the second memory data bus 203.
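The read handshake described over the preceding paragraphs is essentially an ordered sequence of events. The following sketch is a hypothetical model (the class name, event labels, and their grouping are not from the patent) that replays that ordering and checks the dependencies stated in the text:

```python
# Illustrative replay of the differential read handshake described above.
# Event names are hypothetical labels for the steps in the text.

class ReadSequence:
    """Replays the ordering of the read handshake, one event per step."""

    def __init__(self):
        self.events = []

    def run(self):
        self.events.append("read_request")     # request 41 sent by the processor
        self.events.append("bitline_signal")   # signal 44 appears on the bit lines
        self.events.append("precharge_off")    # cut-off 42 by the precharge module
        self.events.append("drive_memory_bus") # command 47: drivers charge bus 203
        self.events.append("bus_discharge")    # signal 400 on data bus 314, 324
        self.events.append("latch_din")        # command 402 sets the Din input
        self.events.append("recharge")         # charging 405 restores the buses
        return self.events

seq = ReadSequence().run()
# The drivers are only enabled after the precharge is off and the bit-line
# level 46 is reached; Din is only latched once level 401 is reached.
assert seq.index("precharge_off") < seq.index("drive_memory_bus")
assert seq.index("drive_memory_bus") < seq.index("latch_din")
assert seq[-1] == "recharge"
```

Each command in the sequence is gated on a signal level rather than a fixed delay, which is what lets the reduced-swing scheme run as fast as the analog levels allow.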
In a first example of use of the shared memory 20 according to the invention, the input/output ports 50 can be connected to a processing chip. The processing chip comprises, for example, m processors such as the processors 25 shown in the figure.
Such a memory chip is particularly relevant with the types of interconnection described above. Such links notably make it possible to connect the shared memory 20 to the processing chip through multiple links. The interconnection links between the two chips provide a bandwidth much higher than that of current memories. Indeed, current memories are limited in particular by the number of pins of the package containing them and have only a single input/output port.
Such an embodiment has the advantage of increasing the number of memories made available to the processors without considerably increasing the interconnection areas, in particular between the chips and the processors.
The shared memory 20 according to the invention advantageously allows m processors 25 to access all the memory banks 21, 22, 23 of the shared memory 20. In particular, all m processors 25 can access the shared memory 20 simultaneously, each of the m processors 25 then accessing a different memory bank 21, 22, 23.
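As a hypothetical sketch (the function name and arbitration policy are illustrative, not from the patent), the conflict-free case described above, where each of the m processors addresses a distinct bank through its per-bank switch, can be modeled as:

```python
# Hypothetical model: m processors each select a memory bank through that
# bank's switch; the bank's multiplexer grants one address/control bus at
# a time, so requests to distinct banks all proceed in the same cycle.

def schedule_accesses(requests, p_banks):
    """requests: dict processor_id -> bank_id.
    Returns dict bank_id -> processor served this cycle."""
    served = {}
    for proc, bank in requests.items():
        assert 0 <= bank < p_banks          # bank id must exist
        if bank not in served:              # one grant per bank per cycle
            served[bank] = proc
    return served

# Three processors, three banks, no two processors on the same bank:
requests = {0: 2, 1: 0, 2: 1}
served = schedule_accesses(requests, p_banks=3)
assert len(served) == 3    # all processors are served simultaneously
```

When two processors target the same bank, only one is granted per cycle in this sketch; the patent's figures of merit concern the conflict-free case, where single-port banks suffice to serve all m processors at once.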
The shared memory 20 according to the invention considerably reduces the interconnection area between the memories and the processors of a multiprocessor system, by allowing this interconnection to pass, for example, above the memories. Moreover, the use of single-port memories 210, 220, 230 advantageously optimizes the area of the necessary connections compared with a solution using multi-port memories.
The shared memory according to the invention also provides a gain in data transfer speed as well as a gain in energy consumption. Indeed, reducing the length of the connections reduces their parasitic capacitance, and therefore the switching time of the signals as well as the energy consumption. Switching time and energy consumption are further reduced by the use of a differential structure in which the voltage swing used is reduced.
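The reasoning above, shorter wires giving lower parasitic capacitance and therefore faster, lower-energy switching, can be illustrated numerically. All figures below are hypothetical placeholders, chosen only to show how the first-order RC and CV² scaling works:

```python
# Hypothetical illustration: wire capacitance scales with length, and both
# the RC switching delay and the C*V^2 switching energy scale with it.

C_PER_MM = 0.2e-12   # F/mm: illustrative parasitic capacitance per unit length
R_DRIVER = 1e3       # ohms: illustrative driver output resistance
V_SWING  = 0.1       # volts: reduced differential swing

def delay_and_energy(length_mm, v_swing=V_SWING):
    c = C_PER_MM * length_mm
    delay = R_DRIVER * c            # first-order RC time constant
    energy = c * v_swing ** 2       # switching energy ~ C * V^2
    return delay, energy

d_long, e_long = delay_and_energy(10.0)   # long off-bank routing
d_short, e_short = delay_and_energy(1.0)  # routing overlaid on the banks
assert d_short < d_long and e_short < e_long
```

Both effects compound: routing over the banks shortens the wires (smaller C), and the differential structure shrinks the swing (smaller V), so the energy term C·V² drops on both factors.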
A design that is efficient in terms of the silicon layers used, in particular for making the connections in a chip comprising a shared memory 20 according to the invention, advantageously reduces the production costs of such a shared memory.
Advantageously, the shared memory 20 according to the invention has a MIMD (Multiple Instructions, Multiple Data) mode of operation, allowing different programs running on the m processors 25 to access different data contained in the shared memory 20 simultaneously.
Claims (15)
- A shared memory (20), made on a first integrated circuit based on semi-conductors, comprising:• an integer number m, greater than one, of input/output interfaces (EP/SP, SP'), each of the m input/output interfaces comprising one data input/output (EP/SP) and one address and control input (SP');• an integer number p, greater than one, of memory banks (21, 22, 23, 30), each memory bank (21, 22, 23, 30) comprising:- a memory (210, 220, 230, 300) comprising a data input/output (E/S) and an address and control input (E);- a block of m switches (214, 224, 234), each of the m switches (214, 224, 234) being connected via a memory data bus (213, 223, 233) to the data input/output (E/S) of the memory (210, 220, 230, 300);the shared memory (20) being characterised in that it comprises:• m bidirectional data buses (24), each of the m data buses (24) being respectively connected to one of the m data inputs/outputs (EP/SP) of the shared memory (20) on the one hand and being connected, on the other hand, to each of the p memory banks by means of one of the m switches of each memory bank,• m address and control buses (200), each being connected to one of the m address and control inputs of the shared memory (20) on the one hand, and being connected on the other hand to the address and control input (E) of each of the p memory banks by means of a multiplexer (211, 221, 231) provided in each memory bank.
- The shared memory (20) according to Claim 1, characterised in that each multiplexer (211, 221, 231) of a memory bank has an output connected to the address and control input (E) of the memory (210, 220, 230, 300), inputs (211, 221, 231) of the multiplexer being connected to each of the address and control buses (200), the multiplexer selecting the address and control bus (200) that has to control the memory bank (21, 22, 23, 30).
- The shared memory (20) according to Claim 1, characterised in that the first integrated circuit comprises several layers of metals, the p memory banks (21, 22, 23) using one or more layers of metals, and the m data buses (24) and the address and control buses (200) being made in layers of metals overlaid on the layers comprising the p memory banks (21, 22, 23, 30).
- The shared memory (20) according to any of Claims 1 to 3, characterised in that the m input/output interfaces (EP/SP, SP') of the shared memory (20) are able to be connected to inputs/outputs of m processors (25) internal to the first integrated circuit.
- The shared memory (20) according to any of the preceding claims, characterised in that the m data buses (24) and the address and control buses (200) are parallel buses.
- The shared memory (20) according to any of Claims 1 to 5, characterised in that the m data buses (24) are differential buses.
- The shared memory (20) according to any of the preceding claims, characterised in that the memories (210, 220, 230, 300) are single-port memories.
- The shared memory (20) according to any of the preceding claims, characterised in that the memories (210, 220, 230, 300) are memories of the SRAM type, the acronym standing for the expression Static Random Access Memory.
- An integrated circuit based on semi-conductors characterised in that it comprises:• a shared memory according to any of claims 1 to 8;• m input/output ports (50) enabling connection of the integrated circuit to electronic components on the outside of the integrated circuit, each of the m ports (50) being linked to one of the m input/output interfaces (EP/SP, SP') of the shared memory (20).
- The integrated circuit based on semi-conductors, characterised in that it comprises a shared memory according to any of Claims 1 to 8, and m processors (25), each of the m processors (25) being connected to one of the m input/output interfaces (EP/SP, SP') of the shared memory (20).
- The integrated circuit based on semi-conductors, characterised in that it comprises a shared memory according to any of Claims 1 to 8, said integrated circuit being physically in contact with a second integrated circuit comprising m processors (25), each of the m processors (25) being connected to one of the m input/output interfaces (EP/SP, SP') of the shared memory (20), and in that the second integrated circuit is physically in contact with said first integrated circuit.
- The integrated circuit based on semi-conductors according to Claim 11, characterised in that the m input/output interfaces (EP/SP, SP') of the shared memory are connected to the m processors (25) by vias.
- The integrated circuit based on semi-conductors according to Claim 11, characterised in that the m input/output interfaces (EP/SP, SP') of the shared memory are connected to the m processors (25) by inductive links.
- The integrated circuit based on semi-conductors according to Claim 11, characterised in that the m input/output interfaces of the shared memory are connected to the m processors (25) by capacitive links.
- Integrated circuits based on semi-conductors, characterised in that they each comprise one or more of the p memory banks (21, 22, 23, 302) according to any of Claims 1 to 8, said integrated circuits being situated one above another, the m data buses (24) and the address and control buses (200) being positioned transversely to the integrated circuits.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR0706035A FR2920584B1 (en) | 2007-08-29 | 2007-08-29 | SHARED MEMORY |
PCT/EP2008/060675 WO2009027236A1 (en) | 2007-08-29 | 2008-08-14 | Shared memory |
Publications (2)
Publication Number | Publication Date |
---|---|
EP2186093A1 EP2186093A1 (en) | 2010-05-19 |
EP2186093B1 true EP2186093B1 (en) | 2015-12-02 |
Family
ID=39262817
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP08787215.6A Not-in-force EP2186093B1 (en) | 2007-08-29 | 2008-08-14 | Shared memory |
Country Status (5)
Country | Link |
---|---|
US (1) | US8656116B2 (en) |
EP (1) | EP2186093B1 (en) |
JP (1) | JP2010537361A (en) |
FR (1) | FR2920584B1 (en) |
WO (1) | WO2009027236A1 (en) |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0420339A3 (en) * | 1989-09-29 | 1992-06-03 | N.V. Philips' Gloeilampenfabrieken | Multi-plane random access memory system |
US5875470A (en) * | 1995-09-28 | 1999-02-23 | International Business Machines Corporation | Multi-port multiple-simultaneous-access DRAM chip |
JPH09115286A (en) * | 1995-10-17 | 1997-05-02 | Hitachi Ltd | Multi-port memory |
JP3092558B2 (en) * | 1997-09-16 | 2000-09-25 | 日本電気株式会社 | Semiconductor integrated circuit device |
TW451215B (en) * | 1998-06-23 | 2001-08-21 | Motorola Inc | Pipelined dual port integrated circuit memory |
JP2001352358A (en) * | 2000-06-07 | 2001-12-21 | Nec Corp | Integrated circuit for modem |
US7082502B2 (en) * | 2001-05-15 | 2006-07-25 | Cloudshield Technologies, Inc. | Apparatus and method for interfacing with a high speed bi-directional network using a shared memory to store packet data |
KR100533976B1 (en) * | 2004-05-10 | 2005-12-07 | 주식회사 하이닉스반도체 | Multi-port memory device |
JP4534132B2 (en) * | 2004-06-29 | 2010-09-01 | エルピーダメモリ株式会社 | Stacked semiconductor memory device |
US20060017068A1 (en) * | 2004-07-20 | 2006-01-26 | Oki Electric Industry Co., Ltd. | Integrated circuit with on-chip memory and method for fabricating the same |
US20060138650A1 (en) * | 2004-12-28 | 2006-06-29 | Freescale Semiconductor, Inc. | Integrated circuit packaging device and method for matching impedance |
US8243467B2 (en) * | 2007-02-13 | 2012-08-14 | Nec Corporation | Semiconductor device |
-
2007
- 2007-08-29 FR FR0706035A patent/FR2920584B1/en not_active Expired - Fee Related
-
2008
- 2008-08-14 EP EP08787215.6A patent/EP2186093B1/en not_active Not-in-force
- 2008-08-14 US US12/675,382 patent/US8656116B2/en not_active Expired - Fee Related
- 2008-08-14 WO PCT/EP2008/060675 patent/WO2009027236A1/en active Application Filing
- 2008-08-14 JP JP2010522305A patent/JP2010537361A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
WO2009027236A1 (en) | 2009-03-05 |
JP2010537361A (en) | 2010-12-02 |
EP2186093A1 (en) | 2010-05-19 |
US20100306480A1 (en) | 2010-12-02 |
US8656116B2 (en) | 2014-02-18 |
FR2920584A1 (en) | 2009-03-06 |
FR2920584B1 (en) | 2009-11-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP0712133B1 (en) | Method of anticipated reading of a serial accessed memory and related memory | |
WO2005031493A2 (en) | Component with a dynamically reconfigurable architecture | |
FR2827684A1 (en) | MEMORY CONTROLLER HAVING 1X / MX WRITE CAPACITY | |
EP0626760B1 (en) | Electronic system organized in matrix cell network | |
CN103093808B (en) | Time division multiplexing multiport memory | |
FR2772507A1 (en) | INTEGRATED CIRCUIT MEMORY DEVICE HAVING DATA INPUT AND OUTPUT LINES EXTENDING IN THE DIRECTION OF THE COLUMNS, AND CIRCUITS AND METHODS FOR REPAIRING FAULTY CELLS | |
EP2284839A1 (en) | Static memory device with five transistors and operating method. | |
EP0298002A1 (en) | Transposition memory for a data processing circuit | |
EP0601922B1 (en) | EEPROM memory organised in words of several bits | |
EP3080812B1 (en) | Memory data writing circuit | |
EP2186093B1 (en) | Shared memory | |
EP3598451B1 (en) | Sram/rom memory reconfigurable by connections to power supplies | |
EP0537083B1 (en) | Memory cell content detection circuit, especially for an EPROM, method for its operation and memory provided with such a circuit | |
FR2888660A1 (en) | Column redundancy enabling system for e.g. integrated circuit memory, has controller with signal generating unit for generating signal that is conveyed to read-out circuits of memory in order to enable column redundancy unit | |
FR2655763A1 (en) | REDUNDANCY CIRCUIT FOR MEMORY. | |
EP3506264A1 (en) | Memory circuit | |
US11775295B2 (en) | Processing-in-memory (PIM) devices | |
WO2010010163A1 (en) | Processor circuit with shared memory and buffer system | |
FR2748595A1 (en) | Parallel access memory reading method for e.g. EEPROM | |
FR3061798B1 (en) | CIRCUIT FOR CONTROLLING A LINE OF A MEMORY MATRIX | |
FR2958064A1 (en) | ARCHITECTURE FOR PROCESSING A DATA STREAM ENABLING THE EXTENSION OF A NEIGHBORHOOD MASK | |
EP1342241A2 (en) | Amplifier for reading storage cells with exclusive-or type function | |
FR2821202A1 (en) | METHOD FOR TESTING A SEQUENTIAL ACCESS MEMORY PLAN, AND CORRESPONDING SEQUENTIAL ACCESS MEMORY CONDUCTOR DEVICE | |
FR2634576A1 (en) | READING AND PROGRAMMING STEERING STAGE FOR PROGRAMMABLE LOGIC NETWORK COMPONENT | |
EP1033722B1 (en) | Shared memory |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20100302 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR |
|
AX | Request for extension of the european patent |
Extension state: AL BA MK RS |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: COMMISSARIAT A L'ENERGIE ATOMIQUE ET AUX ENERGIES |
|
DAX | Request for extension of the european patent (deleted) | ||
17Q | First examination report despatched |
Effective date: 20110216 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
INTG | Intention to grant announced |
Effective date: 20150504 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
INTG | Intention to grant announced |
Effective date: 20151005 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D Free format text: NOT ENGLISH |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: REF Ref document number: 763942 Country of ref document: AT Kind code of ref document: T Effective date: 20151215 Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D Free format text: LANGUAGE OF EP DOCUMENT: FRENCH |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602008041447 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20160302 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG4D |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 763942 Country of ref document: AT Kind code of ref document: T Effective date: 20151202 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20151202 Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20151202 Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160302 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20151202 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20151202 Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20151202 Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20151202 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20151202 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20151202 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160303 Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20151202 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20151202 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20151202 Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20151202 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160402 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20160404 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20151202 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20151202 Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20151202 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 9 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602008041447 Country of ref document: DE |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20151202 |
|
26N | No opposition filed |
Effective date: 20160905 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20151202 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20160831 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20151202 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 20160814 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20160831 Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20160831 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: MM4A |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20160814 Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20160814 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20160814 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 10 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20170831 Year of fee payment: 10 Ref country code: DE Payment date: 20170817 Year of fee payment: 10 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20151202 Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO Effective date: 20080814 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20151202 Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20151202 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20151202 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R119 Ref document number: 602008041447 Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20190301 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20180831 |