CN116483288A - Memory control equipment, method and device and server memory module - Google Patents

Memory control equipment, method and device and server memory module

Info

Publication number
CN116483288A
CN116483288A (Application CN202310742179.7A)
Authority
CN
China
Prior art keywords
memory
target data
module
data
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310742179.7A
Other languages
Chinese (zh)
Inventor
陈曦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202310742179.7A priority Critical patent/CN116483288A/en
Publication of CN116483288A publication Critical patent/CN116483288A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 — Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 — Interfaces specially adapted for storage systems
    • G06F3/0602 — Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061 — Improving I/O performance
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 — Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 — Addressing or allocation; Relocation
    • G06F12/0223 — User address space allocation, e.g. contiguous or non-contiguous base addressing
    • G06F12/023 — Free address space management
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 — Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 — Interfaces specially adapted for storage systems
    • G06F3/0628 — Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655 — Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656 — Data buffering arrangements
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

Embodiments of the present application provide a memory control device, a memory control method, a memory control apparatus, and a server memory module. The memory control device includes a control module and a buffer module, wherein the control module is connected with the buffer module through a double data rate (DDR) protocol, the control module also provides a compute express link (CXL) protocol signal interface, and the buffer module is configured to be connected with N memory units, N being a positive integer greater than 1. The control module is configured to perform conversion between the CXL protocol and the DDR protocol on a target data signal, where the target data signal instructs reading and writing of target data, the target data is a burst whose length equals a target data amount, and the target data amount is the sum of the data amounts of the N memory units. The buffer module is configured to buffer the target data and to perform reading and writing of the target data on the N memory units. The present application thereby solves the problem of a low data transmission rate of signals and achieves the effect of improving the data transmission rate of signals.

Description

Memory control equipment, method and device and server memory module
Technical Field
Embodiments of the present application relate to the field of computers, and in particular to a memory control device, a memory control method, a memory control apparatus, and a memory module of a server.
Background
With the rapid development of computer technology, the growing demands of data-intensive applications such as high-performance computing and artificial intelligence keep increasing the computation density, which places higher requirements on the signal transmission rate of data.
At present, when a conventional memory module is used in a computing node, only one rank (a group of memory chips connected to the same chip-select signal) can work at a time, so only 64 bytes of data can be transferred per access; the bandwidth is therefore very low, which greatly limits the transmission rate of data signals.
For the problem of a low transmission rate of data signals in the related art, no effective solution has yet been proposed.
Disclosure of Invention
Embodiments of the present application provide a memory control device, a memory control method, a memory control apparatus, and a memory module of a server, so as to at least solve the problem of a low data signal transmission rate in the related art.
According to an embodiment of the present application, there is provided a memory control device, including a control module and a buffer module, wherein
the control module is connected with the buffer module through a double data rate (DDR) protocol, the control module also provides a compute express link (CXL) protocol signal interface, the buffer module is configured to be connected with N memory units, and N is a positive integer greater than 1;
the control module is configured to perform conversion between the CXL protocol and the DDR protocol on a target data signal, where the target data signal instructs reading and writing of target data, the target data is a burst whose length equals a target data amount, and the target data amount is the sum of the data amounts of the N memory units;
the buffer module is configured to buffer the target data and to perform reading and writing of the target data on the N memory units.
In an exemplary embodiment, N memory channels are constructed in the buffer module, and the N memory channels are in one-to-one correspondence with the N memory units;
the control module is configured to receive the target data returned by the buffer module when the target data signal instructs reading of the target data;
the buffer module is configured to read the data in the N memory units through the N memory channels respectively to obtain N pieces of first data, to splice the N pieces of first data into the target data, and to send the target data to the control module.
In an exemplary embodiment, N memory channels are constructed in the buffer module, and the N memory channels are in one-to-one correspondence with the N memory units;
the buffer module is configured to divide the target data into N pieces of second data when the target data signal instructs writing of the target data, and to write the N pieces of second data into the N memory units through the N memory channels respectively.
In an exemplary embodiment, each of the N memory channels is allowed to independently perform operations on a corresponding one of the N memory units.
In an exemplary embodiment, the N memory units are N memory ranks, or the N memory units are N memory chip groups, where a plurality of memory chips are disposed in each memory chip group.
In an exemplary embodiment, the control module includes a memory expansion control chip, and the buffer module includes a data buffer, wherein
the memory expansion control chip is provided with a CXL protocol interface and a DDR protocol interface, the CXL protocol interface is configured to be connected with a CXL protocol device, and the DDR protocol interface is connected with the data buffer;
the chip-select signal of the memory expansion control chip enables the N memory units simultaneously.
In an exemplary embodiment, the data amount of each memory unit is M, and the target data amount is M×N.
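The M×N relation above can be sketched with a short, hedged example. The function name, and the assumed values of a 64-byte access per unit with N = 2 or 4 units, are illustrative only and do not appear in the patent:

```python
# Hypothetical sketch of the burst-length relation described above:
# each memory unit contributes a data amount M per access, and enabling
# N units with one chip-select yields a single burst of M * N.

def burst_size(per_unit_bytes: int, num_units: int) -> int:
    """Total burst size when N memory units are enabled simultaneously."""
    if num_units < 2:
        raise ValueError("the embodiment assumes N > 1")
    return per_unit_bytes * num_units

# Assumed example: a conventional 64-byte access per unit, enlarged by
# enabling several units together.
assert burst_size(64, 2) == 128
assert burst_size(64, 4) == 256
```

With the assumed numbers, a single access moves 128 or 256 bytes instead of the 64 bytes of a single-rank access, which is the bandwidth gain the embodiment aims at.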
According to another embodiment of the present application, there is provided a server memory module, including a memory module, a control module, and a buffer module, wherein
the control module is provided with a CXL protocol interface and a DDR protocol interface, the control module is connected with the buffer module through the DDR protocol interface, the CXL protocol interface is configured to be connected with a server and to exchange CXL protocol signals, the buffer module is connected with the memory module, the memory module includes P memory units, and P is a positive integer greater than 1;
the control module is configured to perform conversion between the CXL protocol and the DDR protocol on a target data signal, where the target data signal instructs reading and writing of target data, the target data is a burst whose length equals a target data amount, the target data amount is the sum of the data amounts of N memory units, and the P memory units include the N memory units, where N is greater than 1 and less than or equal to P;
the buffer module is configured to buffer the target data and to perform reading and writing of the target data on the N memory units.
In an exemplary embodiment, N memory channels are constructed in the buffer module, and the N memory channels are in one-to-one correspondence with the N memory units;
the control module is configured to receive the target data returned by the buffer module when the target data signal instructs reading of the target data, and to transmit the target data to the server via the CXL protocol;
the buffer module is configured to read the data in the N memory units through the N memory channels respectively to obtain N pieces of first data, to splice the N pieces of first data into the target data, and to send the target data to the control module.
In an exemplary embodiment, N memory channels are constructed in the buffer module, and the N memory channels are in one-to-one correspondence with the N memory units;
the buffer module is configured to divide the target data into N pieces of second data when the target data signal instructs writing of the target data, and to write the N pieces of second data into the N memory units through the N memory channels respectively.
In an exemplary embodiment, each of the N memory channels is allowed to independently perform operations on a corresponding one of the N memory units.
In an exemplary embodiment, the P memory units are P memory ranks, or the P memory units are P memory chip groups, where a plurality of memory chips are disposed in each memory chip group.
In an exemplary embodiment, the control module includes a memory expansion control chip, and the buffer module includes a data buffer, wherein
the chip-select signal of the memory expansion control chip enables the N memory units simultaneously.
In an exemplary embodiment, the data amount of each memory unit is M, and the target data amount is M×N.
According to an embodiment of the present application, there is provided a memory control method applied to a memory control device, including:
performing, by a control module, conversion between a compute express link (CXL) protocol and a double data rate (DDR) protocol on a target data signal, wherein the memory control device includes the control module and a buffer module, the control module is connected with the buffer module through the DDR protocol, the control module also provides a CXL protocol signal interface, the buffer module is configured to be connected with N memory units, N is a positive integer greater than 1, the target data signal instructs reading and writing of target data, the target data is a burst whose length equals a target data amount, and the target data amount is the sum of the data amounts of the N memory units;
and buffering the target data through the buffer module, and performing reading and writing of the target data on the N memory units.
In an exemplary embodiment, buffering the target data through the buffer module and performing reading and writing of the target data on the N memory units includes:
when the target data signal instructs reading of the target data, reading the data in the N memory units through N memory channels respectively to obtain N pieces of first data, where the N memory channels are constructed in the buffer module and are in one-to-one correspondence with the N memory units;
and splicing the N pieces of first data into the target data and sending the target data to the control module.
In an exemplary embodiment, buffering the target data through the buffer module and performing reading and writing of the target data on the N memory units includes:
when the target data signal instructs writing of the target data, dividing the target data into N pieces of second data, where N memory channels are constructed in the buffer module and are in one-to-one correspondence with the N memory units;
and writing the N pieces of second data into the N memory units through the N memory channels respectively.
In an exemplary embodiment, each of the N memory channels is allowed to independently perform operations on a corresponding one of the N memory units.
In an exemplary embodiment, the N memory units are N memory ranks, or the N memory units are N memory chip groups, where a plurality of memory chips are disposed in each memory chip group.
In an exemplary embodiment, before buffering the target data through the buffer module and performing reading and writing of the target data on the N memory units, the method further includes:
enabling the N memory units simultaneously through a chip-select signal of a memory expansion control chip, wherein the control module includes the memory expansion control chip, the buffer module includes a data buffer, the memory expansion control chip is provided with a CXL protocol interface and a DDR protocol interface, the CXL protocol interface is configured to be connected with a CXL protocol device, and the DDR protocol interface is connected with the data buffer.
In an exemplary embodiment, the data amount of each memory unit is M, and the target data amount is M×N.
According to another embodiment of the present application, there is provided a memory control apparatus applied to a memory control device, including:
a conversion module, configured to perform, through a control module, conversion between a compute express link (CXL) protocol and a double data rate (DDR) protocol on a target data signal, wherein the memory control device includes the control module and a buffer module, the control module is connected with the buffer module through the DDR protocol, the control module also provides a CXL protocol signal interface, the buffer module is configured to be connected with N memory units, N is a positive integer greater than 1, the target data signal instructs reading and writing of target data, the target data is a burst whose length equals a target data amount, and the target data amount is the sum of the data amounts of the N memory units;
and a processing module, configured to buffer the target data through the buffer module and to perform reading and writing of the target data on the N memory units.
According to a further embodiment of the present application, there is also provided a computer readable storage medium having stored therein a computer program, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
According to a further embodiment of the present application, there is also provided an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Through the present application, the memory control device includes a control module and a buffer module: the control module provides a compute express link (CXL) protocol signal interface and is connected through a double data rate (DDR) protocol with the buffer module, which in turn connects to N memory units, N being a positive integer greater than 1; the control module performs conversion between the CXL protocol and the DDR protocol on a target data signal that instructs reading and writing of target data, where the target data is a burst whose length equals a target data amount, and the target data amount is the sum of the data amounts of the N memory units; the buffer module buffers the target data and performs reading and writing of the target data on the N memory units. That is, the target data signal is transmitted to the control module through the CXL protocol signal interface, and the control module, connected with the buffer module through the DDR protocol, converts the target data signal into a data signal supporting the DDR protocol and transmits it to the buffer module, thereby instructing the buffer module to perform the reading and writing of the target data.
After the buffer module performs the reading and writing of the target data through the N memory units, whose total data amount equals the target data amount, the target data is buffered as a single burst of that length, and the target data signal is then transmitted through the control module, which converts between the CXL protocol and the DDR protocol. In this way, target data equal to the total data amount of the N memory units is read or written in a single access on the basis of the CXL protocol and the DDR protocol, which improves the read/write rate of the memory units and hence the data transmission rate of the target data. The problem of a low data transmission rate of signal transmission is thereby solved, and the effect of improving the data transmission rate of signal transmission is achieved.
Drawings
FIG. 1 is a schematic diagram of an alternative memory control device according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the internal architecture of a control module according to an alternative embodiment of the present application;
FIG. 3 is a schematic diagram of a memory control device according to an alternative embodiment of the present application;
FIG. 4 is a schematic diagram of an alternative server memory module according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a server memory module according to an alternative embodiment of the present application;
FIG. 6 is a block diagram of the hardware structure of a mobile terminal for a memory control method according to an embodiment of the present application;
FIG. 7 is a flow chart of a memory control method according to an embodiment of the present application;
FIG. 8 is a block diagram of a memory control device according to an embodiment of the present application;
FIG. 9 is a schematic diagram of an alternative electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in detail below with reference to the accompanying drawings in conjunction with the embodiments.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
In this embodiment, a memory control device is provided. FIG. 1 is a schematic diagram of an alternative memory control device according to an embodiment of the present application; as shown in FIG. 1, the memory control device includes a control module 102 and a buffer module 104, wherein
the control module 102 is connected with the buffer module 104 through a double data rate (DDR) protocol, the control module 102 also provides a compute express link (CXL) protocol signal interface, the buffer module 104 is configured to be connected with N memory units 106, and N is a positive integer greater than 1;
the control module 102 is configured to perform conversion between the CXL protocol and the DDR protocol on a target data signal, where the target data signal instructs reading and writing of target data, the target data is a burst whose length equals a target data amount, and the target data amount is the sum of the data amounts of the N memory units 106;
the buffer module 104 is configured to buffer the target data and to perform reading and writing of the target data on the N memory units 106.
With the above device, the memory control device includes a control module and a buffer module: the control module provides a CXL protocol signal interface and is connected through the DDR protocol with the buffer module, which in turn connects to N memory units, N being a positive integer greater than 1; the control module performs conversion between the CXL protocol and the DDR protocol on a target data signal that instructs reading and writing of target data, where the target data is a burst whose length equals a target data amount, and the target data amount is the sum of the data amounts of the N memory units; the buffer module buffers the target data and performs reading and writing of the target data on the N memory units. That is, the target data signal is transmitted to the control module through the CXL protocol signal interface, and the control module, connected with the buffer module through the DDR protocol, converts the target data signal into a data signal supporting the DDR protocol and transmits it to the buffer module, thereby instructing the buffer module to perform the reading and writing of the target data.
After the buffer module performs the reading and writing of the target data through the N memory units, whose total data amount equals the target data amount, the target data is buffered as a single burst of that length, and the target data signal is then transmitted through the control module, which converts between the CXL protocol and the DDR protocol. In this way, target data equal to the total data amount of the N memory units is read or written in a single access on the basis of the CXL protocol and the DDR protocol, which improves the read/write rate of the memory units and hence the data transmission rate of the target data. The problem of a low data transmission rate of signal transmission is thereby solved, and the effect of improving the data transmission rate of signal transmission is achieved.
Optionally, in this embodiment, the memory control device may be, but is not limited to being, used to connect a data signal with a memory unit and to control communication between them, so as to implement reading and writing of the data carried in the data signal and transmission of data between a CPU (Central Processing Unit) or a server and the memory unit.
Optionally, in this embodiment, the double data rate protocol may be, but is not limited to, a protocol supporting a high external data transmission rate such as the DDR (Double Data Rate) protocol; the compute express link protocol may be, but is not limited to, the CXL (Compute Express Link) protocol, a low-latency, high-rate serial memory bus protocol mainly used for accelerated data transmission between a CPU and a device. The embodiments of the present application take the DDR protocol and the CXL protocol as examples.
Optionally, in this embodiment, the control module may include, but is not limited to, a memory expansion controller chip supporting the CXL protocol, such as an MXC (Memory eXpander Controller) chip.
Optionally, in this embodiment, the buffer module may include, but is not limited to, a memory for temporarily storing data as it passes between elements with different transmission capabilities.
Optionally, in this embodiment, a memory unit may include, but is not limited to, a unit that can be addressed by a CPU through a bus and on which read/write operations can be performed; a memory unit may include, but is not limited to, a memory rank, a memory chip, and the like.
Optionally, in this embodiment, the target data signal may be, but is not limited to, a control signal issued by the CPU or the server requesting to read or write the target data, and the target data amount may be, but is not limited to, the size of the data that needs to be read or written.
Optionally, in this embodiment, the burst length of the target data amount may be, but is not limited to, the number of cycles in which adjacent memory cells in the same row are continuously transferred, which may be represented as the Burst Length (BL); a burst of the target data amount may be, but is not limited to, a mode of continuous data transfer over adjacent memory cells in the same row.
Optionally, in this embodiment, after receiving the CXL high-speed signal (the target data signal), the control module may, but is not limited to, convert it into a DDR high-speed signal supporting the DDR protocol and pass the DDR signal to the buffer module; the buffer module performs the reading and writing of the target data in the DDR signal on the N memory units, and integrates and buffers the target data into a burst whose length is the sum of the data amounts of the N memory units, so that the control module and the buffer module can each time transmit target data equal to that sum. The buffer module is connected to the memory module, and the value of N (a positive integer greater than 1) may be, but is not limited to being, determined according to the target data amount.
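The conversion step above can be modeled as a minimal sketch. This is illustrative only, not the actual MXC firmware or the CXL/DDR wire formats: the request classes, field names, and the `num_units` parameter are hypothetical stand-ins for the signals the embodiment describes:

```python
# Illustrative model: the control module receives a CXL-side request,
# converts it into a DDR-side request, and the single chip-select
# targets all N memory units at once so one access covers M * N bytes.

from dataclasses import dataclass

@dataclass
class CxlRequest:           # hypothetical stand-in for the CXL signal
    op: str                 # "read" or "write"
    address: int
    data: bytes = b""

@dataclass
class DdrRequest:           # hypothetical stand-in for the DDR signal
    op: str
    address: int
    num_units: int          # how many memory units the chip-select enables
    data: bytes = b""

def cxl_to_ddr(req: CxlRequest, num_units: int) -> DdrRequest:
    """Convert a CXL-side request into a DDR-side request that the
    buffer module fans out across all N memory units."""
    return DdrRequest(op=req.op, address=req.address,
                      num_units=num_units, data=req.data)

ddr = cxl_to_ddr(CxlRequest("read", 0x1000), num_units=2)
assert ddr.num_units == 2 and ddr.op == "read"
```

The point of the sketch is the fan-out: the converted request carries the number of simultaneously enabled units, so a single DDR-side transaction accounts for the whole M×N burst.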
In an exemplary embodiment, the buffer module may, but is not limited to, construct N memory channels in one-to-one correspondence with the N memory units; the control module may be configured to receive the target data returned by the buffer module when the target data signal instructs reading of the target data; the buffer module may be configured to read the data in the N memory units through the N memory channels respectively to obtain N pieces of first data, to splice the N pieces of first data into the target data, and to send the target data to the control module.
Optionally, in this embodiment, N pseudo memory channels, that is, virtual memory channels, may be, but are not limited to being, constructed in the buffer module in one-to-one correspondence with the N memory units. That is, if the buffer module contains pseudo memory channel 1, pseudo memory channel 2, pseudo memory channel 3, ..., pseudo memory channel N, and the N memory units include memory unit 1, memory unit 2, memory unit 3, ..., memory unit N, then pseudo memory channel 1 corresponds to memory unit 1, pseudo memory channel 2 corresponds to memory unit 2, pseudo memory channel 3 corresponds to memory unit 3, and pseudo memory channel N corresponds to memory unit N.
Optionally, in this embodiment, when the target data signal instructs reading of the target data, N pseudo memory channels in one-to-one correspondence with the N RANKs (the N memory units) are constructed in the data buffer (the buffer module), and the CS signal (chip-select signal) provided by the MXC chip (the control module) enables the N RANKs simultaneously. While pseudo memory channel 1 prepares its next read from RANK 1, pseudo memory channel 2 reads the memory data of RANK 2, and so on, finally obtaining N pieces of first data: first data 1, first data 2, ..., first data N. The data buffer splices and integrates the N pieces of first data and then sends them to the MXC chip.
Optionally, in this embodiment, unlike the prior art, in which the next memory unit can only be read after the previous one has finished, constructing N pseudo memory channels in one-to-one correspondence with the N memory units in the buffer module allows the enable signal of the control module to enable all N memory units simultaneously when reading the target data; the N pieces of first data read from the N memory units are then spliced and transmitted to the control module. This shortens the effective waiting time for reading memory data, improves the data reading efficiency, and improves the data transmission rate.
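The read path described above can be sketched in a few lines. This is a hedged model, not the patent's implementation: `read_target_data` and the per-channel `read_rank` helper are hypothetical names, and a thread pool merely stands in for the N pseudo channels operating concurrently under one chip-select:

```python
# Minimal model of the read path: each pseudo memory channel reads its
# rank independently, and the data buffer splices the N first-data
# chunks into the target data in rank order.

from concurrent.futures import ThreadPoolExecutor

def read_target_data(ranks: list) -> bytes:
    """Read all N ranks through N pseudo channels and splice the results."""
    def read_rank(rank: bytes) -> bytes:
        return rank  # stands in for a per-channel DDR read

    # All ranks share one chip-select, so the reads proceed concurrently
    # rather than one rank after another; map() preserves rank order.
    with ThreadPoolExecutor(max_workers=len(ranks)) as pool:
        first_data = list(pool.map(read_rank, ranks))
    return b"".join(first_data)  # splice into the target data

ranks = [b"\x01" * 64, b"\x02" * 64]   # N = 2 ranks of 64 bytes each
target = read_target_data(ranks)
assert len(target) == 128
```

Because `pool.map` returns results in submission order, the splice always reproduces the rank order, which is what the one-to-one channel-to-unit correspondence guarantees in the embodiment.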
In an exemplary embodiment, the buffer module may, but is not limited to, construct N memory channels in one-to-one correspondence with the N memory units; the buffer module may be configured to divide the target data into N pieces of second data when the target data signal instructs writing of the target data, and to write the N pieces of second data into the N memory units through the N memory channels respectively.
Optionally, in this embodiment, in the case where the target data signal is used to indicate writing of the target data, N pseudo multi-memory channels in one-to-one correspondence with N RANKs (the N memory units) are constructed in the data buffer (the cache module). The data buffer divides the target data into N second data, such as second data 1, second data 2, ..., second data N, and the CS chip select signal provided by the MXC chip (the control module) enables the N RANKs simultaneously. While pseudo memory channel 1 prepares for its next write to RANK1, pseudo memory channel 2 writes the memory data of RANK2, and so on, and the N second data are written into the N RANKs through the N pseudo memory channels, respectively.
Optionally, in this embodiment, unlike the prior art, in which the next memory unit can be written only after the previous memory unit has finished writing, N pseudo memory channels in one-to-one correspondence with the N memory units are constructed in the cache module. When memory data is written, the target data is first divided by the cache module into N second data in one-to-one correspondence with the N pseudo memory channels, and the N second data are then simultaneously written into the N memory units under the enable signal of the control module. This reduces the effective waiting time for writing memory data, improves the data writing efficiency, and improves the data transmission rate.
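The write path is the mirror image of the read path; a behavioral sketch (illustrative only; `write_interleaved` is a hypothetical name and the split rule assumes N equal-size memory units):

```python
# Behavioral sketch of the write path: the data buffer splits the
# target data into N equal pieces of "second data", one per pseudo
# memory channel, and each piece is written to its own rank.

def write_interleaved(target_data, n_ranks):
    """Divide target data into N second data, one per pseudo channel."""
    m = len(target_data) // n_ranks  # data amount M per memory unit
    second_data = [target_data[i * m:(i + 1) * m] for i in range(n_ranks)]
    return second_data               # each entry goes to one rank

target = bytes(range(128))           # 128-byte burst (M x N = 64 x 2)
pieces = write_interleaved(target, 2)
assert len(pieces) == 2 and all(len(p) == 64 for p in pieces)
```

Concatenating the pieces back in channel order recovers the original target data, which is why the read path's splice is the exact inverse of this split.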
In an exemplary embodiment, each of the N memory channels may, but is not limited to, allow for independent execution of operations on a corresponding memory cell of the N memory cells.
Alternatively, in this embodiment, if the cache module constructs pseudo memory channel 1, pseudo memory channel 2, pseudo memory channel 3, ..., pseudo memory channel N, and the N memory units include memory unit 1, memory unit 2, memory unit 3, ..., memory unit N, then pseudo memory channel 1 may, but is not limited to, be allowed to independently perform operations on memory unit 1, pseudo memory channel 2 on memory unit 2, pseudo memory channel 3 on memory unit 3, ..., and pseudo memory channel N on memory unit N.
Optionally, in this embodiment, unlike the serial read/write logic of the cache module in the prior art, in which the cache module must wait for the previous memory unit to finish reading/writing before the next memory unit can be read/written, constructing in the cache module N pseudo memory channels that are each allowed to independently operate on the corresponding one of the N memory units realizes parallel reading/writing of memory data and improves memory data read efficiency. On the basis of the existing conventional memory of the current computing node, the memory bandwidth is thus further greatly expanded and maximum utilization of memory resources can be achieved.
In one exemplary embodiment, the N memory units may be, but are not limited to, N memory ranks, or the N memory units may be, but are not limited to, N memory granule groups, where a plurality of memory granules may be, but are not limited to be, disposed in each memory granule group.
Alternatively, in this embodiment, the N memory units may be, but are not limited to, N conventional memory ranks, or the N memory units may be, but are not limited to, N memory granule groups, where a plurality of DRAM granules (memory granules) may be disposed in each memory granule group. Each memory granule group may be, but is not limited to be, formed by splicing a plurality of DRAM granules into one RANK, and the N memory granule groups may be, but are not limited to be, arranged sequentially in rows.
In one exemplary embodiment, the control module may include, but is not limited to, a memory expansion control chip, and the cache module may include, but is not limited to, a data buffer, where the memory expansion control chip is provided with a computing quick connection protocol interface and a double rate protocol interface, the computing quick connection protocol interface is used for connecting with computing quick connection protocol equipment, and the double rate protocol interface is connected with the data buffer; the chip select signal of the memory expansion control chip enables the N memory units simultaneously.
Alternatively, in this embodiment, the memory expansion control chip may be, but is not limited to, a chip operating based on a computing rapid connection protocol, and in this embodiment, the memory expansion control chip is described as an MXC chip.
Alternatively, in the present embodiment, the data buffer may be, but not limited to, a four-bus buffer, an eight-bus buffer, or the like, and specifically may be selected according to the system bus requirements, which is not limited herein.
Alternatively, in the present embodiment, N memory cells may be enabled to read N first data at the same time by a CS signal (chip select signal) of an MXC chip (memory expansion control chip); n memory cells may be enabled to write N second data simultaneously by CS signals (chip select signals) of an MXC chip (memory expansion control chip), but not limited thereto.
In an alternative embodiment, an internal schematic diagram of a control module is provided. Fig. 2 is a schematic diagram of the internal structure of a control module according to an alternative embodiment of the present application. As shown in fig. 2, the control module includes an MXC chip, a DDR interface (double rate protocol interface), and a CXL interface (computing quick connection protocol interface). It may also include, but is not limited to, interfaces providing clock, reset, and management functions, such as CLK (clock signal), RESET (reset signal), and I2C (synchronous serial bus); an SPI FLASH (Serial Peripheral Interface flash memory) for storing chip firmware; a UART (Universal Asynchronous Receiver/Transmitter) and a JTAG (Joint Test Action Group) interface for Debug analysis; and an I3C serial interface for acquiring information such as SPD (Serial Presence Detect), PMIC (Power Management IC, power management integrated circuit), and temperature Sensor information.
In one exemplary embodiment, the data amount of each memory unit may be, but is not limited to, M, and the target data amount may be, but is not limited to, M × N.
Alternatively, in the present embodiment, the data amount of each memory unit may be, but is not limited to, the amount of binary data information it stores, measured in bytes.
Alternatively, in the present embodiment, if the data amount of each memory unit is 64 bytes (M = 64) and there are 2 memory units in total (N = 2), the target data amount may be 64 × 2 = 128 bytes.
In an alternative embodiment, a schematic diagram of a memory control device is provided. Fig. 3 is a schematic diagram of a memory control device according to an alternative embodiment of the present application. As shown in fig. 3, an MXC chip (memory expansion control chip) receives a CXL high-speed signal (target data signal) from a high-speed connector and converts it into a DDR high-speed signal, where the high-speed connector includes, but is not limited to, a gold finger, an MCIO connector, and the like. The high-speed connector also brings in CLK, RESET, and I2C signals to provide clock, reset, and management functions for the MXC chip; one SPI FLASH hangs under the MXC chip for storing chip firmware, UART and JTAG interfaces are provided for Debug analysis, and an I3C signal of the MXC chip accesses the memory bank to obtain information such as the SPD, PMIC, and temperature Sensor on the memory bank. After conversion into the DDR protocol by the MXC chip, the signal is fed to the Data Buffer device, which is then connected through a DIMM (Dual In-line Memory Module) connector to a conventional memory bank with 2 or more Ranks (two memory ranks, N=2).
When a conventional memory bank is used, a CS chip select signal enables one Rank at a time to work, and 64 bytes of data are transmitted each time. In the present application, pseudo multi-memory channels (a plurality of memory channels) are formed through the Data Buffer, and the CS chip select signal of the MXC chip enables 2 Ranks simultaneously: while pseudo memory channel 1 prepares for its next access, pseudo memory channel 2 performs the memory read/write of Rank2, and vice versa. This pseudo multi-channel complementation reduces the effective waiting time of memory read/write by 50%, and the two groups of 64-byte data of the 2 Ranks are integrated and cached to form a 128-byte Burst length (the burst length of the target data amount), so that 128 bytes of data can be transmitted each time between the Data Buffer and the MXC chip, doubling the link data transmission rate, that is, doubling the bandwidth.
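The 50% figure can be illustrated with a toy timing model (the time unit `T` and the equal access/wait assumption are illustrative simplifications, not values from this application; real DDR timing is more involved):

```python
# Toy timing model of the wait-time reduction: assume transferring
# 64 bytes from one rank takes T units, and that rank must then wait
# T units before its next access.
T = 10

def serial_total(n_accesses):
    # single enabled rank: every access is followed by a full wait
    return n_accesses * (T + T)

def interleaved_total(n_accesses):
    # two pseudo channels alternate: while Rank1 waits, Rank2 transfers,
    # so every slot carries data (steady state, ignoring fill/drain)
    return n_accesses * T

saved = 1 - interleaved_total(100) / serial_total(100)
assert saved == 0.5  # effective waiting time reduced by 50%
```

Under these assumptions the bus is busy on every slot, which is exactly the complementation between the two pseudo channels described above.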
Optionally, in this embodiment, taking a conventional DIMM memory bank of model DDR4-3200 as an example, its base rate is 3200 MT/s, and its theoretical bandwidth is 3.2 × 64/8 = 25.6 GB/s. With the embodiment of the present application, its theoretical bandwidth doubles to 3.2 × 128/8 = 51.2 GB/s. Since the CXL port of the MXC chip has x8 bandwidth and supports up to 32 GT/s, the theoretical bandwidth of the CXL port is 64 GB/s, so conventional memory banks with even higher base rates can also be supported.
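The bandwidth figures above can be checked in a few lines (the helper name is hypothetical; the formula is simply transfer rate times bits per transfer divided by 8 bits per byte):

```python
# Worked numbers from the DDR4-3200 example above.
def ddr_bandwidth_gb_s(rate_mt_s, bits_per_transfer):
    """Theoretical bandwidth in GB/s: transfers/s x bytes per transfer."""
    return rate_mt_s / 1000 * bits_per_transfer / 8

conventional = ddr_bandwidth_gb_s(3200, 64)   # one rank per access
doubled = ddr_bandwidth_gb_s(3200, 128)       # 128-byte burst over 2 ranks
assert conventional == 25.6                   # GB/s
assert doubled == 51.2                        # GB/s
assert doubled < 64  # still within the CXL x8 port's 64 GB/s budget
```

The last assertion reflects the headroom the application points out: the doubled DDR4-3200 bandwidth does not yet saturate the CXL port.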
Optionally, in this embodiment, the current conventional memory bank design does not need to be updated: bandwidth-doubling applications can be realized directly with conventional memory banks, meeting bandwidth application requirements while making full use of them and reducing cost. CXL serial expansion is also supported without additionally occupying CPU memory channels, which further improves system memory bandwidth and capacity compared with the prior art.
In this embodiment, a server memory module is also provided, and fig. 4 is a schematic diagram of an optional server memory module according to an embodiment of the present application, as shown in fig. 4, where the server memory module includes: a memory module 402, a control module 404, and a cache module 406, wherein,
the control module 404 is provided with a computing quick connection protocol interface and a double rate protocol interface, the control module 404 is connected with the cache module 406 through the double rate protocol interface, the computing quick connection protocol interface is used for connecting with the server 408 and exchanging computing quick connection protocol signals, the cache module 406 is connected with the memory module 402, and the memory module 402 includes P memory units, where P is a positive integer greater than 1;
The control module 404 is configured to perform conversion between the computing quick connection protocol and the double rate protocol on a target data signal, where the target data signal is used to instruct reading and writing of target data, the target data has a burst length equal to the target data amount, the target data amount is the sum of the data amounts of N memory units, the P memory units include the N memory units, and N is greater than 1 and less than or equal to P;
the buffer module 406 is configured to buffer the target data, and perform reading and writing of the target data on the N memory units.
Through the above modules, the target data signal is transmitted to the control module through the signal interface providing the computing quick connection protocol; the control module, connected with the cache module through the double rate protocol, converts the target data signal into a data signal supporting the double rate protocol, transmits it to the cache module, and instructs the cache module to perform reading and writing of the target data. After the cache module performs the reading and writing of the target data through the N memory units whose total data amount is the target data amount, the target data with a burst length equal to the target data amount is cached, and the transmission of the target data signal is then realized through the control module, which can perform conversion between the computing quick connection protocol and the double rate protocol on the target data signal. In this way, target data equal to the total data amount of the N memory units is read or written at one time on the basis of the computing quick connection protocol and the double rate protocol, which improves the read/write rate of the memory units and the data transmission rate of the target data. The problem of a low data transmission rate in signal transmission is thereby solved, and the effect of improving the data transmission rate of signal transmission is achieved.
Optionally, in this embodiment, the server memory module may, but is not limited to, connect a server with a memory control device provided in the present application, implement communication between a data signal provided by the server and the memory control device provided in the present application, and control read/write operations of the data signal on the memory control device provided in the present application, thereby implementing transmission of data between the server and the memory control device provided in the present application.
Alternatively, in the present embodiment, the double rate protocol may be, but is not limited to, a protocol supporting a higher external data transmission rate, such as the DDR (Double Data Rate) protocol; the computing quick connection protocol may be, but is not limited to, the CXL (Compute Express Link) protocol, a low-latency, high-rate memory bus protocol mainly used for accelerated data transmission between the CPU and a Device. In the embodiment of the present application, the double rate protocol is taken to be the DDR protocol and the computing quick connection protocol is taken to be the CXL protocol as an example.
Alternatively, in this embodiment, the control module may include, but is not limited to, a memory expansion controller chip supporting the CXL protocol, such as an MXC chip. The cache module may include, but is not limited to, a memory for temporarily storing data as it passes between elements having different transmission capabilities.
Optionally, in this embodiment, the memory module may be, but is not limited to, used to connect with the cache module to implement the read/write operation of the target control signal. The P memory units included in the memory module may include, but are not limited to, a server storage unit that may be addressed by a server via a bus and perform read and write operations, and the memory units may include, but are not limited to, memory banks, memory granules, and the like.
Optionally, in this embodiment, the cache module may, but is not limited to, determine, according to the size of the target data amount, the N memory units among the P memory units of the memory module whose data amounts sum to the target data amount, and communicate with them to perform the reading and writing of the target data.
Alternatively, in the present embodiment, the target data signal may be, but not limited to, a control signal issued by the server requesting to read and write target data, and the target data amount may be, but not limited to, a data size indicating that reading and writing are required.
Alternatively, in the present embodiment, the burst length of the target data amount may be, but is not limited to, the number of cycles over which adjacent memory cells in the same row are continuously transferred, and may be, but is not limited to, represented as Burst Length (BL). A burst of the target data amount may be, but is not limited to, a mode of continuous data transfer over adjacent memory cells in the same row.
Optionally, in this embodiment, the control module may, but is not limited to, after receiving the CXL high-speed signal (target data signal), convert it into a DDR high-speed signal supporting the DDR protocol and feed the DDR signal to the cache module, which is connected to the memory module. According to the target data amount of the CXL signal, the N memory units needed among the P memory units of the memory module are determined, and the cache module then performs the reading and writing of the target data in the DDR signal on the N memory units, integrating and caching the target data to form a burst whose length is the sum of the data amounts of the N memory units, so that target data equal to that sum can be transmitted between the control module and the cache module at one time. The value of N (a positive integer greater than 1) may be, but is not limited to be, determined according to the target data amount.
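The selection of N from the target data amount and the subsequent splice can be put together in a minimal behavioral sketch (all names hypothetical; it models only the buffer's selection of N units out of P and the data movement, not the CXL/DDR conversion itself):

```python
# End-to-end sketch: given a requested target data amount, pick
# N = amount / M memory units, read them in parallel, and return one
# burst of the combined length.
M = 64  # bytes per memory unit (data amount M)

def serve_read(ranks, target_amount):
    assert target_amount % M == 0
    n = target_amount // M           # number of memory units to enable
    assert 1 < n <= len(ranks)       # N must satisfy 1 < N <= P
    pieces = [ranks[i][:M] for i in range(n)]  # parallel reads, one per channel
    return b"".join(pieces)          # one burst of target_amount bytes

p_ranks = [bytes([i] * M) for i in range(4)]   # P = 4 memory units
burst = serve_read(p_ranks, 128)               # selects N = 2
assert len(burst) == 128
```

With a larger requested amount the same sketch would enable more of the P units, which is the sense in which N is determined by the target data amount.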
In an exemplary embodiment, the cache module may, but is not limited to, construct N memory channels, where the N memory channels are in one-to-one correspondence with the N memory units; the control module may be, but is not limited to, configured to receive the target data returned by the buffer module, where the target data signal is used to indicate reading of the target data; transmitting the target data to the server via the computing quick connect protocol; the buffer module may be, but not limited to, configured to read data in the N memory units through the N memory channels, to obtain N first data; the N first data may be spliced into the target data and sent to the control module.
Alternatively, in this embodiment, N pseudo memory channels, that is, virtual memory channels, may be, but are not limited to be, constructed in the cache module, and the N pseudo memory channels are in one-to-one correspondence with the N memory units. That is, if the cache module constructs pseudo memory channel 1, pseudo memory channel 2, pseudo memory channel 3, ..., pseudo memory channel N, and the N memory units include memory unit 1, memory unit 2, memory unit 3, ..., memory unit N, then pseudo memory channel 1 corresponds to memory unit 1, pseudo memory channel 2 corresponds to memory unit 2, pseudo memory channel 3 corresponds to memory unit 3, ..., and pseudo memory channel N corresponds to memory unit N.
Optionally, in this embodiment, in the case where the target data signal is used to indicate reading of the target data, N pseudo multi-memory channels in one-to-one correspondence with N RANKs (the N memory units) are constructed in a data buffer (the cache module), and a CS chip select signal provided by an MXC chip (the control module) enables the N RANKs simultaneously. While pseudo memory channel 1 prepares for its next read of RANK1, pseudo memory channel 2 reads the memory data of RANK2, and so on; finally, N first data, such as first data 1, first data 2, ..., first data N, are obtained respectively, and the data buffer splices and integrates the N first data and then sends them to the MXC chip.
Optionally, in this embodiment, unlike the prior art, in which the next memory unit can be read only after the previous memory unit has finished reading, N pseudo memory channels in one-to-one correspondence with the N memory units are constructed in the cache module. When memory data is read, the N pseudo memory channels are controlled by the enable signal of the control module so that the N memory units are simultaneously enabled to read the target data, and the N first data respectively read from the N memory units are spliced and then transmitted to the control module. This shortens the effective waiting time for reading memory data, improves the data reading efficiency, and improves the data transmission rate.
In an exemplary embodiment, the cache module may, but is not limited to, construct N memory channels, where the N memory channels are in one-to-one correspondence with the N memory units; the buffer module may be, but is not limited to, configured to divide the target data into N second data if the target data signal is used to indicate writing of the target data; the N second data may be written into the N memory cells through the N memory channels, respectively, but is not limited thereto.
Optionally, in this embodiment, in the case where the target data signal is used to indicate writing of the target data, N pseudo multi-memory channels in one-to-one correspondence with N RANKs (the N memory units) are constructed in the data buffer (the cache module). The data buffer divides the target data into N second data, such as second data 1, second data 2, ..., second data N, and the CS chip select signal provided by the MXC chip (the control module) enables the N RANKs simultaneously. While pseudo memory channel 1 prepares for its next write to RANK1, pseudo memory channel 2 writes the memory data of RANK2, and so on, and the N second data are written into the N RANKs through the N pseudo memory channels, respectively.
Optionally, in this embodiment, unlike the prior art, in which the next memory unit can be written only after the previous memory unit has finished writing, N pseudo memory channels in one-to-one correspondence with the N memory units are constructed in the cache module. When memory data is written, the target data is first divided by the cache module into N second data in one-to-one correspondence with the N pseudo memory channels, and the N second data are then simultaneously written into the N memory units under the enable signal of the control module. This reduces the effective waiting time for writing memory data, improves the data writing efficiency, and improves the data transmission rate.
In an exemplary embodiment, each of the N memory channels may, but is not limited to, allow for independent execution of operations on a corresponding memory cell of the N memory cells.
Alternatively, in this embodiment, if the cache module constructs pseudo memory channel 1, pseudo memory channel 2, pseudo memory channel 3, ..., pseudo memory channel N, and the N memory units include memory unit 1, memory unit 2, memory unit 3, ..., memory unit N, then pseudo memory channel 1 may, but is not limited to, be allowed to independently perform operations on memory unit 1, pseudo memory channel 2 on memory unit 2, pseudo memory channel 3 on memory unit 3, ..., and pseudo memory channel N on memory unit N.
Optionally, in this embodiment, unlike the serial read/write logic of the cache module in the prior art, in which the cache module must wait for the previous memory unit to finish reading/writing before the next memory unit can be read/written, constructing in the cache module N pseudo memory channels that are each allowed to independently operate on the corresponding one of the N memory units realizes parallel reading/writing of memory data and improves memory data read efficiency. On the basis of the existing conventional memory of the current computing node, the memory bandwidth is thus further greatly expanded and maximum utilization of memory resources can be achieved.
In one exemplary embodiment, the P memory units may be, but are not limited to, P memory ranks, or the P memory units may be, but are not limited to, P memory granule groups, where a plurality of memory granules may be, but are not limited to be, disposed in each memory granule group.
Alternatively, in this embodiment, the P memory units may be, but are not limited to, P conventional memory ranks, or the P memory units may be, but are not limited to, P memory granule groups, where a plurality of DRAM granules (memory granules) may be disposed in each memory granule group. Each memory granule group may be, but is not limited to be, formed by splicing a plurality of DRAM granules into one RANK, and the P memory granule groups may be, but are not limited to be, arranged sequentially in rows.
Optionally, in this embodiment, the types and arrangements of the N memory cells included in the P memory cells are identical to those of the P memory cells.
In one exemplary embodiment, the control module may include, but is not limited to, a memory expansion control chip, and the cache module includes a data buffer, where the chip select signal of the memory expansion control chip enables the N memory units simultaneously.
Alternatively, in this embodiment, the memory expansion control chip may be, but is not limited to, a chip operating based on a computing rapid connection protocol, and in this embodiment, the memory expansion control chip is described as an MXC chip.
Alternatively, in the present embodiment, the data buffer may be, but not limited to, a four-bus buffer, an eight-bus buffer, or the like, and specifically may be selected according to the system bus requirements, which is not limited herein.
Alternatively, in the present embodiment, N memory cells may be enabled to read N first data at the same time by a CS signal (chip select signal) of an MXC chip (memory expansion control chip); n memory cells may be enabled to write N second data simultaneously by CS signals (chip select signals) of an MXC chip (memory expansion control chip), but not limited thereto.
In one exemplary embodiment, the data amount of each memory unit may be, but is not limited to, M, and the target data amount may be, but is not limited to, M × N.
Alternatively, in the present embodiment, the data amount of each memory unit may be, but is not limited to, the amount of binary data information it stores, measured in bytes.
Alternatively, in the present embodiment, if the data amount of each memory unit is 64 bytes (M = 64) and there are 2 memory units in total (N = 2), the target data amount may be 64 × 2 = 128 bytes.
Alternatively, in this embodiment, if the P memory units are P memory granule groups each provided with a plurality of DRAM granules, the plurality of DRAM granules in one memory granule group may be, but are not limited to be, spliced into one RANK of 64 bytes (M), that is, the data amount of each memory granule group is 64 bytes, and the target data amount may then be 64 × N.
In an alternative embodiment, a schematic diagram of a server memory module is provided. Fig. 5 is a schematic diagram of a server memory module according to an alternative embodiment of the present application. As shown in fig. 5, the DRAM granules (memory granules), the Data Buffer, the MXC chip (memory expansion control chip), and other devices are all soldered onto a PCB. The DRAM granules form two Ranks (memory units in the form of memory granule groups, N=P=2) and are connected to the Data Buffer, through which a 128-byte data Burst length (the burst length of the target data amount) is formed for interaction with the MXC chip. The CXL signal (target data signal) is converted by the MXC chip and transmitted through the gold finger, which realizes communication between the server and the control module; the gold finger includes, but is not limited to, EDSFF (Enterprise and Data Center Standard Form Factor), PCIe (Peripheral Component Interconnect Express, a high-speed serial computer expansion bus standard), and the like. The gold finger also brings in CLK, RESET, and I2C signals to provide clock, reset, and management functions for the MXC chip, respectively; one SPI FLASH hangs under the MXC chip for storing chip firmware, and UART and JTAG test points are provided for Debug analysis. The SPD, PMIC, and Sensor hang under the MXC I3C interface: the SPD stores module-related information, the PMIC stores information such as module voltage and power consumption, and the Sensor provides module temperature information.
Optionally, in this embodiment, taking a memory unit as a memory granule as an example, the server memory module provided in this embodiment of the present application may break through the bandwidth limitation of the current DRAM granule itself, so that the bandwidth is doubled, and the advantage of the CXL link high bandwidth is fully utilized, so that the bandwidth parameter of the standard CXL memory module reaches the theoretical limit.
The method embodiments provided in the embodiments of the present application may be performed in a mobile terminal, a computer terminal or similar computing device. Taking the mobile terminal as an example, fig. 6 is a block diagram of a hardware structure of the mobile terminal of a memory control method according to an embodiment of the present application. As shown in fig. 6, the mobile terminal may include one or more processors 602 (only one is shown in fig. 6) (the processor 602 may include, but is not limited to, a microprocessor MCU or a processing device such as a programmable logic device FPGA) and a memory 604 for storing data, wherein the mobile terminal may further include a transmission device 606 for communication functions and an input-output device 608. It will be appreciated by those skilled in the art that the structure shown in fig. 6 is merely illustrative and not limiting of the structure of the mobile terminal described above. For example, the mobile terminal may also include more or fewer components than shown in fig. 6, or have a different configuration than shown in fig. 6.
The memory 604 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to a memory control method in the embodiment of the present application, and the processor 602 executes the computer program stored in the memory 604 to perform various functional applications and data processing, that is, implement the method described above. Memory 604 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory. In some examples, the memory 604 may further comprise memory located remotely from the processor 602, which may be connected to the mobile terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmitting device 606 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmitting device 606 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 606 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
In this embodiment, a memory control method is provided, and fig. 7 is a flowchart of the memory control method according to an embodiment of the present application, as shown in fig. 7, where the flowchart includes the following steps:
S702, performing conversion between a computing quick connection protocol and a double rate protocol on a target data signal through a control module, wherein the memory control device comprises: the control module and a cache module, the control module is connected with the cache module through the double rate protocol, the control module is further used for providing a signal interface of the computing quick connection protocol, the cache module is used for being connected with N memory units, N is a positive integer greater than 1, the target data signal is used for instructing the reading and writing of target data, the target data has a burst length equal to a target data amount, and the target data amount is the sum of the data amounts of the N memory units;
S704, caching the target data through the caching module, and performing the reading and writing of the target data on the N memory units.
Through the above steps, the target data signal is transmitted to the control module through the signal interface that provides the computing quick connection protocol; the control module, which is connected with the cache module through the double rate protocol, converts the target data signal into a data signal supporting the double rate protocol, transmits it to the cache module, and instructs the cache module to perform the reading and writing of the target data. The cache module performs the reading and writing of the target data on the N memory units, whose total data amount is the target data amount, and caches the target data whose burst length equals the target data amount. Transmission of the target data signal is thus realized through a control module capable of converting the target data signal between the computing quick connection protocol and the double rate protocol, so that target data of the total data amount of the N memory units is read or written in a single transfer on the basis of the computing quick connection protocol and the double rate protocol. This improves the read-write rate of the memory units and therefore the data transmission rate of the target data, solving the problem of a low data transmission rate in signal transmission and achieving the effect of improving the data transmission rate of signal transmission.
The main execution body of the above steps may be a server, but is not limited thereto.
Optionally, in this embodiment, the memory control device may be, but is not limited to being, used to connect a data signal with a memory unit and to control communication between the data signal and the memory unit, so as to implement reading and writing of the data carried in the data signal and to implement transmission of data between a CPU (Central Processing Unit) or a server and the memory unit.
Optionally, in this embodiment, the control module may, but is not limited to, convert the CXL high-speed signal (the target data signal) into a DDR high-speed signal supporting the DDR protocol and deliver the DDR signal to the buffer module; the buffer module performs the reading and writing of the target data in the DDR signal on the N memory units, integrating and buffering the target data to form a burst whose length (Burst length) equals the total data amount of the N memory units, so that the control module and the buffer module can transmit, in each transfer, target data of the total data amount of the N memory units. The buffer module is connected with the memory module, and the value of N (a positive integer greater than 1) may be, but is not limited to being, determined according to the target data amount.
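As a minimal illustration of the conversion flow just described, the following Python sketch models a CXL-side request being re-issued as a DDR-side command whose burst covers all N ranks at once. All names here (`CxlRequest`, `to_ddr_command`, the command fields) are hypothetical stand-ins, not part of the patent or of any real CXL/DDR API.

```python
from dataclasses import dataclass

@dataclass
class CxlRequest:
    op: str           # "read" or "write"
    addr: int         # target address
    burst_bytes: int  # target data amount = sum over the N memory units

def to_ddr_command(req: CxlRequest) -> dict:
    # The MXC-style control module re-issues the CXL request on the DDR side;
    # the chip select field stands in for enabling all N ranks simultaneously.
    return {
        "cmd": req.op.upper(),
        "addr": req.addr,
        "burst": req.burst_bytes,
        "cs": "all_ranks",
    }

cmd = to_ddr_command(CxlRequest(op="read", addr=0x1000, burst_bytes=256))
print(cmd["cmd"], cmd["burst"])  # READ 256
```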
In the technical solution provided in step S702, the double rate protocol may be, but is not limited to, a protocol supporting a higher external data transmission rate, such as the DDR (Double Data Rate) protocol; the computing quick connection protocol may be, but is not limited to, the CXL (Compute Express Link) protocol, a low-latency, high-rate memory bus protocol mainly used for accelerated data transmission between a CPU and a device. In the embodiments of the present application, the double rate protocol is taken as the DDR protocol and the computing quick connection protocol is taken as the CXL protocol by way of example.
Alternatively, in this embodiment, the control module may include, but is not limited to, a memory expansion controller chip supporting the CXL protocol, such as an MXC (Memory Expander Controller) chip.
Alternatively, in the present embodiment, the target data signal may be, but is not limited to, a control signal issued by the CPU or the server to request the reading and writing of target data, and the target data amount may, but is not limited to, indicate the size of the data to be read or written.
Alternatively, in the present embodiment, the burst length of the target data amount may be, but is not limited to, the number of cycles over which adjacent memory cells in the same row transfer data continuously, and may be, but is not limited to being, denoted Burst Length (BL). A burst of the target data amount may be, but is not limited to, a mode of continuous data transfer across adjacent memory cells in the same row.
In the solution provided in step S704, the buffer module may include, but is not limited to, a memory for temporarily storing data when the data passes between elements having different transmission capabilities.
Alternatively, in this embodiment, the memory unit may include, but is not limited to, a computer unit that may be addressed by a CPU through a bus and perform read/write operations, and the memory unit may include, but is not limited to, a memory bank, a memory granule, and the like.
In one exemplary embodiment, the caching of the target data by the caching module and the reading and writing of the target data on the N memory units may be, but are not limited to being, performed as follows: when the target data signal is used to instruct reading of the target data, the data in the N memory units are read through N memory channels respectively, obtaining N first data, wherein the N memory channels are constructed in the cache module and are in one-to-one correspondence with the N memory units; the N first data are then spliced into the target data and sent to the control module.
Alternatively, in this embodiment, N pseudo memory channels, that is, virtual memory channels, may be, but are not limited to being, constructed in the cache module, in one-to-one correspondence with the N memory units. That is, if pseudo memory channel 1, pseudo memory channel 2, pseudo memory channel 3, …, pseudo memory channel N are constructed in the cache module, and the N memory units include memory unit 1, memory unit 2, memory unit 3, …, memory unit N, then pseudo memory channel 1 corresponds to memory unit 1, pseudo memory channel 2 corresponds to memory unit 2, pseudo memory channel 3 corresponds to memory unit 3, …, and pseudo memory channel N corresponds to memory unit N.
Optionally, in this embodiment, when the target data signal is used to instruct reading of the target data, the target data may be, but is not limited to being, read through the data buffer (buffer module) and the N pseudo memory channels constructed in the buffer module in one-to-one correspondence with the N RANKs (the N memory units). The CS chip select signal provided by the MXC chip (control module) enables the N RANKs simultaneously: while pseudo memory channel 1 prepares its next read of memory data from RANK 1, pseudo memory channel 2 reads the memory data of RANK 2, and so on, finally obtaining N first data such as first data 1, first data 2, …, first data N. The data buffer splices and integrates the N first data and then sends them to the MXC chip.
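The parallel read path above can be sketched in Python under stated assumptions: `read_rank` is a hypothetical stand-in for one pseudo memory channel reading its rank, and a thread pool stands in for the N channels operating simultaneously under one chip select. This is an illustrative model only, not the patent's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def read_rank(rank_id: int, m_bytes: int) -> bytes:
    # Stand-in for pseudo memory channel `rank_id` reading M bytes from its rank;
    # each rank returns its id repeated so the splice boundaries are visible.
    return bytes([rank_id]) * m_bytes

def parallel_read(n_ranks: int, m_bytes: int) -> bytes:
    # One chip select enables all N ranks; each pseudo channel reads independently.
    with ThreadPoolExecutor(max_workers=n_ranks) as pool:
        first_data = list(pool.map(lambda r: read_rank(r, m_bytes), range(n_ranks)))
    # The data buffer splices the N first data into one burst of M * N bytes.
    return b"".join(first_data)

target = parallel_read(n_ranks=2, m_bytes=128)
print(len(target))  # 256
```

`ThreadPoolExecutor.map` preserves input order, so the splice keeps the first data in rank order regardless of which read finishes first.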
Optionally, in this embodiment, unlike the prior art, in which the next memory unit can be read only after the previous memory unit has been read, the buffer module is constructed with N pseudo memory channels in one-to-one correspondence with the N memory units; when memory data is read, the N pseudo memory channels are controlled by the enable signal of the control module, the N memory units are enabled simultaneously to read the target data, and the N first data read from the N memory units are spliced and then transmitted to the control module. This reduces the waiting time of memory data reads and improves the data reading efficiency, thereby improving the data transmission rate.
In one exemplary embodiment, the caching of the target data by the caching module and the reading and writing of the target data on the N memory units may be, but are not limited to being, performed as follows: when the target data signal is used to instruct writing of the target data, the target data is divided into N second data, wherein N memory channels are constructed in the cache module and are in one-to-one correspondence with the N memory units; the N second data are then written into the N memory units through the N memory channels respectively.
Optionally, in this embodiment, when the target data signal is used to instruct writing of the target data, the target data may be, but is not limited to being, divided into N second data such as second data 1, second data 2, …, second data N through the N pseudo memory channels constructed in the data buffer (buffer module) in one-to-one correspondence with the N RANKs (the N memory units). The CS chip select signal provided by the MXC chip (control module) enables the N RANKs simultaneously: while pseudo memory channel 1 prepares its next write of memory data to RANK 1, pseudo memory channel 2 writes memory data to RANK 2, and so on. The N second data are thus written into the N RANKs through the N pseudo memory channels respectively.
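Symmetrically, the write path can be sketched as dividing the target data into N equal second-data chunks and writing them through the N channels at once. `ranks` and `write_rank` are hypothetical stand-ins introduced for illustration, not patent API.

```python
from concurrent.futures import ThreadPoolExecutor

ranks = {}  # hypothetical stand-in for the N memory ranks

def write_rank(rank_id: int, chunk: bytes) -> None:
    # Stand-in for pseudo memory channel `rank_id` writing its second data.
    ranks[rank_id] = chunk

def parallel_write(target: bytes, n_ranks: int) -> None:
    m = len(target) // n_ranks
    # Divide the target data into N second data of M bytes each.
    chunks = [target[i * m:(i + 1) * m] for i in range(n_ranks)]
    # One chip select enables all N ranks; the N channels write simultaneously.
    with ThreadPoolExecutor(max_workers=n_ranks) as pool:
        list(pool.map(lambda r: write_rank(r, chunks[r]), range(n_ranks)))

parallel_write(bytes(256), n_ranks=2)
print(len(ranks), len(ranks[0]))  # 2 128
```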
Optionally, in this embodiment, unlike the prior art, in which writing of the next memory unit is performed only after the previous memory unit has been completely written, the target data is first divided by the buffer module into the second data corresponding one-to-one to the N pseudo memory channels constructed in the buffer module, and the enable signal of the control module then enables the N pseudo memory channels to write the N second data simultaneously. This reduces the waiting time of memory data writes and improves the data writing efficiency, thereby improving the data transmission rate.
In an exemplary embodiment, each of the N memory channels may, but is not limited to, allow for independent execution of operations on a corresponding memory cell of the N memory cells.
Alternatively, in this embodiment, if pseudo memory channel 1, pseudo memory channel 2, pseudo memory channel 3, …, pseudo memory channel N are constructed in the cache module, and the N memory units include memory unit 1, memory unit 2, memory unit 3, …, memory unit N, then pseudo memory channel 1 may, but is not limited to, allow operations to be performed independently on memory unit 1, pseudo memory channel 2 on memory unit 2, pseudo memory channel 3 on memory unit 3, …, and pseudo memory channel N on memory unit N.
Optionally, in this embodiment, unlike the serial read-write logic of the buffer module in the prior art, in which the buffer module must wait for the reading or writing of the previous memory unit to complete before the next memory unit can be read or written, constructing in the buffer module N pseudo memory channels that allow the corresponding N memory units to be operated independently realizes parallel reading and writing of memory data and improves memory read efficiency. The memory bandwidth is thereby greatly expanded, and memory resources can be utilized to the maximum on the basis of the existing conventional memory of the current computing node.
In an exemplary embodiment, the N memory units are N memory ranks, or the N memory units are N memory granule groups, where a plurality of memory granules are disposed in each memory granule group.
Alternatively, in this embodiment, the N memory cells may be, but not limited to, N conventional memory banks (memory ranks), and the N memory cells may be, but not limited to, N memory granule groups, where a plurality of DRAM granules (memory granules) may be disposed in each memory granule group. Each memory granule group may be, but is not limited to, a form in which a plurality of DRAM granules are spliced into one RANK, and each memory granule group of the N memory granule groups may be, but is not limited to, sequentially arranged in a row form.
In an exemplary embodiment, before the target data is cached by the caching module and the reading and writing of the target data are performed on the N memory units, the following may be adopted, but is not limited to: enabling the N memory units simultaneously through a chip select signal of a memory expansion control chip, wherein the control module comprises the memory expansion control chip, the cache module comprises a data buffer, the memory expansion control chip is provided with a computing quick connection protocol interface and a double rate protocol interface, the computing quick connection protocol interface is used for being connected with a computing quick connection protocol device, and the double rate protocol interface is connected with the data buffer.
Alternatively, in this embodiment, the memory expansion control chip may be, but is not limited to, a chip operating on the basis of the computing quick connection protocol; in this embodiment, the memory expansion control chip is described as an MXC chip.
Alternatively, in the present embodiment, the data buffer may be, but not limited to, a four-bus buffer, an eight-bus buffer, or the like, and specifically may be selected according to the system bus requirements, which is not limited herein.
Optionally, in this embodiment, before the buffer module caches the target data and performs the reading and writing of the target data on the N memory units, the N memory units may be, but are not limited to being, simultaneously enabled to read the N first data through the CS signal (chip select signal) of the MXC chip (memory expansion control chip); likewise, the N memory units may be, but are not limited to being, simultaneously enabled to write the N second data through the CS signal of the MXC chip.
In one exemplary embodiment, the data amount of each memory cell may be, but is not limited to being, M, and the target data amount may be, but is not limited to being, M×N.
Alternatively, in the present embodiment, the data amount of each memory cell may be, but is not limited to, the amount of binary data, in bytes, that the memory cell can store.
Alternatively, in the present embodiment, if the data size of each memory cell is 128 bytes (M), and 2 (N) memory cells are total, the target data size may be 128×2=256 bytes.
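The arithmetic of the example above can be written out directly; the figures (M = 128 bytes, N = 2 units) are the ones used in the text.

```python
# Worked example of the target data amount: each memory unit holds M bytes
# and there are N units, so one burst carries M * N bytes in total.
M_BYTES_PER_UNIT = 128  # M: data amount of each memory unit
N_UNITS = 2             # N: number of memory units

target_data_amount = M_BYTES_PER_UNIT * N_UNITS
print(target_data_amount)  # 256
```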
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method described in the embodiments of the present application.
This embodiment also provides a memory control device, which is used to implement the foregoing embodiments and preferred implementations; what has already been described will not be repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the following embodiments are preferably implemented in software, implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 8 is a block diagram of a memory control apparatus according to an embodiment of the present application, and as shown in fig. 8, the apparatus is applied to a memory control device, and includes:
a conversion module 802, configured to perform conversion between a computing quick connection protocol and a double rate protocol on a target data signal through a control module, where the memory control device includes: the control module and a cache module, the control module is connected with the cache module through the double rate protocol, the control module is further used for providing a signal interface of the computing quick connection protocol, the cache module is used for being connected with N memory units, N is a positive integer greater than 1, the target data signal is used for instructing the reading and writing of target data, the target data has a burst length equal to a target data amount, and the target data amount is the sum of the data amounts of the N memory units;
and a processing module 804, configured to cache the target data through the caching module and to perform the reading and writing of the target data on the N memory units.
Through the above device, the target data signal is transmitted to the control module through the signal interface that provides the computing quick connection protocol; the control module, which is connected with the cache module through the double rate protocol, converts the target data signal into a data signal supporting the double rate protocol, transmits it to the cache module, and instructs the cache module to perform the reading and writing of the target data. The cache module performs the reading and writing of the target data on the N memory units, whose total data amount is the target data amount, and caches the target data whose burst length equals the target data amount. Transmission of the target data signal is thus realized through a control module capable of converting the target data signal between the computing quick connection protocol and the double rate protocol, so that target data of the total data amount of the N memory units is read or written in a single transfer on the basis of the computing quick connection protocol and the double rate protocol. This improves the read-write rate of the memory units and therefore the data transmission rate of the target data, solving the problem of a low data transmission rate in signal transmission and achieving the effect of improving the data transmission rate of signal transmission.
In one exemplary embodiment, the processing module is configured to: under the condition that the target data signal is used for indicating to read the target data, respectively reading data in the N memory units through N memory channels to obtain N first data, wherein the N memory channels are constructed in the cache module, and the N memory channels are in one-to-one correspondence with the N memory units; and splicing the N pieces of first data into the target data and sending the target data to the control module.
In one exemplary embodiment, the processing module is further configured to: dividing the target data into N second data under the condition that the target data signal is used for indicating writing of the target data, wherein the N memory channels are constructed in the cache module and are in one-to-one correspondence with the N memory units; and writing the N second data into the N memory units through the N memory channels respectively.
In one exemplary embodiment, each of the N memory channels in the device allows for independent execution of operations on a corresponding one of the N memory cells.
In an exemplary embodiment, the N memory units in the apparatus are N memory columns, or the N memory units are N memory granule groups, where a plurality of memory granules are disposed in each memory granule group.
In an exemplary embodiment, the apparatus further comprises: an enabling module, configured to enable the N memory units simultaneously through a chip select signal of a memory expansion control chip before the target data is cached by the caching module and the reading and writing of the target data are performed on the N memory units, where the control module comprises the memory expansion control chip, the cache module comprises a data buffer, the memory expansion control chip is provided with a computing quick connection protocol interface and a double rate protocol interface, the computing quick connection protocol interface is used for being connected with a computing quick connection protocol device, and the double rate protocol interface is connected with the data buffer.
In an exemplary embodiment, the data amount of each of the memory cells in the apparatus is M, and the target data amount is M×N.
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; alternatively, the above modules may be located in different processors in any combination.
Embodiments of the present application also provide a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
In one exemplary embodiment, the computer readable storage medium may include, but is not limited to: a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other various media capable of storing a computer program.
Embodiments of the present application also provide an electronic device. Fig. 9 is a schematic diagram of an alternative electronic device according to an embodiment of the present application. As shown in fig. 9, the electronic device includes one or more processors and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the memory control method described above.
In an exemplary embodiment, the electronic device may further include a transmission device connected to the processor, and an input/output device connected to the processor.
Specific examples in this embodiment may refer to the examples described in the foregoing embodiments and the exemplary implementation, and this embodiment is not described herein.
It will be appreciated by those skilled in the art that the modules or steps of the present application described above may be implemented by a general-purpose computing device; they may be concentrated on a single computing device or distributed across a network of computing devices; and they may be implemented in program code executable by computing devices, so that they may be stored in a storage device for execution by computing devices. In some cases, the steps shown or described may be performed in an order different from that shown or described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the same, but rather, various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the principles of the present application should be included in the protection scope of the present application.

Claims (24)

1. A memory control device, comprising: a control module and a cache module, wherein,
the control module is connected with the cache module through a double rate protocol, the control module is also used for providing a signal interface of a computing quick connection protocol, the cache module is used for being connected with N memory units, and N is a positive integer greater than 1;
the control module is configured to perform conversion between the computing quick connection protocol and the double rate protocol on a target data signal, where the target data signal is used to instruct the reading and writing of target data, the target data has a burst length equal to a target data amount, and the target data amount is the sum of the data amounts of the N memory units;
the buffer module is used for buffering the target data and executing the reading and writing of the target data on the N memory units.
2. The apparatus of claim 1, wherein N memory channels are constructed in the cache module, the N memory channels being in one-to-one correspondence with the N memory units;
the control module is used for receiving the target data returned by the cache module under the condition that the target data signal is used for indicating to read the target data;
The buffer module is used for reading the data in the N memory units through the N memory channels respectively to obtain N first data; and splicing the N pieces of first data into the target data and sending the target data to the control module.
3. The apparatus of claim 1, wherein N memory channels are constructed in the cache module, the N memory channels being in one-to-one correspondence with the N memory units;
the buffer module is used for dividing the target data into N pieces of second data under the condition that the target data signal is used for indicating writing of the target data; and writing the N second data into the N memory units through the N memory channels respectively.
4. A device as claimed in claim 2 or 3, wherein each of the N memory channels allows independent execution of operations on a corresponding one of the N memory cells.
5. The apparatus of claim 1, wherein the N memory cells are N memory ranks, or the N memory cells are N memory granule groups, each memory granule group having a plurality of memory granules disposed therein.
6. The apparatus of claim 1, wherein the control module comprises a memory expansion control chip, and the cache module comprises a data buffer, wherein,
The memory expansion control chip is provided with a computing quick connection protocol interface and a double rate protocol interface, the computing quick connection protocol interface is used for being connected with computing quick connection protocol equipment, and the double rate protocol interface is connected with the data buffer;
the chip select signals of the memory expansion control chip enable the N memory units at the same time.
7. The apparatus of claim 1, wherein the data amount of each of the memory cells is M and the target data amount is M×N.
8. A server memory module, comprising: a memory module, a control module and a cache module, wherein,
the control module is provided with a computing quick connection protocol interface and a double rate protocol interface, the control module is connected with the cache module through the double rate protocol interface, the computing quick connection protocol interface is used for being connected with a server and exchanging computing quick connection protocol signals, the cache module is connected with the memory module, the memory module comprises P memory units, and P is a positive integer greater than 1;
the control module is configured to perform conversion between the computing quick connection protocol and the double rate protocol on a target data signal, where the target data signal is used to instruct the reading and writing of target data, the target data has a burst length equal to a target data amount, the target data amount is the sum of the data amounts of N memory units, and the P memory units include the N memory units, where N is greater than 1 and less than or equal to P;
The buffer module is used for buffering the target data and executing the reading and writing of the target data on the N memory units.
9. The server memory module of claim 8, wherein N memory channels are constructed in the cache module, the N memory channels being in one-to-one correspondence with the N memory units;
the control module is used for receiving the target data returned by the cache module when the target data signal is used to instruct reading of the target data, and for transmitting the target data to the server via the computing quick connection protocol;
the buffer module is used for reading the data in the N memory units through the N memory channels respectively to obtain N first data; and splicing the N pieces of first data into the target data and sending the target data to the control module.
10. The server memory module of claim 8, wherein N memory channels are constructed in the cache module, the N memory channels being in one-to-one correspondence with the N memory units;
the buffer module is used for dividing the target data into N pieces of second data under the condition that the target data signal is used for indicating writing of the target data; and writing the N second data into the N memory units through the N memory channels respectively.
11. The server memory module of claim 9 or 10, wherein each of the N memory channels allows independent execution of operations on a corresponding one of the N memory cells.
12. The server memory module of claim 8, wherein the P memory units are P memory ranks or the P memory units are P memory granule groups, each memory granule group having a plurality of memory granules disposed therein.
13. The server memory module of claim 8, wherein the control module comprises a memory expansion control chip, and the cache module comprises a data buffer, wherein,
the chip select signals of the memory expansion control chip enable the N memory units at the same time.
14. The server memory module of claim 8, wherein the data amount of each memory cell is M and the target data amount is M×N.
15. A memory control method, applied to a memory control device, comprising:
performing, by a control module, conversion between a computing quick connection protocol and a double rate protocol on a target data signal, wherein the memory control device comprises: the control module and a cache module, the control module is connected with the cache module through the double rate protocol, the control module is further used for providing a signal interface of the computing quick connection protocol, the cache module is used for being connected with N memory units, N is a positive integer greater than 1, the target data signal is used for instructing the reading and writing of target data, the target data has a burst length equal to a target data amount, and the target data amount is the sum of the data amounts of the N memory units;
And caching the target data through the caching module, and executing reading and writing of the target data on the N memory units.
16. The method of claim 15, wherein the caching the target data through the cache module and performing the reading or writing of the target data on the N memory units comprises:
when the target data signal indicates reading of the target data, reading data from the N memory units through N memory channels, respectively, to obtain N pieces of first data, wherein the N memory channels are constructed in the cache module and correspond to the N memory units one to one;
and splicing the N pieces of first data into the target data and sending the target data to the control module.
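The read path of claim 16 gathers one piece of first data per channel and splices the pieces back into the full target data. A hedged sketch, with the same illustrative channel objects as above (the `read_striped` name is an assumption):

```python
def read_striped(channels: list[dict]) -> bytes:
    """Read one piece of first data from each of the N memory channels and
    splice the pieces, in channel order, into the complete target data."""
    pieces = [channel["unit"] for channel in channels]  # N reads may run in parallel
    return b"".join(pieces)  # splice the N pieces of first data into the target data
```

This is the inverse of the write-side split: as long as pieces are joined in the same channel order they were divided in, the original target data is reconstructed exactly.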
17. The method of claim 15, wherein the caching the target data through the cache module and performing the reading or writing of the target data on the N memory units comprises:
when the target data signal indicates writing of the target data, dividing the target data into N pieces of second data, wherein N memory channels are constructed in the cache module and correspond to the N memory units one to one;
and writing the N pieces of second data into the N memory units through the N memory channels, respectively.
18. The method of claim 16 or 17, wherein each of the N memory channels is allowed to independently perform operations on a corresponding one of the N memory units.
19. The method of claim 15, wherein the N memory units are N memory ranks, or the N memory units are N memory chip groups, each memory chip group having a plurality of memory chips disposed therein.
20. The method of claim 15, wherein before the caching the target data through the cache module and performing the reading or writing of the target data on the N memory units, the method further comprises:
enabling the N memory units simultaneously through a chip select signal of a memory expansion control chip, wherein the control module comprises the memory expansion control chip, the cache module comprises a data buffer, the memory expansion control chip is provided with a Compute Express Link (CXL) protocol interface and a double data rate (DDR) protocol interface, the CXL protocol interface is configured to be connected with a CXL device, and the DDR protocol interface is connected with the data buffer.
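The shared chip select of claims 13 and 20, where a single assertion enables all N memory units at once, can be sketched as follows. The controller class, its method names, and the dict-based unit model are hypothetical stand-ins for the hardware:

```python
class MemoryExpansionController:
    """Sketch of a controller whose single chip select line fans out to N units."""

    def __init__(self, n_units: int):
        # Each memory unit is modeled as a dict holding an 'enabled' flag.
        self.units = [{"enabled": False} for _ in range(n_units)]

    def assert_chip_select(self) -> None:
        # One chip select assertion reaches every unit at the same time, so all
        # N units are enabled together rather than one per command.
        for unit in self.units:
            unit["enabled"] = True
```

Enabling all N units on one chip select is what allows a single command to fan a wide burst out across the N channels instead of addressing each unit with its own command.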
21. The method of claim 15, wherein the data volume of each memory unit is M and the target data volume is M×N.
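Claims 14 and 21 fix the arithmetic: each unit contributes a data volume M, so one transfer moves M×N in total. A small worked sketch; the per-beat bus width and the function names are assumptions added for illustration:

```python
def target_data_volume(m_per_unit: int, n_units: int) -> int:
    # The target data volume is the sum of the N units' data volumes: M * N.
    return m_per_unit * n_units

def burst_beats(volume_bytes: int, beat_bytes: int) -> int:
    # Hypothetical sizing helper: how many burst beats carry the target volume
    # when the interface transfers beat_bytes per beat.
    assert volume_bytes % beat_bytes == 0, "volume must fill whole beats"
    return volume_bytes // beat_bytes
```

For example, N = 4 units of M = 64 bytes give a 256-byte target data volume, which an interface moving 8 bytes per beat transfers in a 32-beat burst.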
22. A memory control apparatus, applied to a memory control device, the apparatus comprising:
a conversion module, configured to perform, through a control module, conversion between a Compute Express Link (CXL) protocol and a double data rate (DDR) protocol on a target data signal, wherein the memory control device comprises the control module and a cache module, the control module is connected with the cache module through the DDR protocol, the control module is further configured to provide a CXL signal interface, the cache module is configured to be connected with N memory units, N is a positive integer greater than 1, the target data signal indicates reading or writing of target data, the target data has a burst length corresponding to a target data volume, and the target data volume is the sum of the data volumes of the N memory units;
and a processing module, configured to cache the target data through the cache module and perform the reading or writing of the target data on the N memory units.
23. A computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the method of any one of claims 15 to 21.
24. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 15 to 21 when executing the computer program.
CN202310742179.7A 2023-06-21 2023-06-21 Memory control equipment, method and device and server memory module Pending CN116483288A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310742179.7A CN116483288A (en) 2023-06-21 2023-06-21 Memory control equipment, method and device and server memory module

Publications (1)

Publication Number Publication Date
CN116483288A (en) 2023-07-25

Family

ID=87225422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310742179.7A Pending CN116483288A (en) 2023-06-21 2023-06-21 Memory control equipment, method and device and server memory module

Country Status (1)

Country Link
CN (1) CN116483288A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103946826A * 2011-09-30 2014-07-23 Intel Corporation Apparatus and method for implementing a multi-level memory hierarchy over common memory channels
CN104094351A * 2012-01-31 2014-10-08 Hewlett-Packard Development Company, L.P. Memory module buffer data storage
CN105550119A * 2016-01-29 2016-05-04 National University of Defense Technology Simulation device based on JTAG protocol
CN111638992A * 2019-03-01 2020-09-08 Intel Corporation Microchip-based packetization
CN113778914A * 2020-06-09 2021-12-10 Huawei Technologies Co., Ltd. Apparatus, method, and computing device for performing data processing
CN115391269A * 2021-05-24 2022-11-25 Zhejiang Nanometer Technology Co., Ltd. Workload certification calculation chip, data processing method and electronic equipment
CN116244074A * 2023-02-10 2023-06-09 Suzhou Inspur Intelligent Technology Co., Ltd. Memory module, data read-write method and device, server and electronic equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230725