CN110941395B - Dynamic random access memory, memory management method, system and storage medium - Google Patents

Dynamic random access memory, memory management method, system and storage medium

Info

Publication number
CN110941395B
CN110941395B
Authority
CN
China
Prior art keywords
instruction set
processing unit
interface
mapping area
central processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911121029.4A
Other languages
Chinese (zh)
Other versions
CN110941395A (en)
Inventor
赖振楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hosin Global Electronics Co Ltd
Original Assignee
Hosin Global Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hosin Global Electronics Co Ltd filed Critical Hosin Global Electronics Co Ltd
Priority to CN201911121029.4A priority Critical patent/CN110941395B/en
Publication of CN110941395A publication Critical patent/CN110941395A/en
Application granted granted Critical
Publication of CN110941395B publication Critical patent/CN110941395B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • G06F13/16Handling requests for interconnection or transfer for access to memory bus
    • G06F13/1668Details of memory controller
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0658Controller construction arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0673Single storage device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/32Address formation of the next instruction, e.g. by incrementing the instruction counter
    • G06F9/321Program or instruction counter, e.g. incrementing
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)
  • Dram (AREA)

Abstract

The invention provides a dynamic random access memory, a memory management method, a system and a storage medium. The dynamic random access memory comprises a circuit substrate, and a DRAM chipset, a memory controller, a first interface and a second interface integrated on the circuit substrate. The memory controller is connected with the DRAM chipset and the first interface respectively and responds to read-write requests of the central processing unit connected to the first interface. The memory controller is also connected with the second interface; when the instruction set waiting to be read by the central processing unit in the DRAM chipset meets a preset condition, the subsequent instruction set of the instruction set in the DRAM chipset is obtained from the mass storage device through the second interface and stored in the DRAM chipset. The invention can keep the central processing unit in a highly efficient running state at all times, is suitable for fields such as cloud computing, and can greatly improve the running efficiency of the system.

Description

Dynamic random access memory, memory management method, system and storage medium
Technical Field
The present invention relates to the field of computers, and more particularly, to a dynamic random access memory, a memory management method, a memory management system, and a storage medium.
Background
Currently, DRAM (Dynamic Random Access Memory) technology is well developed; the main types in use are Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate (DDR) SDRAM, and its 2nd, 3rd and 4th generations (DDR2, DDR3 and DDR4 SDRAM). For these types of DRAM, a memory controller transmits control commands, including clock signals, command control signals and address signals, to the DRAM chips (i.e., the memory granules), and the CPU (central processing unit) performs read and write operations of data signals on the DRAM chips by means of these control commands.
When a computer system runs a program, the program and the data to be executed by the CPU are first placed in the DRAM. During execution, the CPU fetches an instruction from the DRAM according to the contents of the current program counter and executes it, then fetches and executes the next instruction, and so on until the program's end instruction is reached. The whole working process is a continuous cycle of fetching and executing instructions, with the computed results finally written to the memory addresses specified by the instructions.
However, because DRAM is costly and therefore generally limited in capacity, most programs are stored in relatively low-cost mass storage devices such as hard disks and solid state drives. When the computer is running, the CPU must move data from the mass storage device into the DRAM and write data from the DRAM back to the mass storage device. Moreover, the interaction speed between the mass storage device and the central processing unit is far lower than that between the central processing unit and the DRAM, which greatly affects the overall operating efficiency of the computer system.
Disclosure of Invention
The technical problem to be solved by the invention is that, in existing computer systems, the interaction speed between the central processing unit and the mass storage device limits the running efficiency; to this end, the invention provides a dynamic random access memory, a memory management method, a memory management system and a storage medium.
The technical solution adopted by the invention to solve the above technical problem is to provide a dynamic random access memory, which comprises a circuit substrate, and a DRAM chipset, a memory controller, a first interface for connecting a central processing unit and a second interface for connecting a mass storage device integrated on the circuit substrate; the memory controller is respectively connected with the DRAM chipset and the first interface, and, in response to a read-write request of the central processing unit connected to the first interface, acquires an instruction set from the DRAM chipset and sends it to the central processing unit through the first interface, and writes execution result data of the central processing unit into the DRAM chipset;
the memory controller is connected with the second interface, and when the instruction set waiting to be read by the central processing unit in the DRAM chipset meets a preset condition, the subsequent instruction set of the instruction set in the DRAM chipset is obtained from the mass storage device through the second interface and stored in the DRAM chipset.
Preferably, the DRAM chipset includes at least two logical storage areas serving as a main mapping area and standby mapping areas, where the logical storage area in which the instruction set currently being read by the central processing unit is located is the main mapping area, and the other logical storage areas are standby mapping areas;
the preset conditions are as follows: the number of the instruction sets waiting to be read in the main mapping area is smaller than a preset value, or the time for executing the instruction sets waiting to be read in the main mapping area in the central processing unit is smaller than a preset time.
Preferably, when the instruction set waiting to be read by the central processing unit in the DRAM chipset meets the preset condition, the memory controller stores the subsequent instruction set of the instruction set in the main mapping area, obtained from the mass storage device through the second interface, into a spare mapping area;
the at least two logic memory areas switch between the main mapping area and the standby mapping area according to the program address specified by the program counter in the central processing unit.
Preferably, the two logical storage areas are equal in size, and the subsequent instruction set acquired by the memory controller is equal in size to a logical storage area;
before storing the subsequent instruction set of the instruction set in the main mapping area to the spare mapping area, if the content of the spare mapping area is updated, the memory controller writes the content in the spare mapping area back to the original address of the mass storage device.
Preferably, the first interface is a DRAM interface, the second interface is a PCIE interface, and the mass storage device is connected to the second interface through a PCIE bus.
Preferably, the mass storage device is constituted by a mass flash memory chip integrated onto the circuit substrate, and the mass flash memory chip is connected to the memory controller through the second interface.
The embodiment of the invention also provides a memory management method, wherein the memory comprises a DRAM chipset and is connected to a central processing unit through a first interface and to a mass storage device through a second interface, and the method comprises the following steps:
in response to a request of the central processing unit, sending an instruction set stored in the DRAM chipset to the central processing unit through the first interface for execution, and writing execution result data of the central processing unit to the DRAM chipset;
and when the instruction set waiting to be read by the central processing unit in the DRAM chipset meets the preset condition, acquiring a subsequent instruction set of the instruction set in the DRAM chipset from a mass storage device through the second interface, and storing the subsequent instruction set into the DRAM chipset.
Preferably, the DRAM chipset includes at least two logical storage areas that are a main mapping area and a standby mapping area, where the logical storage area where an instruction set currently sent to the central processing unit is located is the main mapping area, the other logical storage areas are standby mapping areas, and the at least two logical storage areas switch the main mapping area and the standby mapping area according to a program address specified by a program counter in the central processing unit;
the preset conditions are as follows: the number of the instruction sets waiting to be read in the main mapping area is smaller than a preset value, or the time for executing the instruction sets waiting to be read in the main mapping area in the central processing unit is smaller than a preset time;
the retrieving a subsequent instruction set of the instruction set in the DRAM chipset from a mass storage device via the second interface and storing the subsequent instruction set to the DRAM chipset comprises:
acquiring a subsequent instruction set of the instruction set in the main mapping area from a mass storage device through the second interface, and storing the subsequent instruction set in a spare mapping area;
and before storing the subsequent instruction sets of the instruction sets in the main mapping area into a spare mapping area, if the content of the spare mapping area is updated, writing the content in the spare mapping area back to the original address of the mass storage device.
The invention also provides a computer system comprising a central processing unit and a dynamic random access memory, wherein the dynamic random access memory comprises a circuit substrate, and a DRAM chipset, a memory controller, a first interface for connecting the central processing unit and a second interface for connecting a mass storage device integrated on the circuit substrate; the memory controller comprises a storage unit, a processing unit and a computer program stored in the storage unit and executable on the processing unit, and the processing unit implements the steps of the above memory management method when executing the computer program.
The present invention also provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the memory management method as described above.
According to the dynamic random access memory, memory management method, system and storage medium of the invention, the memory controller directly updates the content of the DRAM chipset based on the instruction set being executed by the central processing unit, so the central processing unit does not need to interact with the mass storage device and can always remain in a highly efficient running state; the invention is suitable for fields such as cloud computing and can greatly improve the running efficiency of the system.
Drawings
FIG. 1 is a schematic diagram of a DRAM according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the DRAM interacting with a CPU and a mass storage device according to an embodiment of the present invention;
FIG. 3 is a flowchart of a memory management method according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
FIG. 1 is a schematic diagram of a dynamic random access memory according to an embodiment of the invention. The dynamic random access memory can be applied to a computer system, such as a cloud server, and is used for temporarily storing programs and data executed by a central processing unit. The dynamic random access memory of the present embodiment includes a circuit substrate 10, and a DRAM chipset 11, a memory controller 12, a first interface 13 and a second interface 14 integrated on the circuit substrate 10. The DRAM chipset 11 may specifically comprise a plurality of DRAM memory granules (chips).
The first interface 13 may be a DRAM interface, and the dynamic random access memory may interact with the central processing unit at a high speed through the first interface 13; the second interface 14 may be a PCIE (peripheral component interconnect express, high-speed serial computer expansion bus standard) interface, and the dynamic random access memory or the memory controller 12 may be connected to a mass storage device through the second interface 14, where the mass storage device may be an SSD (Solid State Disk) or an HDD (Hard Disk Drive).
Within the circuit substrate 10, the memory controller 12 is connected to the DRAM chipset 11 and the first interface 13 respectively, so that the central processing unit connected to the first interface 13 can read instruction sets from the DRAM chipset 11 and write data to the DRAM chipset 11 through the first interface 13 and the memory controller 12 (specifically, the central processing unit acquires and executes instruction sets from the DRAM chipset 11 according to its program pointer). The memory controller 12 is also connected to the DRAM chipset 11 and the second interface 14 respectively, enabling the DRAM chipset 11 to exchange data with a mass storage device connected to the second interface 14. Specifically, when the instruction set waiting to be read by the central processing unit in the DRAM chipset 11 (i.e., the instruction set not yet read by the central processing unit, which may include instruction codes and data) meets the preset condition, the memory controller 12 obtains the subsequent instruction set (also including instruction codes and data) of the instruction set in the DRAM chipset 11 from the mass storage device through the second interface 14, and stores the subsequent instruction set into the DRAM chipset 11.
The dynamic random access memory uses the memory controller 12 to update the content of the DRAM chipset 11 directly according to the instruction set being executed by the central processing unit, so the memory is updated automatically according to the running state of the central processing unit and, from the central processing unit's point of view, its storage capacity is nearly unlimited. The central processing unit therefore does not need to interact with the mass storage device and can always remain in a highly efficient running state, which makes the memory suitable for fields with high demands on computing resources, such as cloud computing, and can greatly improve the running efficiency of the system.
In an embodiment of the present invention, as shown in FIG. 2, the DRAM chipset 11 includes two logical storage areas 111 serving as a main mapping area and a standby mapping area. Each of the two logical storage areas 111 is a section of storage space in the DRAM chipset 11 and stores an instruction set to be processed by the central processing unit 20, and the central processing unit 20 also writes the execution results of the instruction set back into the logical storage areas 111. The logical storage area 111 in which the instruction set currently being read by the central processing unit is located is the main mapping area, the other logical storage area 111 is the standby mapping area, and the two logical storage areas 111 can swap the roles of main mapping area and standby mapping area according to a jump instruction (i.e., a jump code in the instruction codes) executed by the central processing unit 20. The instruction sets stored in the main mapping area and the standby mapping area both come from the mass storage device 30, and each corresponds to a certain section of the instruction sets in the mass storage device 30; that is, the main mapping area and the standby mapping area are equivalent to two "windows" onto the mass storage device 30, and the central processing unit 20 can obtain the instruction sets stored in the mass storage device 30 through these two "windows". The content shown in each "window" is controlled by the memory controller 12 of the DRAM.
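To make the "window" relationship concrete, the following is a minimal data-structure sketch of the two logical storage areas; the C representation, the names and the assumed 4 KB area size are illustrative assumptions rather than part of the disclosed embodiment:

```c
#include <stdbool.h>
#include <stdint.h>

#define AREA_SIZE 4096u                /* assumed size of one logical storage area */

/* One logical storage area 111: a "window" onto one section of the mass storage device 30. */
typedef struct {
    uint64_t storage_base;             /* original address of the mirrored section in mass storage */
    uint8_t  data[AREA_SIZE];          /* instruction codes and data held in the DRAM chipset 11 */
    bool     dirty;                    /* set once the central processing unit 20 writes results here */
} mapping_area_t;

/* The DRAM chipset holds at least two such areas: one main mapping area, the others standby. */
typedef struct {
    mapping_area_t area[2];
    int            main_idx;           /* index of the current main mapping area */
} dram_window_view_t;
```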
Specifically, the central processing unit 20 obtains instruction sets from the main mapping area through the first interface 13 and the memory controller 12 according to the program address specified by the program counter. Under normal conditions, each time one instruction set finishes executing, the program counter automatically takes the current address + 1 as the program address of the next instruction set, so that the central processing unit 20 obtains the next instruction set from the main mapping area according to the updated program address; if the central processing unit 20 executes a jump instruction, the program counter takes the current address + n or - n, according to the jump value n, as the program address of the next instruction set, and the central processing unit 20 obtains the next instruction set from the main mapping area according to the updated program address. When the program address specified by the program counter is located in the spare mapping area, the main mapping area and the spare mapping area are switched.
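The program-counter-driven switching just described can be sketched as follows, assuming each mapping area mirrors a contiguous range of program addresses; the helper names and the address arithmetic are illustrative assumptions, not the embodiment's own logic:

```c
#include <stdint.h>

/* Assumed view of one mapping area as a contiguous program-address range. */
typedef struct {
    uint64_t start;                    /* first program address mirrored by this area */
    uint64_t length;                   /* number of program addresses it covers */
} area_range_t;

/* Index of the area containing the program address, or -1 if neither area holds it
 * (the memory controller 12 must then fetch the instruction set from mass storage). */
static int area_for_address(const area_range_t area[2], uint64_t pc)
{
    for (int i = 0; i < 2; i++)
        if (pc >= area[i].start && pc < area[i].start + area[i].length)
            return i;
    return -1;
}

/* Normal case: the program address advances by one after an instruction set completes;
 * a jump adds or subtracts n.  If the new address lies in the spare mapping area,
 * the main and spare roles are swapped. */
static void advance_program_counter(const area_range_t area[2], int *main_idx,
                                    uint64_t *pc, int64_t jump)
{
    *pc = (jump != 0) ? (uint64_t)((int64_t)*pc + jump) : *pc + 1;
    int idx = area_for_address(area, *pc);
    if (idx >= 0 && idx != *main_idx)
        *main_idx = idx;               /* program address fell into the spare mapping area */
}
```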
Of course, in practical applications, the DRAM chipset 11 may include more logic memory areas 111, and one of the logic memory areas 111 is a main mapping area, and the other logic memory areas 111 are spare mapping areas.
Specifically, the memory controller 12 may update the contents of the DRAM chipset 11 in the following manner: when the number of instruction sets waiting to be read by the central processing unit in the main mapping area is smaller than a preset value, or the time for which the instruction sets waiting to be read in the main mapping area would keep the central processing unit busy is smaller than a preset time, the memory controller 12 obtains the subsequent instruction set of the instruction sets in the DRAM chipset 11 from the mass storage device through the second interface 14 and stores it into the DRAM chipset 11 (at the same time adjusting the pointers according to the instruction sets in the main mapping area and the newly stored instruction sets in the spare mapping area, so that the central processing unit can read the instruction sets in sequence). In this way, the instruction sets in the dynamic random access memory can be updated in time, so that the instruction execution of the central processing unit is not affected.
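The update trigger can be expressed as a simple predicate; the parameter names and the microsecond units are assumptions made for illustration, since the embodiment does not fix how the remaining execution time is estimated:

```c
#include <stdbool.h>
#include <stdint.h>

/* Prefetch the subsequent instruction set when either condition holds:
 *   1. fewer than preset_count instruction sets remain unread in the main mapping area, or
 *   2. the remaining instruction sets would keep the CPU busy for less than preset_time_us. */
static bool prefetch_needed(uint64_t waiting_sets, uint64_t preset_count,
                            double remaining_exec_us, double preset_time_us)
{
    return waiting_sets < preset_count || remaining_exec_us < preset_time_us;
}
```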
Preferably, when the instruction sets waiting to be read by the central processing unit 20 in the DRAM chipset 11 meet a preset condition, for example when the number of instruction sets waiting to be read in the main mapping area is smaller than a preset value, or when the time for which the instruction sets waiting to be read in the main mapping area would keep the central processing unit busy is smaller than a preset time, the memory controller 12 may store the subsequent instruction set of the instruction sets in the main mapping area, obtained from the mass storage device 30 through the second interface 14, into the spare mapping area. In this way, by controlling the preset condition, the efficient operation of the central processing unit 20 is not affected even when the capacity of each logical storage area 111 is small, which saves the resources of the DRAM chipset 11.
Specifically, when the instruction set waiting to be read by the central processing unit in the DRAM chipset 11 does not include a jump instruction, or includes a jump instruction whose target instruction set is still in the DRAM chipset 11, the subsequent instruction set takes the instruction following the last instruction of the main mapping area of the DRAM chipset 11 as its starting point; when the instruction set waiting to be read by the central processing unit in the DRAM chipset 11 includes a jump instruction whose target instruction set is not in the DRAM chipset 11, the subsequent instruction set takes the instruction pointed to by the jump instruction as its starting point.
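The choice of starting point for the subsequent instruction set can be sketched as below; representing the pending jump with a flag and a target address is an assumption made only for illustration:

```c
#include <stdbool.h>
#include <stdint.h>

/* Starting address of the next block to fetch from mass storage:
 *  - no pending jump, or a jump whose target is already resident in the DRAM chipset:
 *    continue from the instruction after the last one in the main mapping area;
 *  - a pending jump whose target is not resident: start at the jump target. */
static uint64_t next_fetch_start(uint64_t main_area_last_addr, bool has_jump,
                                 uint64_t jump_target, bool target_resident)
{
    if (has_jump && !target_resident)
        return jump_target;
    return main_area_last_addr + 1;
}
```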
For ease of management, the two logical storage areas 111 may be equal in size (i.e., equal in storage space), and the subsequent instruction sets fetched by the memory controller 12 are equal in size to the logical storage areas. In this way, the access efficiency of the memory controller 12 can be improved.
Since the central processing unit 20 writes execution results into the logical storage areas 111 while executing instruction sets, before storing the subsequent instruction set of the instruction sets in the main mapping area into the spare mapping area, the memory controller 12 needs to write the contents of the spare mapping area (i.e., the results updated by the central processing unit 20) back to their original address in the mass storage device 30 if those contents have been updated. That is, before storing the subsequent instruction set into the spare mapping area, the memory controller 12 determines whether the contents of the spare mapping area have been updated; if not, the subsequent instruction set is stored directly into the spare mapping area; otherwise, the contents of the spare mapping area (i.e., the updated contents) are first written back to their original address in the mass storage device 30, and the subsequent instruction set is then stored into the spare mapping area.
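The write-back-then-refill sequence reads naturally as a short routine; the two storage transfer helpers below are hypothetical stand-ins for whatever transactions the memory controller would actually issue over the second interface:

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define AREA_SIZE 4096u                /* assumed size of one logical storage area */

typedef struct {
    uint64_t storage_base;             /* original address of this window in mass storage */
    uint8_t  data[AREA_SIZE];
    bool     dirty;                    /* CPU has written execution results into this area */
} mapping_area_t;

/* Hypothetical transfers over the second interface 14 (PCIE or on-board flash). */
static void storage_write(uint64_t addr, const uint8_t *buf, uint32_t len)
{
    (void)addr; (void)buf; (void)len;  /* stub: a real controller would issue a write transaction */
}

static void storage_read(uint64_t addr, uint8_t *buf, uint32_t len)
{
    (void)addr;                        /* stub: a real controller would issue a read transaction */
    memset(buf, 0, len);
}

/* Refill the spare mapping area with the subsequent instruction set beginning at new_base;
 * if the CPU updated the area, write it back to its original address first. */
static void refill_spare(mapping_area_t *spare, uint64_t new_base)
{
    if (spare->dirty) {
        storage_write(spare->storage_base, spare->data, AREA_SIZE);
        spare->dirty = false;
    }
    storage_read(new_base, spare->data, AREA_SIZE);
    spare->storage_base = new_base;
}
```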
In one embodiment of the present invention, the mass storage device may be independent of the dynamic random access memory and connected to the second interface 14 through a PCIE bus (when the second interface 14 is a PCIE interface). Alternatively, the mass storage device may be integrated into the dynamic random access memory; for example, the mass storage device may be formed of a mass flash memory chip integrated onto the circuit substrate 10 and connected to the memory controller 12 through the second interface 14, where the second interface 14 may employ a PCIE interface or another high-speed interface so as to improve data throughput efficiency.
As shown in FIG. 3, the embodiment of the present invention further provides a memory management method. The memory may be a dynamic random access memory that includes a DRAM chipset 11 and is connected to the central processing unit through a first interface and to the mass storage device through a second interface. The method of the present embodiment may be performed by the memory controller in the memory, and includes:
step S31: in response to a request from the central processing unit, an instruction set stored in the DRAM chipset is sent to the central processing unit for execution and execution data of the central processing unit is written to the DRAM chipset.
The DRAM chipset may include two logical storage areas that are a main mapping area and a standby mapping area, where the logical storage area where the instruction set currently sent to the central processing unit is located is the main mapping area, the other logical storage area is the standby mapping area, and the two logical storage areas switch the main mapping area and the standby mapping area according to a jump instruction executed by the central processing unit. Of course, in practical applications, the DRAM chipset 11 may include more logic memory areas 111, and one of the logic memory areas 111 is a main mapping area, and the other logic memory areas 111 are spare mapping areas.
The instruction sets stored in the main mapping area and the standby mapping area are respectively from the mass storage device, and the stored instruction sets respectively correspond to one section of instruction set in the mass storage device, namely, the main mapping area and the standby mapping area are equivalent to two 'windows' of the mass storage device, and the central processing unit can acquire the instruction sets stored in the mass storage device through the two 'windows'. The content displayed in the window is controlled by a memory controller of the dynamic random access memory.
Step S32: and when the instruction set waiting for the central processing unit to read in the DRAM chipset meets the preset condition, acquiring a subsequent instruction set of the instruction set in the DRAM chipset from a mass storage device through a second interface, and storing the subsequent instruction set in the DRAM chipset.
The preset conditions may be: the number of the instruction sets waiting to be read in the main mapping area is smaller than a preset value, or the time for executing the instruction sets waiting to be read in the main mapping area in the central processing unit is smaller than a preset time.
In the above step S32, a subsequent instruction set of the instruction set in the main mapping area may be obtained from the mass storage device through the second interface, and stored into the spare mapping area. And writing the contents of the spare mapping area back to the original address of the mass storage device if the contents of the spare mapping area have been updated before storing the subsequent instruction set of the instruction set in the main mapping area to the spare mapping area.
The memory management method in this embodiment is based on the same concept as the dynamic random access memory in the embodiment corresponding to FIG. 1; the specific implementation process is detailed in the dynamic random access memory embodiment, and the technical features of that embodiment apply correspondingly to this method embodiment, so they are not repeated here.
The invention also provides a computer system comprising a central processing unit and a dynamic random access memory, wherein the dynamic random access memory comprises a circuit substrate, and a DRAM chipset, a memory controller, a first interface for connecting the central processing unit and a second interface for connecting a mass storage device integrated on the circuit substrate; the memory controller comprises a storage unit, a processing unit and a computer program stored in the storage unit and executable on the processing unit, and the processing unit implements the steps of the memory management method shown in FIG. 3 when executing the computer program.
The computer system in this embodiment is based on the same concept as the dynamic random access memory in the embodiments corresponding to FIGS. 1-2; the specific implementation process is detailed in the corresponding embodiments, and their technical features apply correspondingly to this embodiment, so they are not described here again.
The embodiment of the invention also provides a computer readable storage medium, wherein the storage medium stores a computer program, and when the computer program is executed by a processor, the steps of the above memory management method are implemented. The computer readable storage medium in this embodiment is based on the same concept as the dynamic random access memory in the embodiments corresponding to FIGS. 1-2; the specific implementation process is detailed in the corresponding embodiments, and their technical features apply correspondingly to this embodiment, so they are not repeated here.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated; in practical applications, the above functions may be allocated to different functional units and modules as needed. The functional units and modules in the embodiments may be integrated into one processor, or each unit may exist alone physically, or two or more units may be integrated into one unit, and the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which is not described here again.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed dynamic random access memory, memory management method, and computer system may be implemented in other manners. For example, the dynamic random access memory embodiments described above are merely illustrative.
In addition, each functional unit in the embodiments of the present application may be integrated in one processor, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated modules/units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer readable storage medium. Based on this understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program; the computer program may be stored in a computer readable storage medium, and when executed by a processor, it may implement the steps of each method embodiment described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, computer readable media do not include electrical carrier signals and telecommunications signals.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them; although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application and are intended to be included within the protection scope of the present application.

Claims (10)

1. A dynamic random access memory, comprising a circuit substrate, and a DRAM chipset, a memory controller, a first interface for connecting a central processing unit and a second interface for connecting a mass storage device integrated on the circuit substrate; the memory controller is respectively connected with the DRAM chipset and the first interface, and, in response to a read-write request of the central processing unit connected to the first interface, acquires an instruction set from the DRAM chipset and sends it to the central processing unit through the first interface, and writes execution result data of the central processing unit into the DRAM chipset;
the memory controller is connected with the second interface, and when the instruction set waiting to be read by the central processing unit in the DRAM chipset meets a preset condition, the subsequent instruction set of the instruction set in the DRAM chipset is obtained from the mass storage device through the second interface and stored into the DRAM chipset, so that the dynamic random access memory directly updates the content in the DRAM chipset through the memory controller according to the instruction set being executed by the central processing unit; the instruction set waiting to be read by the central processing unit in the DRAM chipset is an instruction set that has not been read by the central processing unit, and the instruction set comprises instruction codes and data;
the DRAM chipset comprises at least two logical storage areas serving as a main mapping area and standby mapping areas, wherein the logical storage area in which the instruction set currently being read by the central processing unit is located is the main mapping area and the other logical storage areas are standby mapping areas; the central processing unit acquires instruction sets from the main mapping area according to the program address specified by the program counter, and when the program address specified by the program counter is located in a standby mapping area, the main mapping area and the standby mapping area are switched.
2. The dynamic random access memory of claim 1, wherein the predetermined condition is: the number of the instruction sets waiting to be read in the main mapping area is smaller than a preset value, or the time for executing the instruction sets waiting to be read in the main mapping area in the central processing unit is smaller than a preset time.
3. The dynamic random access memory according to claim 2, wherein when the instruction set waiting to be read by the central processing unit in the DRAM chipset meets the preset condition, the memory controller stores the subsequent instruction set of the instruction set in the main mapping area, obtained from the mass storage device through the second interface, into the spare mapping area;
the at least two logic memory areas switch between the main mapping area and the standby mapping area according to the program address specified by the program counter in the central processing unit.
4. The dynamic random access memory of claim 3, wherein the at least two logical storage areas are equal in size, and a subsequent instruction set acquired by the memory controller is equal in size to the logical storage areas;
before storing the subsequent instruction set of the instruction set in the main mapping area to the spare mapping area, if the content of the spare mapping area is updated, the memory controller writes the content in the spare mapping area back to the original address of the mass storage device.
5. The dynamic random access memory of claim 1, wherein the first interface is a DRAM interface, the second interface is a PCIE interface, and the mass storage device is connected to the second interface through a PCIE bus.
6. The dynamic random access memory of claim 1, wherein the mass storage device is comprised of a mass flash memory chip integrated onto the circuit substrate, and the mass flash memory chip is connected to the memory controller through the second interface.
7. A memory management method, the memory comprising a DRAM chipset, and the memory being coupled to a central processing unit via a first interface and to a mass storage device via a second interface, the method comprising:
in response to a request of the central processing unit, sending an instruction set stored in the DRAM chipset to the central processing unit through the first interface for execution, and writing execution result data of the central processing unit to the DRAM chipset;
when the instruction set waiting to be read by the central processing unit in the DRAM chipset meets a preset condition, acquiring a subsequent instruction set of the instruction set in the DRAM chipset from a mass storage device through the second interface, and storing the subsequent instruction set into the DRAM chipset, wherein the DRAM chipset comprises at least two logical storage areas serving as a main mapping area and standby mapping areas, the logical storage area in which the instruction set currently sent to the central processing unit is located is the main mapping area, the other logical storage areas are standby mapping areas, and the at least two logical storage areas switch the main mapping area and the standby mapping area according to a program address specified by a program counter in the central processing unit.
8. The memory management method according to claim 7, wherein the predetermined condition is: the number of the instruction sets waiting to be read in the main mapping area is smaller than a preset value, or the time for executing the instruction sets waiting to be read in the main mapping area in the central processing unit is smaller than a preset time;
the retrieving a subsequent instruction set of the instruction set in the DRAM chipset from a mass storage device via the second interface and storing the subsequent instruction set to the DRAM chipset comprises:
acquiring a subsequent instruction set of the instruction set in the main mapping area from a mass storage device through the second interface, and storing the subsequent instruction set in a spare mapping area;
and before storing the subsequent instruction sets of the instruction sets in the main mapping area into a spare mapping area, if the content of the spare mapping area is updated, writing the content in the spare mapping area back to the original address of the mass storage device.
9. A computer system comprising a central processing unit and a dynamic random access memory, the dynamic random access memory comprising a circuit substrate, and a DRAM chipset, a memory controller, a first interface for connecting the central processing unit and a second interface for connecting a mass storage device integrated on the circuit substrate, characterized in that the memory controller comprises a storage unit, a processing unit and a computer program stored in the storage unit and executable on the processing unit, and the processing unit implements the steps of the memory management method according to any of claims 7 to 8 when executing the computer program.
10. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the steps of the memory management method according to any one of claims 7 to 8.
CN201911121029.4A 2019-11-15 2019-11-15 Dynamic random access memory, memory management method, system and storage medium Active CN110941395B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911121029.4A CN110941395B (en) 2019-11-15 2019-11-15 Dynamic random access memory, memory management method, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911121029.4A CN110941395B (en) 2019-11-15 2019-11-15 Dynamic random access memory, memory management method, system and storage medium

Publications (2)

Publication Number Publication Date
CN110941395A CN110941395A (en) 2020-03-31
CN110941395B (en) 2023-06-16

Family

ID=69906665

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911121029.4A Active CN110941395B (en) 2019-11-15 2019-11-15 Dynamic random access memory, memory management method, system and storage medium

Country Status (1)

Country Link
CN (1) CN110941395B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112231269A (en) * 2020-09-29 2021-01-15 深圳宏芯宇电子股份有限公司 Data processing method of multiprocessor system and multiprocessor system
CN112559039B (en) * 2020-12-03 2022-11-25 类人思维(山东)智慧科技有限公司 Instruction set generation method and system for computer programming
CN112860433A (en) * 2021-01-27 2021-05-28 深圳宏芯宇电子股份有限公司 Cache server, content distribution network system, and data management method
CN113138803B (en) * 2021-05-12 2023-03-24 类人思维(山东)智慧科技有限公司 Instruction set storage system for computer programming
CN113572687B (en) * 2021-07-22 2022-11-15 无锡江南计算技术研究所 High-order router self-adaptive parallel starting method based on event-driven mechanism

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0788110A2 (en) * 1996-02-02 1997-08-06 Fujitsu Limited Semiconductor memory device with a pipe-line operation
CN1716453A (en) * 2004-01-30 2006-01-04 三星电子株式会社 The multi-port memory device that between main frame and non-volatile memory device, cushions
KR100660874B1 (en) * 2005-07-25 2006-12-26 삼성전자주식회사 Refresh control method of dram having dual ports
CN1885277A (en) * 2005-06-24 2006-12-27 秦蒙达股份公司 DRAM chip device and multi-chip package comprising such a device
KR100685324B1 (en) * 2007-01-12 2007-02-22 엠진 (주) A system for accessing nand flash memory at random using dual-port dram and a controller thereof
CN101165805A (en) * 2006-10-20 2008-04-23 凌华科技股份有限公司 Multiple port memory access control module
CN109872762A (en) * 2014-01-24 2019-06-11 高通股份有限公司 Memory training of DRAM system and associated method, system and device are provided using port to port loopback

Also Published As

Publication number Publication date
CN110941395A (en) 2020-03-31

Similar Documents

Publication Publication Date Title
CN110941395B (en) Dynamic random access memory, memory management method, system and storage medium
US11042297B2 (en) Techniques to configure a solid state drive to operate in a storage mode or a memory mode
EP2248023B1 (en) Extended utilization area for a memory device
US9927999B1 (en) Trim management in solid state drives
US20200150903A1 (en) Method for executing hard disk operation command, hard disk, and storage medium
CN110910921A (en) Command read-write method and device and computer storage medium
WO2015199909A1 (en) Accelerating boot time zeroing of memory based on non-volatile memory (nvm) technology
KR20200135718A (en) Method, apparatus, device and storage medium for managing access request
US11055220B2 (en) Hybrid memory systems with cache management
US11526441B2 (en) Hybrid memory systems with cache management
KR102653373B1 (en) Controller and operation method thereof
US20210240398A1 (en) Time to Live for Load Commands
EP3496356A1 (en) Atomic cross-media writes on storage devices
CN111177027B (en) Dynamic random access memory, memory management method, system and storage medium
US20190042443A1 (en) Data acquisition with zero copy persistent buffering
CN114647446A (en) Storage-level storage device, computer module and server system
CN104424124A (en) Memory device, electronic equipment and method for controlling memory device
CN217588059U (en) Processor system
US11775219B2 (en) Access control structure for shared memory
CN113900711A (en) SCM (Single chip multiple Access) -based data processing method and device and computer-readable storage medium
CN113609034A (en) Processor system
CN113284532A (en) Processor system
CN112231269A (en) Data processing method of multiprocessor system and multiprocessor system
CN113094328A (en) Multi-channel parallel computing system for real-time imaging of synthetic aperture radar
CN115016851A (en) BIOS loading method, bridge chip, BMC, device and mainboard thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant