CN113284532A - Processor system - Google Patents


Info

Publication number
CN113284532A
CN113284532A
Authority
CN
China
Prior art keywords
unit
instruction set
processor
random access
memory unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110641303.1A
Other languages
Chinese (zh)
Inventor
赖振楠
Current Assignee
Hosin Global Electronics Co Ltd
Original Assignee
Hosin Global Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Hosin Global Electronics Co Ltd filed Critical Hosin Global Electronics Co Ltd
Priority to CN202110641303.1A
Publication of CN113284532A

Classifications

    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C11/00Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C11/21Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements
    • G11C11/34Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices
    • G11C11/40Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors
    • G11C11/401Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors forming cells needing refreshing or charge regeneration, i.e. dynamic cells

Abstract

The invention provides a processor system comprising a processor unit, a memory control unit, a random access memory unit, and a flash memory unit. The processor unit is electrically connected to the memory control unit, and the memory control unit is electrically connected to the random access memory unit and the flash memory unit respectively. The processor unit reads a data instruction set from the random access memory unit through the memory control unit, and the random access memory unit maps the flash memory unit through the memory control unit. The invention reduces the size of the computer system while keeping the processor in a high-efficiency running state at all times, greatly improving the running efficiency of the system.

Description

Processor system
Technical Field
The present invention relates to the field of computers, and more particularly to a processor system.
Background
At present, DRAM (Dynamic Random Access Memory) technology is well developed; the main types in use are SDRAM (Synchronous DRAM), Double Data Rate (DDR) SDRAM, and its second- through fourth-generation variants, DDR2, DDR3, and DDR4 SDRAM. A memory system of this type consists mainly of a memory controller and DRAM chips (i.e., memory granules): the CPU (central processing unit) sends control commands, including clock signals, command control signals, and address signals, to the DRAM chips via the memory controller, and these commands control the reading and writing of data signals to and from the DRAM chips.
When a computer system executes a program, the program and data to be executed by the CPU must first be placed in DRAM. During execution, the CPU fetches an instruction from DRAM at the address held in the current program pointer register, executes it, then fetches and executes the next instruction, and so on until the program's final instruction completes. The working process is thus a continuous cycle of fetching and executing instructions, with the computed result finally written to the memory address specified by the instruction.
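The fetch-execute cycle described above can be sketched as a minimal simulation. This is purely illustrative and not part of the patent disclosure; the instruction encoding (opcode/operand tuples) and opcode names are our own assumptions.

```python
def run(dram, start=0):
    """Hypothetical fetch-execute loop over a DRAM model (a plain list).

    Instructions are (opcode, operand) tuples. The result is finally
    written to the memory address named by STORE, mirroring "the
    calculated result is put into the memory address appointed by the
    instruction".
    """
    pc, acc = start, 0           # program pointer register and accumulator
    while True:
        op, arg = dram[pc]       # fetch the instruction at the program pointer
        pc += 1                  # advance to the next instruction
        if op == "ADD":
            acc += arg
        elif op == "STORE":
            dram[arg] = acc      # write the result to the addressed cell
        elif op == "HALT":
            return acc
```

For example, a three-instruction program that adds two constants and stores the sum runs until its HALT instruction, exactly the "fetch, execute, fetch next" loop the paragraph describes.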
However, because DRAM is expensive and its storage capacity is limited, most programs are stored in mass storage devices with relatively low cost, such as hard disks and solid-state drives. At run time, the CPU must move data from the mass storage device into DRAM and write data from DRAM back to the mass storage device. Moreover, the interaction speed between the mass storage device and the central processing unit is much lower than that between the central processing unit and DRAM, which greatly reduces the overall operating efficiency of the computer system.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a processor system, aiming at the problem of low overall operating efficiency of the computer system.
The technical solution adopted to solve the above problem is a processor system that includes a processor unit, a memory control unit, a random access memory unit, and a flash memory unit. The processor unit is electrically connected to the memory control unit, and the memory control unit is electrically connected to the random access memory unit and the flash memory unit respectively; the processor unit reads a data instruction set from the random access memory unit through the memory control unit, and the random access memory unit maps the flash memory unit through the memory control unit.
As a further improvement of the present invention, the memory control unit is configured to read and feed back a corresponding instruction from the random access memory unit according to a request of the processor unit, and when a data instruction set in the random access memory unit meets a preset condition, obtain a subsequent instruction set of the data instruction set from the flash memory unit and write the subsequent instruction set into the random access memory unit.
As a further improvement of the present invention, the processor unit includes a first processor unit and a second processor unit, the data instruction set includes a first instruction set and a second instruction set, the preset conditions include a first preset condition and a second preset condition, the first instruction set is an instruction set waiting to be processed by the first processor unit, and the second instruction set is an instruction set waiting to be processed by the second processor unit;
the memory control unit is used for reading and feeding back corresponding instructions from the random access memory unit according to the requests of the first processor unit and the second processor unit, acquiring a subsequent instruction set of the first instruction set from the flash memory unit and writing the subsequent instruction set into the random access memory unit when a first instruction set in the random access memory unit meets a first preset condition, and acquiring a subsequent instruction set of the second instruction set from the flash memory unit and writing the subsequent instruction set into the random access memory unit when a second instruction set in the random access memory unit meets a second preset condition.
As a further improvement of the present invention, the first processor unit is further adapted to read a subsequent instruction set of the second instruction set from the random access memory unit.
As a further improvement of the present invention, the first processor unit is further configured to cache the processed first instruction set, or its processing result, in the random access memory unit, and the second processor unit is configured to read the processed first instruction set from the random access memory unit.
As a further improvement of the present invention, the random access memory unit includes a first mapping area for caching the first instruction set and its subsequent instruction sets, and a second mapping area for caching the second instruction set and its subsequent instruction sets.
As a further improvement of the present invention, the random access memory unit includes a first mapping area, a second mapping area, a third mapping area and a fourth mapping area, the first mapping area is used for caching the first instruction set, and the third mapping area is used for caching a subsequent instruction set of the first instruction set; the second mapping region is for caching the second instruction set, and the fourth mapping region is for caching a subsequent instruction set of the second instruction set.
As a further improvement of the present invention, the first mapping area and the third mapping area are used for switching each other when caching the data instruction set; the second mapping area and the fourth mapping area are used for switching with each other when the data instruction set is cached.
As a further improvement of the present invention, the preset condition is that the number of data instruction sets in the random access memory unit waiting to be read by the plurality of processor units is less than a preset value, or that the expected time for the plurality of processors to execute the data instruction sets waiting to be read in the random access memory unit is less than a preset time.
As a further development of the invention, the memory control unit further comprises an arbiter unit for determining an execution order of requests of the processor units and/or for determining a processing order of the same data instruction set between different processor units.
As a further improvement of the invention, the processor unit, the random access memory unit, the memory control unit and the flash memory unit are integrated on the same processor chip.
In the processor module of the invention, the central processing unit, the special-purpose processor, the memory controller, and the flash memory chip are packaged together, and the memory controller updates the content of the DRAM chip directly according to the instruction set being executed by the central processing unit and the special-purpose processor. The processor therefore never needs to interact with the flash memory chip, can remain in a high-efficiency running state at all times, and the running efficiency of the system is greatly improved.
Drawings
FIG. 1 is a block diagram of a processor module provided by an embodiment of the invention;
FIG. 2 is a block diagram of a processor module provided by another embodiment of the invention;
FIG. 3 is a block diagram of a processor module provided by another embodiment of the invention;
FIG. 4 is a block diagram of a processor module provided by another embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in FIG. 1, the processor system provided by the present invention can be applied to an electronic device, such as a personal computer, server, mobile phone, or tablet, to implement instruction storage and processing. The processor system of this embodiment includes a processor unit 12, a memory control unit 13, a random access memory unit 15, and a flash memory unit 16. The processor unit 12 is electrically connected to the memory control unit 13, and the memory control unit 13 is electrically connected to the random access memory unit 15 and the flash memory unit 16 respectively; the processor unit 12 can read a data instruction set from the random access memory unit 15 through the memory control unit 13, and the random access memory unit 15 maps the flash memory unit 16 through the memory control unit 13. During operation of the processor system, the random access memory unit 15 stores the data instruction set currently being executed, or about to be executed, by the processor unit 12, while the flash memory unit 16 stores the data instruction sets that need to be preserved long-term.
In an embodiment of the present invention, the processor unit 12, the random access memory unit 15, the memory control unit 13, and the flash memory unit 16 may be integrated into the same processor chip. In this SoC arrangement, the processor chip provides a unified external interface through which it is mounted on a circuit board (e.g., the motherboard of a computer system), allowing external devices to communicate with it and exchange input and output signals.
In practical applications, the processor unit 12, the random access memory unit 15, the memory control unit 13, and the flash memory unit 16 may instead be split across several chips; for example, the processor unit 12 is integrated into a first chip, while the random access memory unit 15, the memory control unit 13, and the flash memory unit 16 are integrated into a second chip, with the two chips electrically connected. The first and second chips can be packaged into a whole using a system-in-package process, with a unified external interface, so that they can be mounted on a circuit board or similar carrier. Alternatively, the above units can be integrated into separate processor, DRAM, memory control, and flash memory chips, and the processor system as a whole packaged together by a system-in-package process with a unified external interface for mounting on a circuit board.
In an embodiment of the present invention, the processor unit 12 may be connected to the memory control unit 13 through a communication line to exchange data; likewise, the random access memory unit 15 and the flash memory unit 16 may each be electrically connected to the memory control unit 13 through a memory bus to exchange data.
In an embodiment of the present invention, the flash memory unit 16 may specifically be a NAND memory chip or similar device with relatively large storage capacity, relatively low cost, and relatively slow data access, capable of retaining data when powered off. The random access memory unit 15 may specifically be a DDR, DDR2, DDR3, DDR4, DDR5, phase-change, or similar memory chip with relatively small capacity, relatively high cost, and relatively fast data access; that is, the storage capacity of the random access memory unit 15 is smaller than that of the flash memory unit 16. For cost reasons, the random access memory unit 15 is preferably a volatile memory chip, i.e., one that does not retain stored data when powered off.
The memory control unit 13 maps the stored data of the large-capacity, low-cost flash memory unit 16 into the small-capacity, high-cost random access memory unit 15. Because this mapping operation occupies neither clock cycles of the processor unit 12 nor the memory bus between the processor unit 12 and the random access memory unit 15, high-speed data processing is achieved even though the random access memory unit 15 is small; the processor unit 12 can remain in a high-efficiency running state at all times, and the running efficiency of electronic equipment such as computer systems is greatly improved.
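The mapping scheme above can be modeled as a small RAM "window" onto a large flash image: the processor addresses only the window, while the controller refills it from flash in the background. This is a simplified sketch under our own assumptions (class and method names are illustrative, not from the patent).

```python
class MappedWindow:
    """Toy model of a RAM window mapped onto a larger flash store."""

    def __init__(self, flash, window_size):
        self.flash = flash                    # large, slow backing store
        self.base = 0                         # flash offset of the window
        self.ram = list(flash[:window_size])  # small, fast mapped copy

    def read(self, addr):
        # Processor-side read: always served from RAM, never from flash,
        # so no processor cycles or memory-bus time go to the flash side.
        return self.ram[addr - self.base]

    def remap(self, new_base):
        # Controller-side refill: copy the next region of flash into RAM.
        size = len(self.ram)
        self.base = new_base
        self.ram = list(self.flash[new_base:new_base + size])
```

In this model, `remap` stands in for the background mapping work of the memory control unit 13; the processor only ever calls `read`.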
In an embodiment of the present invention, the memory control unit 13 may read and feed back the corresponding data instruction from the random access memory unit 15 according to a request of the processor unit 12. That is, when the memory control unit 13 receives a read/write request from the processor unit 12, it obtains the data instruction corresponding to the request from the random access memory unit 15, sends it to the processor unit 12 over the memory bus, and writes the execution result of the processor unit 12 back into the random access memory unit 15. This process is the same as the data interaction between an existing processor and internal memory (e.g., DRAM), and is not described further here.
In addition, the memory control unit 13 may implement the mapping between the random access memory unit 15 and the flash memory unit 16 as follows: when the data instruction set in the random access memory unit 15 meets a preset condition, the memory control unit 13 obtains the subsequent instruction set of that data instruction set from the flash memory unit 16 and writes it into the random access memory unit 15. Specifically, the preset condition may be that the number of instruction sets in the random access memory unit 15 waiting to be read by the processor unit 12 is smaller than a preset value, or that the expected time for the processor unit 12 to execute the instruction sets waiting to be read is smaller than a preset time; in either case, the memory control unit 13 obtains the subsequent instruction set from the flash memory unit 16 and stores it in the random access memory unit 15. In this way the data instructions in the random access memory unit 15 are updated in time, so that the instruction execution of the processor unit 12 is not affected.
Because the memory control unit 13 can directly predict the subsequent instruction set to be executed from the instruction set currently being executed by the processor unit 12, and update the contents of the random access memory unit 15 according to that prediction, the processor unit 12 never needs to interact with the flash memory unit 16 and never occupies the memory bus for this purpose. In other words, the operation of the memory control unit 13 is transparent to the processor unit 12: the data transfers of the random access memory unit 15 need not be performed or scheduled by the processor unit 12, which simply treats the combination of the random access memory unit 15 and the flash memory unit 16 as one very large DRAM whose data is automatically persisted. The processor unit 12 can therefore remain in an efficient operating state at all times, which suits fields such as cloud computing that place high demands on computing resources, and the operating efficiency of the system is greatly improved.
As shown in FIG. 2, in one embodiment of the invention the processor unit 12 comprises a first processor unit 121 and a second processor unit 122. These may be processors of different types; for example, the first processor unit 121 is a main processor, such as an embedded processor with multiple cores, and the second processor unit 122 is an auxiliary processor, such as a graphics processor or a neural network processor. Accordingly, the data instruction set in the random access memory unit 15 includes a first instruction set, waiting to be processed by the first processor unit 121, and a second instruction set, waiting to be processed by the second processor unit 122. The preset conditions likewise include a first preset condition and a second preset condition.
The memory control unit 13 reads and feeds back the corresponding data instructions from the random access memory unit 15 according to the requests of the first processor unit 121 and the second processor unit 122. When the first instruction set in the random access memory unit 15 meets the first preset condition, it obtains the subsequent instruction set of the first instruction set from the flash memory unit 16 and writes it into the random access memory unit 15; when the second instruction set in the random access memory unit 15 meets the second preset condition, it obtains the subsequent instruction set of the second instruction set from the flash memory unit 16 and writes it into the random access memory unit 15.
The first preset condition may specifically be that the number of first instruction sets in the random access memory unit 15 waiting to be read by the first processor unit 121 is smaller than a preset value, or that the expected time for the first processor unit 121 to execute the first instruction sets waiting to be read is smaller than a preset time. The second preset condition may specifically be the same test applied to the second instruction sets and the second processor unit 122. The preset value and preset time may be tuned according to the storage capacity of the random access memory unit 15, the clock frequencies of the first processor unit 121 and the second processor unit 122, and other factors.
In an embodiment of the invention, the first processor unit 121 may further be configured to read the subsequent instruction set of the second instruction set from the random access memory unit 15. That is, every instruction set cached in the random access memory unit 15 can be read by any processor unit, whether the first processor unit 121 or the second processor unit 122. This speeds up the exchange of data instruction sets between the processor units, reduces how often the same data instruction set must be carried from the flash memory unit 16 to the random access memory unit 15, and thus accelerates the processing of the same or similar data instruction sets by the processor units.
In an embodiment of the present invention, the first processor unit 121 is further configured to cache the processed first instruction set, or its processing result, in the random access memory unit 15, and the second processor unit 122 is configured to read that processed first instruction set from the random access memory unit 15. After the first processor unit 121 reads and processes the first instruction set, it caches the processed instruction set or its result in the random access memory unit 15, where the second processor unit 122 can read it and process it a second time. In this embodiment, then, a data instruction set processed by one processor unit, or its processing result, can be cached in the random access memory unit and read and processed further by other processors. The memory control unit continuously carries data instruction sets from the flash memory unit to the random access memory unit, and each set, once processed by the different processing units in its processing order, is stored back in the flash memory unit. This greatly improves the processing speed and efficiency of the data instruction sets, reduces the number of times they must be transported, and improves the processing efficiency of the processors.
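The two-stage flow above, where the first processor caches its output in RAM for the second processor to re-process, amounts to a producer-consumer pipeline through shared memory. The sketch below is our own illustration; the stage functions are placeholders, not operations defined by the patent.

```python
def pipeline(instruction_sets, stage1, stage2):
    """Toy two-processor pipeline through a shared RAM model.

    stage1 -- processing done by the first processor unit
    stage2 -- secondary processing done by the second processor unit
    """
    ram = []                          # stands in for the random access memory unit
    for s in instruction_sets:
        ram.append(stage1(s))         # first processor caches its result in RAM
    return [stage2(r) for r in ram]   # second processor reads and re-processes
```

A real implementation would interleave the two stages rather than run them back to back, but the data path (flash → RAM → processor 1 → RAM → processor 2) is the one the embodiment describes.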
Referring to FIG. 4, in an embodiment of the present invention the random access memory unit 15 includes a first mapping area 151 and a second mapping area 152, each a segment of memory space within the random access memory unit 15. The first mapping area 151 caches the first instruction set and its subsequent instruction sets, and the second mapping area 152 caches the second instruction set and its subsequent instruction sets; that is, the first mapping area 151 is used by the first processor unit 121 and the second mapping area 152 by the second processor unit 122.
The first and second instruction sets in the first mapping area 151 and the second mapping area 152 each correspond to an instruction program in the flash memory unit 16. That is, the two mapping areas act as two "windows" onto the flash memory unit 16, through which the first processor unit 121 and the second processor unit 122 respectively obtain the instruction programs stored there; what appears in each "window" is controlled by the memory control unit 13.
Specifically, when the first instruction set in the first mapping area 151 meets a first preset condition, the memory control unit 13 obtains a subsequent instruction set of the first instruction set from the flash memory unit 16 and writes the subsequent instruction set into the first mapping area 151, and when the second instruction set in the second mapping area 152 meets a second preset condition, the memory control unit 13 obtains a subsequent instruction set of the second instruction set from the flash memory unit 16 and writes the subsequent instruction set into the second mapping area 152.
In another embodiment of the present invention, the random access memory unit 15 includes a first mapping area, a second mapping area, a third mapping area, and a fourth mapping area. The first mapping area caches the first instruction set, and the third mapping area caches its subsequent instruction set; the second mapping area caches the second instruction set, and the fourth mapping area caches its subsequent instruction set. In this way the process of establishing a mapping is separated from the process by which a processor unit reads instructions, so that building the mapping does not disturb the processor unit's instruction execution, further improving its efficiency.
Specifically, one of the first and third mapping areas serves as the main mapping area and the other as the standby mapping area. The main mapping area stores the data instructions currently being executed, and about to be executed, by the first processor unit 121, while the memory control unit 13 stores the subsequent instruction set of the first instruction set into the standby mapping area; when a switching condition is met, the two areas exchange roles, i.e., the main/standby states of the first and third mapping areas are swapped. For example, the switch may be driven by a jump instruction executed by the first processor unit 121 out of the main mapping area. When the first mapping area is the main mapping area, the first processor unit 121 obtains data instructions from it through the memory control unit 13 according to the program address held in the program counter. Normally, after a data instruction executes, the program counter takes the original address + 1 as the address of the next instruction, and the first processor unit 121 fetches the next instruction from the first mapping area at the updated address. If the executed instruction is a jump with offset n, the program counter instead takes the original address + n or - n as the next address. When the program address specified by the program counter falls within the third mapping area (i.e., the standby mapping area), the main and standby mapping areas complete their switch. Similarly, one of the second and fourth mapping areas serves as the main mapping area and the other as the standby mapping area, switching in the same way when the data instruction set is cached.
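The main/standby switch on a program-counter jump can be sketched as a double buffer. Region sizes and the address layout below are illustrative assumptions, not values from the patent.

```python
class DoubleBuffer:
    """Toy main/standby mapping-area pair for one processor unit."""

    def __init__(self, size):
        self.size = size
        self.main_base = 0        # main area covers [main_base, main_base + size)
        self.standby_base = size  # standby holds the prefetched successor set

    def fetch(self, pc):
        """Return the base of the area serving this program-counter value."""
        if not (self.main_base <= pc < self.main_base + self.size):
            # PC landed in the standby area: swap main and standby roles,
            # mirroring the switch described for the first/third mapping areas.
            self.main_base, self.standby_base = self.standby_base, self.main_base
        return self.main_base
```

While the processor fetches from the main area, the memory control unit would refill the standby area behind the scenes; only the role swap is modeled here.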
As shown in FIGS. 3 and 4, in an embodiment of the present invention the internal bus 14 may further include an arbiter unit 141, which determines the execution order of requests from the processor unit 12. Specifically, when the arbiter unit 141 receives several requests from the processor unit 12 at the same time, it determines their priorities and responds to the higher-priority request first, returning its corresponding data instruction to the processor unit 12 ahead of the others, so that programs can be processed in parallel without disturbing the execution of the main program. When the processor system contains multiple processor units, such as the first processor unit 121 and the second processor unit 122, the arbiter unit 141 likewise prioritizes requests arriving from different processor units and serves the higher-priority request first. In some embodiments, the arbiter unit 141 may also coordinate or determine the order in which different processor units handle the same data instruction set; for example, following the data processing sequence, the first processor unit processes the data and caches the result in the random access memory unit 15, after which the second processor unit reads the processed data instruction set and performs secondary processing, improving data processing efficiency.
In this manner, if there are more processor units, the data can likewise be divided among processor units of different functions according to the type of each data instruction set.
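A minimal arbitration rule consistent with the description above can be sketched as follows. The numeric priority scheme (lower number = higher priority) is our assumption; the patent does not fix one.

```python
def arbitrate(requests):
    """Order simultaneous requests for service by the arbiter unit.

    requests -- list of (priority, processor_id, request) tuples;
                lower priority numbers are served first (our convention).
    """
    # Sorting by priority yields the execution order: the highest-priority
    # request has its data instruction returned to its processor first.
    return sorted(requests)
```

Real bus arbiters also handle fairness and starvation (e.g., round-robin among equal priorities), which this one-line sketch deliberately omits.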
The invention also provides a processor system comprising a processor chip, a DRAM chip, a memory control chip, a flash memory chip and an internal bus. The processor chip is connected with the DRAM chip through the internal bus, and the DRAM chip is mapped to the flash memory chip through the memory control chip. The memory control chip is used for reading and feeding back a corresponding instruction from the DRAM chip according to the request of the processor chip and, when the data instruction set in the DRAM chip meets a preset condition, acquiring a subsequent instruction set of the data instruction set from the flash memory chip and writing it into the DRAM chip. The processor chip, the DRAM chip, the memory control chip, the flash memory chip and the internal bus are packaged into a whole by a system-in-package process and provide a unified external interface, so that the system can be mounted on devices such as a circuit board.
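The prefetch behavior described above — fetching the subsequent instruction set from flash when the instruction sets remaining in RAM meet a preset condition — can be sketched as follows. This is a minimal model under stated assumptions: the threshold-on-remaining-count condition is only one of the preset conditions the description allows, and the class and variable names are illustrative, not from the patent.

```python
class MemoryController:
    """Sketch of the memory control unit's prefetch: when the number of
    instruction sets pending in RAM drops below a threshold (one example
    of the 'preset condition'), fetch subsequent sets from flash."""

    def __init__(self, flash, threshold=2):
        self.flash = list(flash)   # instruction sets still resident in flash
        self.ram = []              # instruction sets mapped into RAM
        self.threshold = threshold
        self._refill()             # pre-populate RAM before first request

    def _refill(self):
        # Copy subsequent instruction sets from flash until RAM is topped up.
        while self.flash and len(self.ram) < self.threshold:
            self.ram.append(self.flash.pop(0))

    def read(self):
        # Serve the processor's request from RAM, then prefetch in the
        # background so the processor rarely waits on slow flash.
        inst = self.ram.pop(0)
        if len(self.ram) < self.threshold:
            self._refill()
        return inst

flash = [f"set{i}" for i in range(5)]
mc = MemoryController(flash, threshold=2)
first = mc.read()  # served from RAM; RAM is refilled from flash behind it
```

The point of the design is latency hiding: the processor only ever touches fast RAM, while the controller streams instruction sets in from flash ahead of demand.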
The processor system in this embodiment and the processor system in the embodiment corresponding to fig. 1 to 4 belong to the same concept, and specific implementation processes thereof are described in detail in the corresponding embodiments, and technical features in the embodiments of fig. 1 to 4 are correspondingly applicable in this embodiment, and are not described herein again.
Those of ordinary skill in the art would appreciate that the elements and algorithm steps of the various embodiments described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present invention. Furthermore, the embodiments of the present invention and the features of the embodiments may be combined with each other without conflict.

Claims (10)

1. A processor system, characterized by comprising a processor unit, a memory control unit, a random access memory unit and a flash memory unit; the processor unit is electrically connected with the memory control unit, and the memory control unit is electrically connected with the random access memory unit and the flash memory unit respectively; the processor unit reads a data instruction set in the random access memory unit through the memory control unit, and the random access memory unit is mapped to the flash memory unit through the memory control unit; the memory control unit is used for reading and feeding back a corresponding instruction from the random access memory unit according to a request of the processor unit, and for acquiring a subsequent instruction set of the data instruction set from the flash memory unit and writing it into the random access memory unit when the data instruction set in the random access memory unit meets a preset condition.
2. The processor system according to claim 1, wherein the processor unit comprises a first processor unit and a second processor unit, the data instruction set comprises a first instruction set and a second instruction set, the preset conditions comprise a first preset condition and a second preset condition, the first instruction set is an instruction set waiting to be processed by the first processor unit, and the second instruction set is an instruction set waiting to be processed by the second processor unit;
the memory control unit is configured to read and feed back a corresponding instruction from the random access memory unit according to a request of the first processor unit or the second processor unit, to acquire a subsequent instruction set of the first instruction set from the flash memory unit and write it into the random access memory unit when the first instruction set in the random access memory unit meets the first preset condition, and to acquire a subsequent instruction set of the second instruction set from the flash memory unit and write it into the random access memory unit when the second instruction set in the random access memory unit meets the second preset condition.
3. The processor system according to claim 2, wherein said first processor unit is further configured to read a subsequent instruction set of said second instruction set from said random access memory unit.
4. The processor system according to claim 2, wherein said first processor unit is further configured to cache the processed first instruction set or a processing result thereof to said random access memory unit, and said second processor unit is configured to read the processed first instruction set from said random access memory unit.
5. The processor system according to claim 3 or 4, wherein the random access memory unit comprises a first mapping area and a second mapping area, the first mapping area is used for caching the first instruction set and the subsequent instruction sets thereof, and the second mapping area is used for caching the second instruction set and the subsequent instruction sets thereof.
6. The processor system according to claim 3 or 4, wherein the random access memory unit comprises a first mapping area, a second mapping area, a third mapping area and a fourth mapping area, the first mapping area is used for caching the first instruction set, and the third mapping area is used for caching a subsequent instruction set of the first instruction set; the second mapping region is for caching the second instruction set and the fourth mapping region is for caching a subsequent instruction set of the second instruction set.
7. The processor system according to claim 6, wherein the first mapping area and the third mapping area are configured to switch with each other when caching the data instruction set, and the second mapping area and the fourth mapping area are configured to switch with each other when caching the data instruction set.
8. The processor system according to any one of claims 1 to 7, wherein the preset condition is that the number of data instruction sets in the random access memory unit waiting to be read by the processor units is less than a preset value, or that the expected execution time of the data instruction sets in the random access memory unit waiting to be read is less than a preset time.
9. The processor system according to any of claims 1 to 7, wherein the memory control unit further comprises an arbiter unit for determining an execution order of requests of the processor units and/or for determining a processing order of a same set of data instructions between different processor units.
10. The processor system according to any one of claims 1 to 7, wherein the processor unit, the random access memory unit, the memory control unit and the flash memory unit are integrated in a same processor chip.
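The double-buffered mapping scheme of claims 6 and 7 — paired mapping areas that swap roles so the processor drains one area while the controller fills the other — can be sketched as a ping-pong buffer. A minimal sketch, assuming list-backed areas and illustrative instruction names; none of these identifiers come from the claims themselves.

```python
class PingPongRegions:
    """Toy model of a pair of mapping areas (e.g. the first and third
    mapping areas of claim 6) that switch roles per claim 7: the
    processor reads the active area while the standby area holds the
    subsequent instruction set. Illustrative names and data only."""

    def __init__(self, first, third):
        self.active = list(first)    # area the processor is currently reading
        self.standby = list(third)   # area holding the subsequent instruction set

    def consume(self):
        # Processor drains the active area one instruction at a time.
        return self.active.pop(0)

    def switch(self, next_set):
        # When the active area is exhausted, the areas swap roles and the
        # controller refills the new standby area from flash.
        self.active, self.standby = self.standby, list(next_set)

regions = PingPongRegions(first=["i0", "i1"], third=["i2", "i3"])
regions.consume()  # "i0"
regions.consume()  # "i1" — active area now empty
regions.switch(next_set=["i4", "i5"])  # third area becomes active
```

The swap is the whole trick: because the subsequent instruction set was staged in the standby area ahead of time, the processor never stalls waiting on the flash memory unit at the boundary between instruction sets.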
CN202110641303.1A 2021-06-08 2021-06-08 Processor system Pending CN113284532A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110641303.1A CN113284532A (en) 2021-06-08 2021-06-08 Processor system

Publications (1)

Publication Number Publication Date
CN113284532A true CN113284532A (en) 2021-08-20

Family

ID=77283818

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110641303.1A Pending CN113284532A (en) 2021-06-08 2021-06-08 Processor system

Country Status (1)

Country Link
CN (1) CN113284532A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination