CN217588059U - Processor system - Google Patents


Info

Publication number
CN217588059U
CN217588059U (application CN202121279201.1U)
Authority
CN
China
Prior art keywords
unit
instruction set
processor
random access
memory unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202121279201.1U
Other languages
Chinese (zh)
Inventor
赖振楠
Current Assignee
Hosin Global Electronics Co Ltd
Original Assignee
Hosin Global Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Hosin Global Electronics Co Ltd filed Critical Hosin Global Electronics Co Ltd
Priority to CN202121279201.1U
Application granted
Publication of CN217588059U

Abstract

The utility model provides a processor system comprising a processor unit, a random access memory unit, a memory control unit, a flash memory unit, and an internal bus. The processor unit is connected to the random access memory unit through the internal bus; the random access memory unit is connected to the flash memory unit through the memory control unit, and the flash memory unit is mapped through the memory control unit. Because the memory control unit maps the data stored in the flash memory unit to the random access memory unit, the processor unit can quickly obtain execution data without interacting with the flash memory unit, so the processor unit remains in an efficient running state at all times, greatly improving the operating efficiency of the computer system.

Description

Processor system
Technical Field
The utility model relates to the field of integrated circuits, and more specifically to a processor system.
Background
At present, DRAM (Dynamic Random Access Memory) technology has developed greatly; the main types in use are SDRAM (Synchronous DRAM) and its double-data-rate successors, DDR SDRAM and the 2nd-, 3rd-, and 4th-generation DDR2, DDR3, and DDR4 SDRAM. A DRAM subsystem of this type mainly consists of a memory control unit and DRAM chips (i.e., memory granules): the CPU (central processing unit) sends control commands, including clock signals, command control signals, and address signals, to the DRAM chips via the memory control unit, and these commands control the read/write operations of the data signals on the DRAM chips.
When a computer system executes a program, the programs and data to be executed by the CPU must first be loaded into the DRAM. During execution, the CPU fetches an instruction from the DRAM at the address held in the current program pointer register and executes it, then fetches and executes the next instruction, and so on until the program's instructions are complete. This working process is a continuous cycle of fetching and executing instructions, with the computed result finally written to the memory address specified by the instruction.
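The fetch-and-execute cycle described above can be illustrated with a minimal toy loop (the instruction names, the accumulator, and the memory layout here are invented for illustration and are not part of the patent):

```python
def run(dram, start=0):
    """Run a toy instruction stream until a HALT instruction."""
    pc = start              # program pointer register
    acc = 0                 # single accumulator, for illustration only
    while True:
        op, arg = dram[pc]  # fetch the instruction at the program pointer
        if op == "HALT":
            return acc
        elif op == "ADD":
            acc += arg      # execute the instruction
        elif op == "STORE":
            # write the result to the memory address named by the instruction
            dram[arg] = ("DATA", acc)
        pc += 1             # advance to the next instruction

# A tiny program: 5 + 7, store the result at address 10, halt.
program = {
    0: ("ADD", 5),
    1: ("ADD", 7),
    2: ("STORE", 10),
    3: ("HALT", None),
}
result = run(program)
```

Here `result` is 12 and address 10 holds the stored result, mirroring the "fetch, execute, write back to the instruction-specified address" cycle.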
However, because DRAM is expensive and its storage capacity is limited, most programs are stored in relatively low-cost mass storage devices such as hard disks and solid-state drives. When the computer runs, the CPU must move data from the mass storage device into the DRAM and write DRAM data back to the mass storage device. Moreover, the interaction speed between the mass storage device and the central processing unit is far lower than that between the central processing unit and the DRAM, which greatly reduces the overall operating efficiency of the computer system.
SUMMARY OF THE UTILITY MODEL
The technical problem to be solved by the present invention is to provide a processor system that addresses the above-mentioned problem of the high cost of DRAM in computer systems.
The technical solution adopted by the present invention to solve the above technical problem is a processor system comprising a processor unit, a random access memory unit, a memory control unit, a flash memory unit, and an internal bus. The processor unit is connected to the random access memory unit through the internal bus; the random access memory unit is connected to the flash memory unit through the memory control unit and maps the flash memory unit through the memory control unit. The processor unit reads the data instruction set of the random access memory unit through the internal bus. The memory control unit is configured to read and feed back the corresponding data instruction from the random access memory unit and, when the data instruction set in the random access memory unit meets a preset condition, to acquire the subsequent instruction set of the data instruction set from the flash memory unit and write it into the random access memory unit.
As a further improvement of the present invention, the processor unit includes a first processor unit and a second processor unit; the data instruction set includes a first instruction set and a second instruction set; and the preset condition includes a first preset condition and a second preset condition. The first instruction set is the instruction set waiting to be processed by the first processor unit, and the second instruction set is the instruction set waiting to be processed by the second processor unit;
the memory control unit is configured to read and feed back a corresponding instruction from the random access memory unit according to the requests of the first processor unit and the second processor unit, to acquire a subsequent instruction set of the first instruction set from the flash memory unit and write it into the random access memory unit when the first instruction set in the random access memory unit meets the first preset condition, and to acquire a subsequent instruction set of the second instruction set from the flash memory unit and write it into the random access memory unit when the second instruction set in the random access memory unit meets the second preset condition.
As a further improvement of the present invention, the first processor unit is further configured to read a subsequent instruction set of the second instruction set from the random access memory unit.
As a further improvement of the present invention, the first processor unit is further configured to cache the processed first instruction set in the random access memory unit, and the second processor unit is configured to read the processed first instruction set from the random access memory unit.
As a further improvement of the present invention, the random access memory unit includes a first mapping area and a second mapping area, the first mapping area being used for caching the first instruction set and its subsequent instruction sets, and the second mapping area being used for caching the second instruction set and its subsequent instruction sets.
As a further improvement of the present invention, the random access memory unit includes a first mapping area, a second mapping area, a third mapping area, and a fourth mapping area. The first mapping area is used for caching the first instruction set, and the third mapping area is used for caching a subsequent instruction set of the first instruction set; the second mapping area is used for caching the second instruction set, and the fourth mapping area is used for caching a subsequent instruction set of the second instruction set.
As a further improvement of the present invention, the first mapping area and the third mapping area switch roles with each other when caching the data instruction sets, and the second mapping area and the fourth mapping area likewise switch roles with each other when caching the data instruction sets.
As a further improvement of the present invention, the preset condition is that the number of data instruction sets waiting to be read by the processor unit in the random access memory unit is smaller than a preset value, or that the expected execution time of the data instruction sets waiting to be read in the random access memory unit is smaller than a preset time.
As a further improvement of the present invention, the internal bus further includes an arbiter unit for determining the execution order of requests from the processor units and/or for determining the processing order of the same data instruction set between different processor units.
As a further improvement of the present invention, the processor unit, the random access memory unit, the memory control unit, the flash memory unit, and the internal bus are integrated on the same processor chip.
As a further improvement of the present invention, the processor unit is integrated on a first chip; the random access memory unit, the memory control unit, the flash memory unit, and the internal bus are integrated on a second chip; and the first chip is electrically connected to the second chip.
As a further improvement of the present invention, the processor unit and the internal bus are integrated on a first chip; the random access memory unit, the memory control unit, and the flash memory unit are integrated on a second chip; and the first chip is electrically connected to the second chip.
In the processor system of the present invention, the memory control unit maps the data stored in the flash memory unit to the random access memory unit, so that the processor unit can quickly obtain execution data without interacting with the flash memory unit; the processor unit can therefore always remain in an efficient running state, greatly improving the operating efficiency of the computer system.
Drawings
Fig. 1 is a block diagram of a processor system according to an embodiment of the present invention;
fig. 2 is a schematic diagram (1) illustrating the operating principle of a processor system according to another embodiment of the present invention;
fig. 3 is a schematic diagram (2) illustrating the operating principle of a processor system according to another embodiment of the present invention;
fig. 4 is a schematic diagram (3) illustrating the operating principle of a processor system according to another embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clearly understood, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, the present invention provides a block diagram of a processor system, which can be applied to an electronic device such as a personal computer, a server, a mobile phone, or a tablet, to implement instruction storage and processing. The processor system of this embodiment includes a processor unit 12, a memory control unit 13, an internal bus 14, a random access memory unit 15, and a flash memory unit 16. The processor unit 12 and the random access memory unit 15 are respectively connected to the internal bus 14, the random access memory unit 15 is electrically connected to the memory control unit 13, the memory control unit 13 is electrically connected to the flash memory unit 16, and the random access memory unit 15 maps the flash memory unit 16 through the memory control unit 13. During operation of the processor system, the random access memory unit 15 stores the data instruction set currently being executed or about to be executed by the processor unit 12, while the flash memory unit 16 stores the data instruction sets that need to be preserved long-term.
In an embodiment of the present invention, the processor unit 12, the random access memory unit 15, the memory control unit 13, the flash memory unit 16 and the internal bus 14 may be integrated into a same processor chip, and the processor chip includes a unified external interface and is installed on a circuit board (e.g., a motherboard of a computer system) through the unified external interface, so that external devices can communicate with each other to realize input and output operations of signals.
In practical applications, the processor unit 12, the random access memory unit 15, the memory control unit 13, the flash memory unit 16, and the internal bus 14 may also be integrated into multiple chips. For example, the processor unit 12 is integrated on a first chip, while the random access memory unit 15, the memory control unit 13, the flash memory unit 16, and the internal bus 14 are integrated on a second chip, with the first chip electrically connected to the second chip. Alternatively, the processor unit 12 and the internal bus 14 are integrated on the first chip, and the random access memory unit 15, the memory control unit 13, and the flash memory unit 16 are integrated on the second chip, the two chips being electrically connected. The first chip and the second chip can be packaged into a whole using a system-in-package process, providing a unified external interface so that they can be mounted on a device such as a circuit board. In addition, the above units can each be integrated on its own chip (a processor chip, a DRAM chip, a memory control chip, and a flash memory chip, connected by the internal bus), and the processor system can then be packaged into a whole using a system-in-package process with a unified external interface, again allowing it to be mounted on a device such as a circuit board.
In one embodiment of the present invention, the internal bus 14 may include a memory bus (e.g., a DRAM bus), a peripheral bus (e.g., a PCIE bus), and a bridge. The processor unit 12 and the random access memory unit 15 are each electrically connected to the memory bus, the memory control unit 13 is electrically connected to the random access memory unit 15 through the DRAM bus, and the flash memory unit 16 is electrically connected to the memory control unit 13 through the peripheral bus.
In an embodiment of the present invention, the flash memory unit 16 may specifically adopt NAND memory chips, which have relatively large storage capacity, relatively low cost, and relatively slow data access speed, and which retain data in the power-off state. The random access memory unit 15 may specifically adopt memory chips such as DDR, DDR2, DDR3, DDR4, DDR5, or phase-change memory, which have relatively small storage capacity, relatively high cost, and relatively high data access speed; that is, the data storage capacity of the random access memory unit 15 is smaller than that of the flash memory unit 16. For cost reasons, the random access memory unit 15 is preferably a memory chip that does not retain stored data when power is turned off.
The memory control unit 13 maps the data stored in the large-capacity, low-cost flash memory unit 16 to the small-capacity, high-cost random access memory unit 15. This mapping operation occupies neither the clock cycles of the processor unit 12 nor the memory bus between the processor unit 12 and the random access memory unit 15, so high-speed data processing can be achieved even though the storage capacity of the random access memory unit 15 is small. The processor unit 12 can therefore always remain in an efficient running state, greatly improving the operating efficiency of electronic equipment such as computer systems.
In an embodiment of the present invention, the memory control unit 13 can read and feed back the corresponding data instruction from the random access memory unit 15 according to a request of the processor unit 12. That is, upon receiving a read/write request from the processor unit 12, the memory control unit 13 obtains the data instruction corresponding to the request from the random access memory unit 15 and sends it to the processor unit 12 through the memory bus, and writes the execution result of the processor unit 12 into the random access memory unit 15. This operation is the same as the data interaction between an existing processor and internal memory (e.g., DRAM) and is not described further here.
Further, the memory control unit 13 may implement the mapping between the random access memory unit 15 and the flash memory unit 16 as follows: when the data instruction set in the random access memory unit 15 meets the preset condition, the memory control unit 13 obtains the subsequent instruction set of the data instruction set from the flash memory unit 16 and writes it into the random access memory unit 15. Specifically, the preset condition may be: the number of instruction sets waiting to be read by the processor unit 12 in the random access memory unit 15 is smaller than a preset value, or the time in which the instruction sets waiting to be read in the random access memory unit 15 are expected to finish executing in the processor unit 12 is smaller than a preset time. When either condition holds, the memory control unit 13 acquires the subsequent instruction set from the flash memory unit 16 and stores it in the random access memory unit 15, so that the data instructions in the random access memory unit 15 are updated in time and the instruction execution of the processor unit 12 is not affected.
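A minimal sketch of this refill rule, assuming a queue of pending instruction sets each annotated with an estimated execution time (the function name, field names, and thresholds are illustrative only, not taken from the patent):

```python
def needs_refill(pending_sets, preset_count, preset_time_us):
    """Return True when the RAM-resident instruction queue meets the
    'preset condition', i.e. the memory control unit should fetch the
    subsequent instruction set from the flash memory unit."""
    # Condition 1: fewer pending instruction sets than the preset value.
    if len(pending_sets) < preset_count:
        return True
    # Condition 2: expected time to drain the queue is below the preset time.
    expected_us = sum(s["est_exec_us"] for s in pending_sets)
    return expected_us < preset_time_us

# Three pending instruction sets with estimated execution times (microseconds).
queue = [{"est_exec_us": 40}, {"est_exec_us": 25}, {"est_exec_us": 30}]
refill_by_count = needs_refill(queue, preset_count=5, preset_time_us=50)   # 3 < 5
refill_by_time = needs_refill(queue, preset_count=2, preset_time_us=200)   # 95 < 200
```

Either trigger suffices; when neither the count threshold nor the time budget is crossed, no refill is scheduled.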
Since the memory control unit 13 can directly predict the subsequent instruction set to be executed from the instruction set currently being executed by the processor unit 12, and update the contents of the random access memory unit 15 according to the prediction result, the processor unit 12 does not need to interact with the flash memory unit 16 and does not occupy the memory bus. That is, the operation of the memory control unit 13 is transparent to the processor unit 12: the data transfer operations of the random access memory unit 15 need not be performed or scheduled by the processor unit 12, the processor unit 12 simply treats the random access memory unit 15 and the flash memory unit 16 together as one ultra-large DRAM, and data is automatically persisted. The processor unit 12 can therefore always remain in an efficient running state, which suits fields such as cloud computing that place high demands on computing resources, and the operating efficiency of the system can be greatly improved.
As shown in fig. 2, in an embodiment of the present invention, the processor unit 12 includes a first processor unit 121 and a second processor unit 122. The first processor unit 121 and the second processor unit 122 may be different types of processors; for example, the first processor unit 121 is a main processor, such as an embedded processor including a plurality of cores, and the second processor unit 122 is an auxiliary processor, such as a graphics processor or a neural network processor. Accordingly, the data instruction set in the random access memory unit 15 comprises a first instruction set, which is the instruction set waiting to be processed by the first processor unit 121, and a second instruction set, which is the instruction set waiting to be processed by the second processor unit 122. The preset conditions include a first preset condition and a second preset condition.
The memory control unit 13 reads and feeds back the corresponding data instruction from the random access memory unit 15 according to the requests of the first processor unit 121 and the second processor unit 122, respectively, and when the first instruction set in the random access memory unit 15 meets the first preset condition, acquires the subsequent instruction set of the first instruction set from the flash memory unit 16 and writes the subsequent instruction set into the random access memory unit 15, and when the second instruction set in the random access memory unit meets the second preset condition, acquires the subsequent instruction set of the second instruction set from the flash memory unit 16 and writes the subsequent instruction set into the random access memory unit 15.
The first preset condition may specifically be: the number of first instruction sets waiting to be read by the first processor unit 121 in the random access memory unit 15 is smaller than a preset value, or the expected execution time in the first processor unit 121 of the first instruction sets waiting to be read in the random access memory unit 15 is smaller than a preset time. The second preset condition may specifically be: the number of second instruction sets waiting to be read by the second processor unit 122 in the random access memory unit 15 is smaller than the preset value, or the expected execution time in the second processor unit 122 of the second instruction sets waiting to be read in the random access memory unit 15 is smaller than the preset time. The preset value and the preset time can be adjusted according to the storage capacity of the random access memory unit 15, the clock frequencies of the first processor unit 121 and the second processor unit 122, and other factors.
In an embodiment of the present invention, the first processor unit 121 may be further configured to read a subsequent instruction set of the second instruction set from the random access memory unit 15. That is, all instruction sets cached in the random access memory unit 15 can be read by any of the processor units, such as the first processor unit 121 or the second processor unit 122. This speeds up the processing of data instruction sets across processor units, reduces how often the same data instruction set is transported from the flash memory unit 16 to the random access memory unit 15, and accelerates the processing of the same or similar data instruction sets by the processor units.
In an embodiment of the present invention, the first processor unit 121 is further configured to cache the processed first instruction set in the random access memory unit 15, and the second processor unit 122 is configured to read the processed first instruction set from the random access memory unit 15. After the first processor unit 121 reads and processes the first instruction set, it caches the processed first instruction set in the random access memory unit 15; the first instruction set processed by the first processor unit 121 may then be read and processed a second time by the second processor unit 122. In this embodiment, the same data instruction set is cached back into the random access memory unit after being processed by one processor unit, and other processor units may continue to read and process it.
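This hand-off of a processed instruction set between processor units via the random access memory unit can be sketched as follows (a toy model; the class name and the staging strings are invented for illustration):

```python
from collections import deque

class SharedRam:
    """Toy model of the shared RAM unit: the first processor caches its
    processed instruction set here, and the second processor reads it
    for secondary processing."""
    def __init__(self):
        self.processed = deque()

    def cache(self, item):
        self.processed.append(item)     # first processor writes back

    def read(self):
        return self.processed.popleft() # second processor reads

ram = SharedRam()
first_result = "first-set:stage1"        # output of the first processor unit
ram.cache(first_result)                   # cached back into the RAM unit
second_input = ram.read()                 # second processor unit reads it
second_result = second_input + ":stage2"  # secondary processing
```

The point of the hand-off is that the second pass never touches flash: the intermediate result lives only in the random access memory unit.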
Referring to fig. 4, in an embodiment of the present invention, the random access memory unit 15 includes a first mapping area 151 and a second mapping area 152, each of which is a section of storage space in the random access memory unit 15. The first mapping area 151 is used for caching the first instruction set and its subsequent instruction sets, and the second mapping area 152 is used for caching the second instruction set and its subsequent instruction sets. That is, the first mapping area 151 is used by the first processor unit 121 and the second mapping area 152 is used by the second processor unit 122.
The first instruction set and the second instruction set in the first mapping area 151 and the second mapping area 152 correspond to a certain section of instruction program 161, 162 in the flash memory unit 16, respectively, that is, the first mapping area 151 and the second mapping area 152 correspond to two "windows" of the flash memory unit 16, through which the first processor unit 121 and the second processor unit 122 can obtain the instruction program stored in the flash memory unit 16, respectively. The content displayed in the "window" is controlled by the memory control unit 13.
Specifically, when the first instruction set in the first mapping area 151 meets the first preset condition, the memory control unit 13 obtains the subsequent instruction set of the first instruction set from the flash memory unit 16 and writes the subsequent instruction set into the first mapping area 151, and when the second instruction set in the second mapping area 152 meets the second preset condition, obtains the subsequent instruction set of the second instruction set from the flash memory unit 16 and writes the subsequent instruction set into the second mapping area 152.
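The "window" behavior of a mapping area, in which the memory control unit serves processor reads from RAM and slides the window forward to the subsequent instruction set, can be sketched as follows (a simplified toy model; the class, method names, and window size are assumptions made for illustration):

```python
class MemoryControlUnit:
    """Toy model of one mapping 'window': a region in RAM mirrors a
    sliding window over the instruction program stored in flash."""
    def __init__(self, flash, window_size):
        self.flash = flash                     # full program in the flash unit
        self.window_size = window_size
        self.base = 0                          # flash offset currently mapped
        self.ram_window = flash[:window_size]  # mapped region in RAM

    def read(self, index):
        """Processor read: always served from the RAM window, never flash."""
        return self.ram_window[index]

    def advance(self):
        """Map the subsequent instruction set into the same window."""
        self.base += self.window_size
        self.ram_window = self.flash[self.base:self.base + self.window_size]

flash_program = [f"insn{i}" for i in range(8)]
mcu = MemoryControlUnit(flash_program, window_size=4)
first = mcu.read(0)   # served from the initial window
mcu.advance()         # window now shows the subsequent instruction set
after = mcu.read(0)   # same RAM offset, new flash content
```

The processor's view (offset 0 of the window) stays fixed while the memory control unit changes what the window displays, matching the "window" metaphor above.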
In another embodiment of the present invention, the random access memory unit 15 comprises a first mapping region, a second mapping region, a third mapping region and a fourth mapping region, wherein the first mapping region is used for caching a first instruction set, and the third mapping region is used for caching a subsequent instruction set of the first instruction set; the second mapping region is for caching a second instruction set and the fourth mapping region is for caching a subsequent instruction set of the second instruction set. By the method, the process of establishing the mapping is separated from the process of reading the instruction by the processor unit, so that the process of establishing the mapping does not influence the instruction execution operation of the processor unit, and the efficiency of executing the instruction by the processor unit is further improved.
Specifically, one of the first mapping area and the third mapping area serves as a main mapping area and the other as a standby mapping area. The main mapping area stores the data instructions currently being executed and about to be executed by the first processor unit 121, while the memory control unit 13 stores the subsequent instruction set of the first instruction set in the main mapping area into the standby mapping area; when a condition is met, the main mapping area and the standby mapping area are interchanged, that is, the main and standby states of the first mapping area and the third mapping area are switched. For example, the first and third mapping areas may switch roles according to a jump instruction executed by the first processor unit 121 (i.e., a jump instruction in the main mapping area). When the first mapping area is the main mapping area, the first processor unit 121 obtains the data instruction from the first mapping area through the memory control unit 13 according to the program address specified by the program counter. Under normal conditions, when the program counter finishes one data instruction, the original address plus 1 automatically becomes the program address of the next data instruction, so the first processor unit 121 acquires the next data instruction from the first mapping area according to the updated program address. If the data instruction executed by the first processor unit 121 is a jump instruction, the program counter instead takes the original address plus or minus the jump value n as the program address of the next data instruction, and the first processor unit 121 obtains the next data instruction from the first mapping area accordingly. When the program address specified by the program counter falls in the third mapping area (i.e., the standby mapping area), the main mapping area and the standby mapping area complete the switch.
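The main/standby switching driven by the program counter can be sketched as a ping-pong pair of regions (a simplified toy model; the region size, the refill rule, and all names are assumptions made for illustration, not details specified by the patent):

```python
class PingPongMapper:
    """Toy model of the main/standby mapping areas: the processor
    executes from the main area while the memory control unit fills the
    standby area; when the program counter crosses into the standby
    area, the roles swap."""
    def __init__(self, region_size):
        self.size = region_size
        self.main_base = 0                  # flash offset of the main area
        self.standby_base = region_size     # standby holds the subsequent set

    def region_for(self, pc):
        if self.main_base <= pc < self.main_base + self.size:
            return "main"
        if self.standby_base <= pc < self.standby_base + self.size:
            # PC entered the standby area: swap main/standby, and stage
            # the next subsequent set right behind the new main area.
            self.main_base, self.standby_base = (
                self.standby_base, self.main_base + 2 * self.size)
            return "main"   # after the swap this area is the new main
        return "miss"

mapper = PingPongMapper(region_size=4)
r1 = mapper.region_for(2)   # inside main [0, 4)
r2 = mapper.region_for(5)   # inside standby [4, 8): triggers a swap
r3 = mapper.region_for(9)   # inside the new standby [8, 12): swaps again
```

Every access is answered from a "main" area, so refilling never stalls the processor, which is the stated goal of separating mapping from instruction reads.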
Similarly, one of the second mapping area and the fourth mapping area is used as a main mapping area, and the other mapping area is used as a standby mapping area for switching when the data instruction set is cached.
As shown in fig. 3 and 4, in an embodiment of the present invention, the internal bus 14 may further include an arbiter unit 141, which is configured to determine the execution order of requests from the processor unit 12. Specifically, when the arbiter unit 141 receives multiple requests from the processor unit 12 at the same time, it may determine the priority of the requests and respond to the higher-priority request first, that is, return the data instruction corresponding to the higher-priority request to the processor unit 12 ahead of the others, so that programs can be processed in parallel without affecting the execution of the main program. When the processor system contains multiple processor units, such as the first processor unit 121 and the second processor unit 122, the arbiter unit 141 of the internal bus 14 likewise determines the priorities of requests from the different processor units and responds to the higher-priority request first. In some embodiments, the arbiter unit 141 may also coordinate or determine the processing order of the same data instruction set between different processor units; for example, according to the data processing sequence, the first processor unit processes the data, the processed data is cached in the random access memory unit 15, and the second processor unit then reads the processed data instruction set for secondary processing, thereby improving data processing efficiency. In this manner, if there are more processor units, the processing of data can also be divided among processor units of different functions according to the type of the data instruction set.
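Priority-ordered arbitration of concurrent requests, as described above, can be sketched with a priority queue (an illustrative model only; the patent does not specify the arbiter's algorithm, and the processor names and addresses below are invented):

```python
import heapq

class ArbiterUnit:
    """Toy priority arbiter: concurrent requests are granted in
    priority order (lower number = higher priority), with arrival
    order as the tie-breaker."""
    def __init__(self):
        self._queue = []
        self._seq = 0   # monotonically increasing arrival counter

    def request(self, priority, processor, address):
        heapq.heappush(self._queue, (priority, self._seq, processor, address))
        self._seq += 1

    def grant_next(self):
        priority, _, processor, address = heapq.heappop(self._queue)
        return processor, address

arb = ArbiterUnit()
arb.request(priority=2, processor="cpu1", address=0x100)
arb.request(priority=0, processor="gpu", address=0x200)   # highest priority
arb.request(priority=2, processor="cpu2", address=0x300)
order = [arb.grant_next()[0] for _ in range(3)]
```

The high-priority request jumps the queue while equal-priority requests keep their arrival order, which is the behavior the arbiter unit 141 needs to answer urgent requests "ahead of the others" without starving the rest.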
The utility model further provides a processor system comprising a processor chip, a random access memory chip, a memory control chip, a flash memory chip, and an internal bus. The processor chip is connected to the random access memory chip through the internal bus, and the random access memory chip maps the flash memory chip through the memory control chip. The memory control chip is configured to read and feed back a corresponding instruction from the random access memory chip according to a request of the processor chip and, when the data instruction set in the random access memory chip meets a preset condition, to acquire the subsequent instruction set of the data instruction set from the flash memory chip and write it into the random access memory chip. The processor chip, random access memory chip, memory control chip, flash memory chip, and internal bus are packaged into a whole using a system-in-package process and provide a unified external interface, so that the system can be mounted on devices such as a circuit board.
The processor system in this embodiment and the processor system in the embodiment corresponding to fig. 1-2 belong to the same concept, and specific implementation processes thereof are described in detail in the corresponding embodiments, and technical features in the embodiments of fig. 1-2 are correspondingly applicable in this embodiment, and are not described herein again.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art. Furthermore, the embodiments of the present invention and features of the embodiments may be combined with each other without conflict.

Claims (10)

1. A processor system is characterized by comprising a processor unit, a random access memory unit, a memory control unit, a flash memory unit and an internal bus, wherein the processor unit is connected with the random access memory unit through the internal bus, the random access memory unit is connected with the flash memory unit through the memory control unit and maps the flash memory unit through the memory control unit, and the processor unit reads a data instruction set of the random access memory unit through the internal bus; the memory control unit is configured to read a corresponding data instruction from the random access memory unit and feed it back, and, when the data instruction set in the random access memory unit meets a preset condition, to acquire a subsequent instruction set of the data instruction set from the flash memory unit and write it into the random access memory unit.
2. The processor system according to claim 1, wherein the processor unit comprises a first processor unit and a second processor unit, the data instruction set comprises a first instruction set and a second instruction set, the preset conditions comprise a first preset condition and a second preset condition, the first instruction set is an instruction set waiting to be processed by the first processor unit, the second instruction set is an instruction set waiting to be processed by the second processor unit;
the memory control unit is configured to read and feed back a corresponding instruction from the random access memory unit according to a request of the first processor unit and the second processor unit, acquire a subsequent instruction set of the first instruction set from the flash memory unit and write the subsequent instruction set into the random access memory unit when a first instruction set in the random access memory unit meets a first preset condition, and acquire a subsequent instruction set of the second instruction set from the flash memory unit and write the subsequent instruction set into the random access memory unit when a second instruction set in the random access memory unit meets a second preset condition.
3. The processor system according to claim 2, wherein said first processor unit is configured to read a subsequent instruction set of said second instruction set from said random access memory unit.
4. The processor system according to claim 2, wherein the first processor unit is configured to cache the processed first instruction set and/or its processing results to the random access memory unit, and the second processor unit is configured to continue processing the processed first instruction set and/or processing results.
5. The processor system according to claim 3 or 4, wherein the random access memory unit comprises a first mapping region for caching the first instruction set and its subsequent instruction sets, and a second mapping region for caching the second instruction set and its subsequent instruction sets.
6. The processor system according to claim 3 or 4, wherein the random access memory unit comprises a first mapping region, a second mapping region, a third mapping region and a fourth mapping region, the first mapping region being configured to cache the first instruction set and the third mapping region being configured to cache a subsequent instruction set of the first instruction set; the second mapping region is configured to cache the second instruction set and the fourth mapping region is configured to cache a subsequent instruction set of the second instruction set.
7. The processor system according to claim 6, wherein the first mapping region and the third mapping region are configured to alternate with each other when caching the data instruction set, and the second mapping region and the fourth mapping region are configured to alternate with each other when caching the data instruction set.
8. The processor system according to claim 1, wherein the preset condition is that the number of data instructions waiting to be read by the processor unit in the random access memory unit is smaller than a preset value, or that the time for which the data instruction set waiting to be read in the random access memory unit is expected to execute in the processor unit is shorter than a preset time.
9. The processor system according to claim 1, wherein the internal bus further comprises an arbiter unit for determining an execution order of requests of the processor units and/or for determining a processing order of a same set of data instructions between different processor units.
10. The processor system according to claim 1, wherein the processor unit, the random access memory unit, the memory control unit, the flash memory unit and the internal bus are integrated on a same processor chip.
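The region alternation of claims 6 and 7 amounts to double buffering: while the processor consumes the instruction set in one mapping region, the memory control unit fills the other, and the roles swap. A minimal sketch, with all names (`PingPongRegions`, `loader`, the sample instruction sets) assumed for illustration:

```python
class PingPongRegions:
    """Two mapping regions that alternate between 'being read by the
    processor' and 'being refilled from flash', as in claims 6 and 7."""
    def __init__(self, loader):
        self.loader = loader       # fetches the next instruction set from flash
        self.active = loader()     # region the processor currently reads
        self.standby = loader()    # region prefilled with the subsequent set

    def swap(self):
        # Processor exhausted the active region: the prefilled standby
        # region becomes active, and the old active region is refilled.
        self.active, self.standby = self.standby, self.loader()
        return self.active

sets = iter([["a1", "a2"], ["b1", "b2"], ["c1", "c2"]])
regions = PingPongRegions(lambda: next(sets, []))
first = regions.active    # ["a1", "a2"] — read while ["b1", "b2"] is staged
second = regions.swap()   # ["b1", "b2"]
third = regions.swap()    # ["c1", "c2"]
```

The point of the alternation is that the processor always has a fully loaded region to read, so the flash-to-RAM copy overlaps with execution instead of serializing with it.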
CN202121279201.1U 2021-06-08 2021-06-08 Processor system Active CN217588059U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202121279201.1U CN217588059U (en) 2021-06-08 2021-06-08 Processor system


Publications (1)

Publication Number Publication Date
CN217588059U true CN217588059U (en) 2022-10-14

Family

ID=83525308


Country Status (1)

Country Link
CN (1) CN217588059U (en)

Similar Documents

Publication Publication Date Title
US10296217B2 (en) Techniques to configure a solid state drive to operate in a storage mode or a memory mode
EP3014623B1 (en) Hybrid memory device
CN110941395B (en) Dynamic random access memory, memory management method, system and storage medium
CN110737608B (en) Data operation method, device and system
US20210064535A1 (en) Memory system including heterogeneous memories, computer system including the memory system, and data management method thereof
CN108139994B (en) Memory access method and memory controller
US20190042415A1 (en) Storage model for a computer system having persistent system memory
CN114647446A (en) Storage-level storage device, computer module and server system
US11055220B2 (en) Hybrid memory systems with cache management
US10901883B2 (en) Embedded memory management scheme for real-time applications
CN116149554B (en) RISC-V and extended instruction based data storage processing system and method thereof
CN217588059U (en) Processor system
US6862675B1 (en) Microprocessor and device including memory units with different physical addresses
EP4060505A1 (en) Techniques for near data acceleration for a multi-core architecture
EP4071583A1 (en) Avoiding processor stall when accessing coherent memory device in low power
CN111177027B (en) Dynamic random access memory, memory management method, system and storage medium
CN113609034A (en) Processor system
US11526441B2 (en) Hybrid memory systems with cache management
CN113284532A (en) Processor system
CN113900711A (en) SCM (Single chip multiple Access) -based data processing method and device and computer-readable storage medium
WO2021196160A1 (en) Data storage management apparatus and processing core
CN114647599A (en) Computer network and data processing method
JPH03252856A (en) Processing system for program
JP2002049607A (en) Microcomputer with built-in cache
JP2002259209A (en) Arithmetic processing system

Legal Events

Date Code Title Description
GR01 Patent grant