CN115509959A - Processing system, control method, chip, and computer-readable storage medium - Google Patents

Processing system, control method, chip, and computer-readable storage medium

Info

Publication number
CN115509959A
CN115509959A (application CN202211048195.8A)
Authority
CN
China
Prior art keywords
memory
cache
mapping data
management unit
page table
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211048195.8A
Other languages
Chinese (zh)
Inventor
刘重力
朱凌刚
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202211048195.8A priority Critical patent/CN115509959A/en
Publication of CN115509959A publication Critical patent/CN115509959A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02: Addressing or allocation; Relocation
    • G06F12/08: Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10: Address translation
    • G06F12/1009: Address translation using page tables, e.g. page table structures
    • G06F12/0802: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806: Multiuser, multiprocessor or multiprocessing cache systems
    • G06F15/00: Digital computers in general; Data processing equipment in general
    • G06F15/76: Architectures of general purpose stored program computers
    • G06F15/78: Architectures of general purpose stored program computers comprising a single central processing unit
    • G06F15/7807: System on chip, i.e. computer system on a single chip; System in package, i.e. computer system on one or more chips in a single package
    • G06F15/781: On-chip cache; Off-chip memory
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

A processing system, a control method, a chip, and a computer-readable storage medium are provided. The processing system comprises: a memory management unit configured to manage a memory; and a processor on which an operating system runs, the processor being configured to: generate first mapping data, the first mapping data indicating a mapping relation between a first virtual address of the operating system and a first physical address of the memory; store the first mapping data in a cache; and, if the memory management unit has not yet fetched the first mapping data from the cache, control the cache so that the first mapping data is not replaced into the memory. According to the embodiments of the application, the first mapping data is prevented from being replaced from the cache into the memory, which avoids repeated memory accesses between the memory management unit and the memory, reduces power consumption, and improves system performance.

Description

Processing system, control method, chip, and computer-readable storage medium
Technical Field
The present embodiments relate to the field of data storage technologies, and in particular, to a processing system, a control method, a chip, and a computer-readable storage medium.
Background
In today's larger-scale SoC chips, an operating system generally needs to manage the storage space based on virtual addresses, and the mapping relationship between virtual addresses and physical addresses is usually stored in memory in the form of a page table. The operating system relies on a memory management unit to perform the virtual-to-physical address translation defined by the page table.
A newly created page table usually resides in the cache of the central processing unit. If the memory management unit does not fetch the page table from the cache in time, the page table may be replaced into memory by the cache hardware. The memory management unit then has to fetch the page table from memory, which increases power consumption and causes a performance loss.
Disclosure of Invention
The embodiments of the present application provide a processing system, a control method, a chip, and a computer-readable storage medium, and various aspects of the embodiments of the present application are introduced below.
In a first aspect, a processing system is provided, comprising: a memory management unit configured to manage a memory; and a processor on which an operating system runs, the processor being configured to: generate first mapping data, wherein the first mapping data indicates a mapping relation between a first virtual address of the operating system and a first physical address of the memory; store the first mapping data in a cache; and, if the memory management unit has not yet fetched the first mapping data from the cache, control the cache so that the first mapping data is not replaced into the memory.
In a second aspect, a control method is provided, including: generating first mapping data, wherein the first mapping data indicates a mapping relation between a first virtual address of an operating system and a first physical address of a memory; storing the first mapping data in a cache; and, if a memory management unit has not yet fetched the first mapping data from the cache, controlling the cache so that the first mapping data is not replaced into the memory.
In a third aspect, a chip is provided, comprising the processing system according to the first aspect.
In a fourth aspect, there is provided a computer-readable storage medium having stored thereon a computer program for executing the control method according to the second aspect.
In the embodiments of the present application, the first mapping data is created in the cache and cannot be replaced from the cache into the memory before the memory management unit fetches it. This prevents the first mapping data from being replaced from the cache into the memory, avoids repeated memory accesses between the memory management unit and the memory, reduces memory power consumption, and improves system performance.
Drawings
FIG. 1 is a flow diagram illustrating a memory management unit obtaining a page table in a cache.
FIG. 2 is a flow diagram illustrating a memory management unit obtaining a page table in memory.
FIG. 3 is a schematic structural diagram of a processing system according to an embodiment of the present application.
FIG. 4 is a schematic diagram of one possible implementation of the processing system of FIG. 3.
FIG. 5 is a schematic flowchart of a control method provided in an embodiment of the present application.
FIG. 6 is a flow diagram of one possible implementation of the method of FIG. 5.
FIG. 7 is a schematic structural diagram of a chip provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present application.
Currently, in a system-on-chip (SoC), if a more complex operating system is involved, such as Android or Linux, the operating system generally needs to manage the storage space based on virtual addresses. An SoC is a complete system integrated on a single chip, generally including a central processing unit (CPU), a memory, and peripheral circuits. A virtual address is the address used by a visitor (e.g., the CPU) that needs to access the chip's memory space, while a physical address is the real address of that memory space. The mapping from virtual addresses to physical addresses is typically stored in memory in the form of page tables.
A memory management unit (MMU) handles the translation between virtual addresses and physical addresses according to the page table. The MMU is an internal module of a chip whose basic function is address mapping: through this mapping, an access issued with a virtual address reaches the specified physical address.
When an application newly creates a page table, the new page table usually resides in the cache of the CPU or in memory, and the memory management unit needs to fetch the page table from the cache or the memory.
A memory device is a device for storing information, located on-chip or off-chip relative to the processor, and configured to retrieve and store data associated with physical memory addresses. Memory, also called internal memory or main memory, is distinguished from external (secondary) storage. By working principle, memory can be divided into random access memory (RAM) and read-only memory (ROM), RAM being the more important here. The storage hierarchy may also include a cache, which comprises one or more levels of cache memory.
A cache is a memory capable of high-speed data exchange. It is usually small in capacity and stores the data that the CPU accesses most frequently, but it is much faster than main memory, and exchanging data with the CPU through the cache can markedly improve the processing speed of a computer system. The cache works as follows: when the CPU needs to read a piece of data, it first searches the CPU cache; if the data is found, it is read immediately and sent to the CPU for processing. If it is not found, the data is read from the relatively slow memory and sent to the CPU, and the data block containing it is brought into the cache, so that subsequent reads of that block can be served from the cache without accessing memory.
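The hit/miss flow described above can be sketched as a toy direct-mapped cache. All sizes, names, and layout below are illustrative assumptions for this sketch, not details taken from the patent:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical direct-mapped cache: 4 lines of 16 bytes each. */
#define NUM_LINES 4
#define LINE_SIZE 16

struct cache_line {
    int valid;
    uint32_t tag;
    uint8_t data[LINE_SIZE];
};

static struct cache_line cache[NUM_LINES]; /* zero-initialized: all invalid */
static uint8_t memory[1024];               /* backing "main memory" */

/* Read one byte: on a hit, serve it from the cache; on a miss,
 * load the whole containing block from memory first. */
uint8_t cache_read(uint32_t addr, int *hit)
{
    uint32_t block = addr / LINE_SIZE;
    uint32_t index = block % NUM_LINES;
    uint32_t tag = block / NUM_LINES;
    struct cache_line *line = &cache[index];

    if (line->valid && line->tag == tag) {
        *hit = 1; /* found in cache: fast path */
    } else {
        *hit = 0; /* miss: fill the line from slow memory */
        memcpy(line->data, &memory[block * LINE_SIZE], LINE_SIZE);
        line->valid = 1;
        line->tag = tag;
    }
    return line->data[addr % LINE_SIZE];
}
```

A first access to an address misses and fills the line; a second access to the same block is then served from the cache.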
FIG. 1 is a flow diagram illustrating a memory management unit obtaining a page table from the cache. For ease of description, a shared cache in a multi-core processor system is taken as an example. As shown in FIG. 1, the processor system includes a CPU 110, a memory management unit 130, and a memory 140. The cache 120 is located in the CPU 110; it is a third-level cache and also the shared cache of the multi-core processor. The memory management unit 130 performs the virtual-to-physical address translation defined by the page table, and normally completes one memory access instruction through the following steps:
In step one, in response to an instruction from the application software to create a page table, the page table is newly created in the cache 120. When software newly creates a page table, the new page table will most likely reside in the cache 120 of the CPU 110. Typically, the page tables in the cache act as fast tables and the page tables in memory as slow tables.
In step two, the memory management unit 130 fetches a new page table from the cache 120 in time.
In step three, the CPU 110 reads and writes the memory 140 as needed by a process, for example a process that refers to data covered by the new page table. The access passes through the memory management unit 130, which performs the virtual-to-physical address translation using the new page table.
In step four, the memory management unit 130 transmits the access instruction to the memory 140, and performs a memory access operation on the memory 140 according to the physical address. At this point, the transfer of the entire instruction is complete.
When the application software newly creates a page table in the CPU's cache, if the memory management unit does not fetch the page table from the cache in time, the page table may be replaced into memory by the cache hardware. The memory management unit must then fetch the page table from memory, whose access speed is several times slower than that of the cache, causing increased power consumption and a performance loss.
FIG. 2 is a flow diagram illustrating a memory management unit obtaining a page table from memory. As shown in FIG. 2, the processor system includes a CPU 110, a memory management unit 130, and a memory 140, with the cache 120 located in the CPU 110. The memory management unit 130 performs the virtual-to-physical address translation of the page table held in the memory 140; completing one memory access instruction requires the following steps:
In step one, in response to an instruction from the application software to create the page table, the page table is newly created in the cache 120.
In step two, the memory management unit 130 does not fetch the new page table from the cache 120 in time. The cache space is typically small, and since the page table is not removed from the cache 120 in time, it is replaced into the memory 140.
In step three, via the memory management unit 130, the CPU 110 accesses the memory 140 on behalf of processes that need to read and write its data, for example processes that involve the new page table.
In step four, since the memory management unit 130 does not hold the required page table, it initiates an access to the memory 140.
In step five, memory management unit 130 retrieves the page table from memory 140.
In step six, memory management unit 130 completes the virtual address to physical address translation of the new page table. The memory management unit 130 transmits the access instruction to the memory 140, and performs read/write operation on the memory 140 according to the physical address. At this point, the transfer of the entire instruction is complete.
Thus, when the page table has already been replaced into memory, a single read or write by the CPU or another peripheral requires more steps than when the page table is fetched in time by the MMU, and repeated memory accesses occur between the MMU and the memory, resulting in increased power consumption, significant added latency, and a performance loss.
It should be noted that the above scenario, in which replacing the page table into memory causes a performance loss, is only an example. The embodiments of the present application can be applied to any scenario in which virtual-to-physical address mappings in the cache are replaced before being fetched by the MMU, causing increased power consumption.
Therefore, how to develop a solution for memory management to reduce power consumption waste is a problem to be solved.
Based on this, an embodiment of the present application proposes a processing system, which is described in detail below.
Fig. 3 is a schematic structural diagram of a processing system according to an embodiment of the present application. As shown in fig. 3, the processing system includes a memory management unit 310, a memory 320, a processor 330, and a cache 340.
The memory management unit 310 is connected to the memory 320 and manages the memory 320. Its basic function is address mapping: through this mapping, an access issued with a virtual address reaches the specified physical address, for example by translating a virtual address in a page table into a physical address.
The memory management unit 310 may be the memory management unit corresponding to the processor 330, or the memory management unit corresponding to a first external device. External devices are also called peripheral devices, or peripherals for short. The memory management unit corresponding to an external device is usually called an input/output memory management unit (IOMMU), because such external devices are input and output devices. The first external device may be any external device connected to the processing system.
External devices can be roughly classified into three categories: human-computer interaction devices (e.g., printers, displays, plotters, speech synthesizers), storage devices for computer information (e.g., magnetic disks, optical disks, magnetic tapes), and machine-to-machine communication devices (e.g., modems).
The memory 320 may be RAM, ROM, etc. The memory 320 may be accessed according to the physical address, such as reading data, writing data, and the like.
The processor 330 is used for processing instructions, such as receiving an operation instruction of an application program and a new page table instruction, and performing processing. The processor 330 may be a single-core processor, or may be a multi-core processor, such as a four-core or eight-core processor.
A multi-core processor integrates two or more complete computation cores into a single processor. The system bus can then support multiple processors, with all bus control signals and command signals provided by the bus controller.
Cache 340 typically stores data that is frequently accessed by the CPU. Cache 340 is typically located in CPU 330. In some embodiments, cache 340 may also be located external to CPU 330. A page table may be newly built in cache 340 for fast access to the page table. The Cache 340 may be an independent Cache or a shared Cache. For example, in a multi-core processor system, cache 340 may be a shared cache of the multi-core processor system.
In some implementations, the three levels of cache are all integrated within the CPU: a first-level cache L1, a second-level cache L2, and a third-level cache L3. All of them act as high-speed data buffers between the CPU and memory, with L1 closest to the CPU core, L2 next, and L3 farthest. In speed, L1 is the fastest, L2 next, and L3 the slowest; in capacity, L1 is the smallest, L2 larger, and L3 the largest.
In a multi-core processor system, each core processor usually has a separate first-level cache L1 and second-level cache L2, and the third-level cache L3 is generally a shared cache. The application of the L3 cache can further reduce the memory delay and simultaneously improve the performance of the processor during large-data-volume calculation.
An operating system, such as Android or Linux, runs on the processor 330. Operating systems typically manage the storage space based on virtual addresses. When the operating system newly creates mapping data (the instruction may come from application software or an application program), the processor 330 may perform the following steps:
in step one, first mapping data indicating a mapping relationship between a first virtual address of the operating system and a first physical address of the memory 320 is generated. The virtual address is an address used by a visitor (e.g., CPU) who accesses a memory space of the chip, and is also called a logical address or an effective address. The first virtual address may be any address of a virtual address space. The physical address is a real address of the chip memory space, and the first physical address may be any address of the chip physical address space.
In some implementations, the first mapping data may be, for example, a page table. A page table is a data structure that defines the rules for mapping virtual addresses to physical addresses. A logical address generated by the CPU can be divided into a page number and an in-page offset; the page number serves as an index into the page table, whose entries hold the base address of each page in physical memory. The in-page offset is combined with that base address to determine the physical memory address. The set of all logical addresses is the logical address space.
The physical address is the actual address of the memory, and the set of all physical addresses in the memory corresponding to the logical address is the physical address space.
The page table typically contains Page Table Entries (PTEs) to map virtual addresses to physical addresses. The page table may be single level or multi-level depending on the base page size, the number of page table entries at each level, and the number of bits of virtual address space supported.
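The page-number/offset split described above can be made concrete with a minimal single-level page table walk. The 4 KiB page size, table depth, and all names are assumptions chosen for illustration, not values specified by the patent:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative single-level page table: 4 KiB pages -> 12-bit offset. */
#define PAGE_SHIFT 12
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NUM_PAGES  16

/* A page table entry (PTE): valid bit plus the physical page base. */
struct pte {
    int valid;
    uint32_t phys_base; /* base address of the page in physical memory */
};

static struct pte page_table[NUM_PAGES];

/* Translate a virtual address: the page number indexes the page table,
 * and the in-page offset is combined with the page's base address. */
int translate(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t page_number = vaddr >> PAGE_SHIFT;
    uint32_t offset = vaddr & (PAGE_SIZE - 1);

    if (page_number >= NUM_PAGES || !page_table[page_number].valid)
        return -1; /* no mapping: would raise a page fault */

    *paddr = page_table[page_number].phys_base | offset;
    return 0;
}
```

A multi-level table would repeat the index/descend step once per level, but the final offset combination is the same.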
In step two, the first mapping data is stored in the cache 340, which also allows the processor 330 to access the data quickly.
In step three, if the memory management unit 310 does not retrieve the first mapping data from the cache 340, the cache 340 is controlled such that the first mapping data is not replaced into the memory 320.
The processor 330 controls the cache 340 so that the first mapping data is not replaced into the memory 320. Therefore, when the first mapping data is accessed, repeated accesses between the memory management unit and the memory are avoided, reducing wasted power consumption.
In some implementations, as shown by the dashed lines in FIG. 3, accessing the newly created first mapping data in the cache, e.g., the page table, involves the following steps:
In step one, in response to an instruction to create a new page table, the CPU 330 places the new page table in an address space that can be cached, such as in the cache 340. The instruction may come from application software or an application program.
In step two, the memory management unit 310 fetches the new page table from the cache 340. If the memory management unit 310 does not fetch the page table in time, the page table must not be replaced from the cache 340 into the memory 320 before the memory management unit 310 fetches it.
In step three, the CPU 330 reads and writes data in the memory 320 for a process, for example one that involves the new page table. The access passes through the memory management unit 310, which performs the virtual-to-physical address translation using the new page table.
In step four, the memory management unit 310 transmits the access instruction to the memory 320 and performs the read/write operation on the memory 320 according to the translated physical address. At this point, the transmission of the whole access instruction is complete.
In some implementations, to make it easy to check whether the first mapping data in the cache has been fetched, flag information, such as a dedicated flag bit, may be set for the first mapping data to be fetched. The flag information indicates whether the first mapping data has been taken away by the memory management unit, and the flag bit represents the corresponding states. For example, the flag bit may be "0" while the page table has not yet been fetched; after the MMU fetches the page table, the flag bit is set to "1", and resetting clears it back to "0". While the flag bit is not set, the page table cannot be replaced into memory. In some embodiments, the polarity is reversed: the flag bit is "1" while the page table has not been fetched and is switched to "0" after the MMU fetches it.
The flag information may be set in a cache line of the cache: the first mapping data is stored in a first cache line, and the flag information set in that cache line indicates whether the first mapping data has been taken away by the memory management unit.
The first cache line may be any one of the cache lines of the cache. A cache line is the smallest unit of cached data, comprising a memory block together with other information. Each cache line typically includes a valid bit, which indicates whether the line contains meaningful information, and a tag field of T bits, which uniquely identifies the memory address of the block stored in the line.
In some implementations, when the MMU has fetched the first mapping data, the flag bit of its cache line is set: for example, the flag bit may be "0" while the data has not been fetched, set to "1" after the MMU fetches it, and cleared back to "0" on reset. While the flag bit of a cache line is not set, that cache line must not be replaced into memory by the cache hardware.
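The flag-bit scheme can be sketched as follows; the structure and field names are hypothetical, chosen only to illustrate how a replacement decision could consult the flag bit:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the flag-bit idea: a cache line holding newly created
 * mapping data carries a "fetched by MMU" flag, and the replacement
 * logic skips lines whose flag is still clear. Illustrative only. */
struct cache_line {
    int valid;
    int holds_mapping; /* line stores newly created mapping data */
    int mmu_fetched;   /* flag bit: 0 = not yet fetched, 1 = fetched */
    uint32_t tag;
};

/* A line may be evicted (replaced into memory) only if it does not
 * hold mapping data that the MMU has not yet taken away. */
int may_evict(const struct cache_line *line)
{
    if (line->valid && line->holds_mapping && !line->mmu_fetched)
        return 0; /* pinned until the MMU fetches the mapping data */
    return 1;
}

/* Called when the MMU fetches the page table from this line. */
void mark_fetched(struct cache_line *line)
{
    line->mmu_fetched = 1; /* set the flag; the line may now be replaced */
}
```

With the opposite polarity described above, the same check would test the flag against "1" instead of "0"; only the encoding changes.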
Generally, cache space is small and heavily used, so a newly created page table should not occupy cache space for a long time. In some implementations, the processor 330 may therefore actively trigger the memory management unit 310 to take away the newly created first mapping data, such as the page table. For example, the processor 330 may actively trigger its own corresponding memory management unit to fetch the newly created page table in time, avoiding long-term occupation of cache space; likewise, it may actively trigger the memory management unit corresponding to the first external device to take away the newly created page table in time.
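The active-trigger idea can be sketched as a processor-side scan that asks the MMU to fetch every cached mapping whose flag bit is still clear. This is an illustrative software model under assumed names, not the patent's hardware implementation:

```c
#include <assert.h>

#define NUM_LINES 4

/* Minimal cache-line model for the trigger sketch. */
struct line {
    int valid;
    int holds_mapping; /* line stores newly created mapping data */
    int mmu_fetched;   /* flag bit */
};

/* Processor-side trigger: walk the cache and have the MMU fetch every
 * newly created mapping whose flag bit is not yet set. Returns how
 * many lines were fetched (and thereby unpinned). */
int trigger_mmu_fetch(struct line lines[NUM_LINES])
{
    int fetched = 0;
    for (int i = 0; i < NUM_LINES; i++) {
        if (lines[i].valid && lines[i].holds_mapping &&
            !lines[i].mmu_fetched) {
            lines[i].mmu_fetched = 1; /* MMU takes the mapping away */
            fetched++;
        }
    }
    return fetched;
}
```

After the trigger runs, no line holds un-fetched mapping data, so ordinary cache replacement can proceed on all lines.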
In the embodiments of the present application, the first mapping data is created in the cache and cannot be replaced from the cache into the memory before the memory management unit fetches it. This prevents the first mapping data from being replaced from the cache into the memory, avoids repeated memory accesses between the memory management unit and the memory, reduces memory power consumption, and improves system performance.
Fig. 4 is a schematic diagram of one possible implementation of the processing system shown in fig. 3. As shown in fig. 4, the processor is a multi-core processor, and the processor system includes a memory management unit 410, a memory 420, a processor 430, and a cache 440.
The memory management unit 410 may translate the virtual address in the first mapping data into a physical address of the data. The memory management unit 410 may be a memory management unit corresponding to the processor, or may be a memory management unit corresponding to the first external device.
The memory 420 is connected to the memory management unit 410, and the memory 420 can be accessed according to the physical address, such as reading data, writing data, and the like.
The processor 430 is used to process instructions. It may be a multi-core processor, such as a quad-core or octa-core processor, with each core having its own first-level cache L1 and second-level cache L2.
Cache 440 may be located in processor 430 and typically stores data that is frequently accessed by the CPU. A page table may be created in cache 440 for fast access to the page table. Cache 440 is a shared cache, i.e., level three cache L3, of the multi-core processor system. The application of the L3 cache can further reduce the memory delay and simultaneously improve the performance of the processor during large-data-volume calculation.
For the newly created page table, a flag bit may be set in its cache line. For example, the flag bit is "0" in the normal state while the page table has not been taken away, and is set to "1" after the MMU takes the page table away. While the flag bit of the cache line is not set, the line may not be replaced by the cache 440 hardware into the memory 420.
After the page table is created, the processor 430 may also actively trigger the memory management unit 410 to take away the created page table in time. For example, the memory management unit corresponding to the processor 430 may be actively triggered to take away the newly created page table in time, so as to avoid occupying the cache space for a long time. For another example, the memory management unit corresponding to the first external device may be actively triggered to take away the newly created page table in time.
As shown in fig. 4, the following steps are required for the memory management unit 410 to obtain the page table and complete a memory access instruction:
In step one, in response to an instruction from the application software to create a new page table, the processor 430 places the new page table in an address space cacheable by the CPU's cache, such as the third-level cache 440.
In step two, when the page table is newly created in the cache 440, a flag bit is set for the cache line of the newly created page table. Before memory management unit 410 fetches the page table, the flag bit is not set and the page table must not be replaced from cache 440 into memory 420.
In step three, the processor 430 actively triggers the memory management unit 410 to remove the newly created page table in time, such as the page table with the flag bit not set.
In step four, the memory management unit 410 fetches the new page table from the cache 440. The processor 430 reads and writes the memory 420 for a process, for example one that involves data covered by the new page table. The access passes through the memory management unit 410, which performs the virtual-to-physical address translation using the new page table.
In step five, the memory management unit 410 transmits the access instruction to the memory 420 and performs the read/write operation on the memory 420 according to the translated physical address. At this point, the transfer of the entire instruction is complete.
In this embodiment, a flag bit is set for the newly created page table in the cache. Before the memory management unit takes the page table away, the flag bit is not set and the page table cannot be replaced from the cache into the memory; the CPU can also actively trigger the memory management unit to take away in time any newly created page table whose flag bit is not set. This avoids repeated memory accesses between the memory management unit and the memory, helps reduce memory power consumption, and improves the performance of the CPU and peripherals in obtaining the page table.
System embodiments of the present application are described in detail above in conjunction with fig. 1-4, and method embodiments of the present application are described in detail below in conjunction with fig. 5-6. It is to be understood that the description of the method embodiments corresponds to the description of the system embodiments, and therefore reference may be made to the previous system embodiments for portions that are not described in detail.
Fig. 5 is a flowchart illustrating a control method according to an embodiment of the present application. The method of fig. 5 may be applied to the processing system described in any of the previous embodiments. The processing system may include: the memory management unit is used for managing the memory; a processor having an operating system running thereon, the processor being configured to: generating first mapping data, wherein the first mapping data is used for indicating a mapping relation between a first virtual address of the operating system and a first physical address of the memory; storing the first mapping data in a cache; and if the memory management unit does not take the first mapping data from the cache, controlling the cache so that the first mapping data is not replaced into the memory.
The method of fig. 5 includes steps S510 to S530, which are described in detail below.
In step S510, first mapping data indicating a mapping relationship between a first virtual address of an operating system and a first physical address of a memory is generated.
In step S520, the first mapping data is stored in the cache.
In step S530, if the memory management unit has not taken the first mapping data out of the cache, the cache is controlled such that the first mapping data is not replaced into the memory.
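Step S530 amounts to excluding not-yet-taken mapping data from victim selection during cache replacement. The following is a minimal sketch of that control, assuming a per-line "taken" flag as in the earlier description; the dictionary layout and function name are illustrative, not part of the patent.

```python
# Victim selection that never chooses a cache line whose mapping data the
# memory management unit has not yet taken (flag bit unset), so that data
# is never replaced into memory prematurely.

def pick_victim(lines):
    """Return the index of a cache line safe to replace into memory,
    or None if every line is still pinned (mapping data not yet taken)."""
    for i, line in enumerate(lines):
        if line["taken"]:          # only lines already fetched by the MMU
            return i
    return None

lines = [
    {"data": "first mapping data", "taken": False},  # S520: just stored
    {"data": "old mapping data", "taken": True},
]
assert pick_victim(lines) == 1        # the pinned line at index 0 is skipped

lines[0]["taken"] = True              # the MMU takes the first mapping data
assert pick_victim(lines) == 0        # now it may be replaced into memory
```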
Fig. 6 is a flow diagram of one possible implementation of the method of fig. 5. The memory management unit obtains the page table to complete a memory access instruction, and the method of fig. 6 may include steps S610 to S640, which are described in detail below.
In step S610, a page table is newly created in the cache, and a flag bit is set for the new page table.
In step S620, if the memory management unit does not take the page table out of the cache, the cache is controlled so that the page table is not replaced into the memory. The memory management unit may also be actively triggered to fetch, in time, a page table whose flag bit is not yet set.
In step S630, the memory management unit takes the page table away from the cache and uses it to translate the virtual address into a physical address.
In step S640, a memory access operation is performed on the main memory according to the physical address.
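Steps S610 to S640 can be condensed into the following sketch, which ends with an actual read/write on a toy main memory. The page size, memory layout, and helper names are assumptions made for illustration only.

```python
# S610: new page table in the cache, flag bit ("taken") initially unset.
# S620/S630: the MMU takes the page table away and the flag bit is set.
# S630/S640: the virtual address is translated and main memory is accessed.

PAGE = 256
memory = bytearray(4 * PAGE)                        # toy main memory: 4 frames

cache = {"pt": {"table": {0: 2}, "taken": False}}   # virtual page 0 -> frame 2

def mmu_take(entry):
    # S620/S630: once taken, the cache line is no longer pinned
    entry["taken"] = True
    return entry["table"]

def access(table, vaddr, value=None):
    # S630: virtual -> physical translation; S640: memory access
    vpage, off = divmod(vaddr, PAGE)
    paddr = table[vpage] * PAGE + off
    if value is None:
        return memory[paddr]
    memory[paddr] = value

table = mmu_take(cache["pt"])
assert cache["pt"]["taken"]                  # may now be replaced into memory
access(table, 5, 0xAB)                       # write via virtual address 5
assert memory[2 * PAGE + 5] == 0xAB          # landed in physical frame 2
assert access(table, 5) == 0xAB              # read back via the same mapping
```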
Fig. 7 is a schematic diagram of a chip provided in an embodiment of the present application. As shown in fig. 7, the chip 700 may include a processing system 710 as described in any of the previous paragraphs.
The control methods of the foregoing embodiments of the present application, including the methods of obtaining and managing page tables, can generally be embodied in the configuration of the relevant modules of the chip, for example in the register configuration of those modules.
Embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, where the computer program is used to execute the control method as described in any one of the foregoing descriptions.
It should be appreciated that the computer-readable storage medium referred to in the embodiments of the present application can be any available medium that can be read by a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital versatile disc (DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
It should be understood that, in the various embodiments of the present application, "first", "second", and the like are used to distinguish different objects and do not describe a specific order. Moreover, the sequence numbers of the above processes do not imply an order of execution; the order of execution should be determined by their functions and internal logic, and should not be construed as limiting the implementation of the embodiments of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In the several embodiments provided in this application, it should be understood that when a portion is referred to as being "connected" or "coupled" to another portion, it may be directly connected to the other portion, or electrically connected to it with another element interposed therebetween. In addition, the term "connected" covers both "physically connected" and "wirelessly connected". Furthermore, when a portion is said to "comprise" an element, this means that the portion may include other elements as well, without excluding them, unless otherwise stated.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A processing system, comprising:
the memory management unit is used for managing the memory;
a processor having an operating system running thereon, the processor being configured to:
generating first mapping data, wherein the first mapping data is used for indicating a mapping relation between a first virtual address of the operating system and a first physical address of the memory;
storing the first mapping data in a cache;
and if the memory management unit does not take the first mapping data from the cache, controlling the cache so that the first mapping data is not replaced into the memory.
2. The processing system according to claim 1, wherein the first mapping data is stored in a first cache line of the cache, and the first cache line is provided with identification information indicating whether the first mapping data is fetched by the memory management unit.
3. The processing system of claim 1, wherein the processor is further configured to:
and actively triggering the memory management unit to fetch the first mapping data from the cache.
4. The processing system of any of claims 1-3, wherein the first mapping data is a page table.
5. A control method, comprising:
generating first mapping data, wherein the first mapping data is used for indicating a mapping relation between a first virtual address of an operating system and a first physical address of a memory;
storing the first mapping data in a cache;
and if the memory management unit does not take the first mapping data out of the cache, controlling the cache so that the first mapping data is not replaced into the memory.
6. The method according to claim 5, wherein the first mapping data is stored in a first cache line of the cache, and the first cache line is provided with identification information indicating whether the first mapping data is fetched by the memory management unit.
7. The control method according to claim 5, characterized by further comprising:
and actively triggering the memory management unit to fetch the first mapping data from the cache.
8. The control method according to any one of claims 5 to 7, wherein the first mapping data is a page table.
9. A chip comprising the processing system of any one of claims 1-4.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon for executing the control method according to any one of claims 5-8.
CN202211048195.8A 2022-08-30 2022-08-30 Processing system, control method, chip, and computer-readable storage medium Pending CN115509959A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211048195.8A CN115509959A (en) 2022-08-30 2022-08-30 Processing system, control method, chip, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211048195.8A CN115509959A (en) 2022-08-30 2022-08-30 Processing system, control method, chip, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN115509959A true CN115509959A (en) 2022-12-23

Family

ID=84502746

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211048195.8A Pending CN115509959A (en) 2022-08-30 2022-08-30 Processing system, control method, chip, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN115509959A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116436587A (en) * 2023-06-14 2023-07-14 芯迈微半导体(上海)有限公司 Resource mapping method and device of control channel and resource demapping method and device
CN116436587B (en) * 2023-06-14 2023-09-05 芯迈微半导体(上海)有限公司 Resource mapping method and device of control channel and resource demapping method and device
CN116775512A (en) * 2023-08-22 2023-09-19 摩尔线程智能科技(北京)有限责任公司 Page table management device and method, graphics processor and electronic equipment
CN116775512B (en) * 2023-08-22 2023-12-05 摩尔线程智能科技(北京)有限责任公司 Page table management device and method, graphics processor and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination