CN117891755A - Address translation system and address translation method - Google Patents

Address translation system and address translation method

Info

Publication number
CN117891755A
CN117891755A CN202211220628.3A
Authority
CN
China
Prior art keywords
buffer
virtual
coupler
memory
instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211220628.3A
Other languages
Chinese (zh)
Inventor
吴国荣
陈羿逞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Realtek Semiconductor Corp
Original Assignee
Realtek Semiconductor Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Realtek Semiconductor Corp filed Critical Realtek Semiconductor Corp
Priority to CN202211220628.3A priority Critical patent/CN117891755A/en
Publication of CN117891755A publication Critical patent/CN117891755A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/10 Address translation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/14 Handling requests for interconnection or transfer
    • G06F 13/16 Handling requests for interconnection or transfer for access to memory bus
    • G06F 13/1668 Details of memory controller
    • G06F 13/1673 Details of memory controller using buffers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 Information transfer, e.g. on bus
    • G06F 13/40 Bus structure
    • G06F 13/4004 Coupling between buses

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

An address translation system and an address translation method are provided. The address translation system includes a storage device, a memory bus, and a processor. The processor is configured to execute the following steps according to a plurality of instructions of the storage device: generating a physical buffer in the storage device; generating a virtual buffer in a virtual capacity of the storage device by a virtual buffer algorithm; establishing a coupling relation between the physical buffer and the virtual buffer through a coupling algorithm by a coupler of the memory bus; receiving compressed data from a first device via the physical buffer; when a second device wants to read the virtual buffer, guiding the second device to the physical buffer for reading through the coupling relation by the coupler; transmitting the compressed data of the physical buffer to the coupler via the memory bus; and decompressing the compressed data into decompressed data by the coupler.

Description

Address translation system and address translation method
Technical Field
The present disclosure relates to a translation system and method, and more particularly, to an address translation system and an address translation method.
Background
Currently, data transfer between hardware elements located on a System on a Chip (SoC) requires a buffer disposed in memory to relay data between them, and other devices assist in compressing and decompressing the data so that two elements with different addresses can transfer data smoothly. However, two different elements are usually paired with two buffers, and both buffers consume resources in the memory; this wastes resources, as memory is consumed solely to transfer data.
Disclosure of Invention
This summary is intended to provide a simplified overview of the disclosure so that the reader will have a basic understanding of it. This summary is not an extensive overview of the disclosure and is intended neither to identify key or critical elements of the embodiments nor to delineate the scope of the present disclosure.
One aspect of the present disclosure relates to an address translation system. The address translation system includes a storage device, a memory bus, and a processor. The memory bus is used to couple a first device to a second device. The processor is configured to execute the following steps according to a plurality of instructions of the storage device: generating a physical buffer in the storage device; generating a virtual buffer in a virtual capacity of the storage device by a virtual buffer algorithm; establishing a coupling relation between the physical buffer and the virtual buffer through a coupling algorithm by a coupler of the memory bus; receiving compressed data from the first device via the physical buffer; when the second device wants to read the virtual buffer, guiding the second device to the physical buffer for reading through the coupling relation by the coupler; transmitting the compressed data of the physical buffer to the coupler via the memory bus; decompressing the compressed data into decompressed data by the coupler; and transmitting the decompressed data to the second device via the memory bus.
Another aspect of the present disclosure relates to an address translation method. The address translation method includes the following steps: generating a physical buffer in a storage device; generating a virtual buffer in a virtual capacity of the storage device by a virtual buffer algorithm; establishing a coupling relation between the physical buffer and the virtual buffer through a coupling algorithm by a coupler of a memory bus; receiving compressed data from a first device via the physical buffer; when a second device wants to read the virtual buffer, guiding the second device to the physical buffer for reading through the coupling relation by the coupler; transmitting the compressed data of the physical buffer to the coupler via the memory bus; decompressing the compressed data into decompressed data by the coupler; and transmitting the decompressed data to the second device via the memory bus.
Therefore, according to the technical content of the present disclosure, the address translation system and the address translation method of the embodiments can reduce the consumption of resources in the memory, thereby allowing two hardware elements with different addresses to transfer data.
The basic spirit and other objects of the present disclosure, as well as the technical means and embodiments it adopts, will be readily apparent to those of ordinary skill in the art from the following detailed description.
Drawings
The foregoing and other objects, features, advantages, and embodiments of the present disclosure will be more readily apparent from the following description taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a block diagram of an address translation system according to an embodiment of the present disclosure.
FIG. 2 is a block diagram of a processor of an address translation system according to an embodiment of the present disclosure.
FIG. 3 is a flow chart of an address translation method according to an embodiment of the present disclosure.
FIGS. 4 to 6 are flowcharts illustrating another address translation method according to an embodiment of the present disclosure.
In accordance with conventional practice, the various features and elements in the drawings are not drawn to scale, in order to best illustrate the specific features and elements related to the present disclosure. In addition, the same or similar elements/components are referred to by the same or similar reference numerals among the different figures.
Detailed Description
For a more complete and thorough description of the present disclosure, the following illustrative descriptions of embodiments and examples of the present disclosure are presented; this is not the only form of implementation or use of the specific embodiments of the present disclosure. The description covers the features of the embodiments and the method steps and sequences for constructing and operating the embodiments. However, other embodiments may be utilized to achieve the same or equivalent functions and sequences of steps.
Unless otherwise defined herein, scientific and technical terms used herein have the same meanings as those commonly understood and used by one of ordinary skill in the art to which this disclosure belongs. Furthermore, as used in this specification, a singular noun encompasses its plural form where this does not conflict with the context, and a plural noun likewise encompasses its singular form.
In addition, as used herein, "coupled" or "connected" may mean that two or more elements are in direct physical or electrical contact with each other, or in indirect physical or electrical contact with each other, or that two or more elements may be in operation or action with each other.
Certain terms are used throughout the description and claims to refer to particular components. However, it will be understood by those of ordinary skill in the art that like elements may be referred to by different names. The description and claims do not distinguish between components that differ in name but not in function. In the description and claims, the terms "comprise" and "include" are used in an open-ended fashion and thus should be interpreted to mean "including, but not limited to".
FIG. 1 is a block diagram of an address translation system according to an embodiment of the present disclosure. As shown in FIG. 1, the address translation system 100 includes a storage device 110, a memory bus 120, and a processor 130. In terms of connection, the storage device 110 is coupled to the memory bus 120, and the memory bus 120 is coupled to the processor 130.
To reduce the consumption of resources in memory so that two hardware elements with different addresses can transfer data, an address translation system 100 as shown in FIG. 1 is provided, and the details of its operation are described below.
In one embodiment, the coupler 121 of the memory bus 120 is used to couple a first device 900 to a second device 910. In one embodiment, the processor 130 is configured to execute the following steps according to a plurality of instructions of the storage device 110: generating a physical buffer 111 in the storage device 110; generating a virtual buffer 113 in the virtual capacity of the storage device 110 by a virtual buffer algorithm; establishing a coupling relation between the physical buffer 111 and the virtual buffer 113 through a coupling algorithm by the coupler 121 of the memory bus 120; receiving compressed data from the first device 900 via the physical buffer 111; when the second device 910 wants to read the virtual buffer 113, guiding the second device 910 to the physical buffer 111 for reading through the coupling relation by the coupler 121; transmitting the compressed data of the physical buffer 111 to the coupler 121 via the memory bus 120; decompressing the compressed data into decompressed data by the coupler 121; and transferring the decompressed data to the second device 910 via the memory bus 120.
For ease of understanding the above operation of the address translation system 100, please refer to FIG. 2, which is a block diagram illustrating a processor of the address translation system according to an embodiment of the present disclosure.
Referring also to FIG. 1, in operation, in one embodiment, the coupler 121 of the memory bus 120 is used to couple the first device 900 to the second device 910. For example, the coupler 121 of the memory bus 120 may be used to map, bind, or couple the first device 900 to the second device 910. In some embodiments, the first device 900 may have one address and the second device 910 another address, and the coupler 121 of the memory bus 120 may be configured to map, bind, or couple the address of the first device 900 to the address of the second device 910, but the present disclosure is not limited thereto.
Furthermore, in some embodiments, the first device 900 may be an in-house product (In-house IP), that is, an element manufactured by the company itself; for example, the first device 900 may be a video decoder. The second device 910 may be a third-party product (Vendor IP), that is, an element manufactured by another company; for example, the second device 910 may be a graphics processing unit (GPU), but the present disclosure is not limited thereto. In some embodiments, the first device 900 and the second device 910 may be located on a System on a Chip (SoC), but the present disclosure is not limited thereto. In some embodiments, the first device 900 may instead be a graphics processing unit (GPU) and the second device 910 a video decoder, in which case the memory bus 120 may have a design or function for decompressing data for the graphics processor, but the present disclosure is not limited thereto.
In some embodiments, the processor 130 is configured to execute the following steps according to a plurality of instructions of the storage device 110: generating a physical buffer 111 in the storage device 110. For example, the storage device 110 may be a double data rate synchronous dynamic random access memory (DDR SDRAM, hereinafter DDR) with a capacity of 4 GB, and the physical buffer 111 may be located within the 0-4 GB range of the storage device 110; in other words, the physical buffer 111 may use resources within 0-4 GB, but the present disclosure is not limited thereto.
Referring to FIG. 2, in an embodiment, the processor 130 may include a memory allocator for generating the physical buffer 111 and the virtual buffer 113 in the storage device 110, but is not limited thereto. In some embodiments, the memory allocator may be a third algorithm 135A (e.g., a first semiconductor memory management (RXX dvrMemory Manager) algorithm, as shown in FIG. 2), and the third algorithm 135A may be a software module developed by a first semiconductor company (RXX) on a Linux-based operating system, but the present disclosure is not limited thereto.
The processor 130 then generates the virtual buffer 113 in the virtual capacity of the storage device 110 by a virtual buffer algorithm. For example, the virtual buffer algorithm may be a first algorithm 131A (e.g., a Linux kernel algorithm, as shown in FIG. 2), the virtual capacity may be a virtual address space (fake address space), and the virtual buffer 113 may be a sparse memory block; in other words, the processor 130 may register the sparse memory block 113 with the first algorithm 131A (e.g., the Linux kernel algorithm), but the present disclosure is not limited thereto.
In some embodiments, the total capacity of the storage device 110 may be 4 GB, the physical buffer 111 may be a first buffer (Buffer 1), and the virtual buffer 113 may be a second buffer (Buffer 2) implemented as a sparse memory block. The processor 130 may generate the virtual address space (fake address space) in the storage device 110 through the virtual buffer algorithm; the first buffer (Buffer 1) may use the 0-4 GB region of the storage device 110, while the sparse memory block 113 may use a 4-5 GB region of the virtual address space that does not actually exist in the storage device 110. However, because the sparse memory block 113 has the characteristics of a kernel paging structure (kernel pages structure), it can be regarded by the second device 910 as actually existing physical memory, but the present disclosure is not limited thereto. The address translation system 100 therefore does not require additional hardware (e.g., additional memory capacity) or resources to couple the first device 900 to the second device 910.
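The sparse-memory arrangement above can be sketched in a few lines: a device with a fixed real capacity registers a "fake" address region beyond that capacity, and accesses into the fake region are recognized as virtual-buffer accesses. This is a minimal illustrative model; the class, its fields, and the 4-5 GB region are assumptions drawn from the example figures, not the patent's actual register layout.

```python
# Hypothetical sketch: a 4 GB device exposes an extra "fake" region
# (4 GB to 5 GB) that is never backed by real DRAM.
GB = 1 << 30

class SparseMemoryMap:
    """Models a virtual (sparse) buffer registered on top of a physical device."""

    def __init__(self, device_capacity):
        self.device_capacity = device_capacity   # real DRAM, e.g. 4 GB
        self.fake_regions = []                   # (start, end) pairs past real DRAM

    def register_fake_region(self, start, size):
        # The fake region must lie outside the real DRAM range.
        assert start >= self.device_capacity, "fake region must lie outside real DRAM"
        self.fake_regions.append((start, start + size))
        return (start, start + size)

    def is_fake(self, addr):
        return any(s <= addr < e for s, e in self.fake_regions)

mem = SparseMemoryMap(device_capacity=4 * GB)
buffer2 = mem.register_fake_region(start=4 * GB, size=1 * GB)  # the virtual Buffer 2
print(mem.is_fake(4 * GB + 100))  # -> True: an access into Buffer 2 hits the fake region
```

In this model, no bytes are ever allocated for the fake region, mirroring the point that the system needs no additional memory capacity for the virtual buffer.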
In some embodiments, the physical buffer 111 may be a first buffer (Buffer 1), the virtual buffer 113 may be a second buffer (Buffer 2), and the total number of first and second buffers may be 12, although the present disclosure is not limited thereto.
Furthermore, the coupling relation between the physical buffer 111 and the virtual buffer 113 is established by the coupler 121 of the memory bus 120 through a coupling algorithm. For example, the memory bus 120 may include a monitor switch 121, and the monitor switch 121 may couple a first address of the physical buffer 111 to a second address of the virtual buffer 113 through the coupling algorithm, but the present disclosure is not limited thereto.
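The coupling step can be pictured as the monitor switch recording a mapping from the virtual buffer's address range to the physical buffer's base, then redirecting any read that lands in that range. This is a sketch under assumed names and example addresses, not the patent's bus implementation.

```python
# Hypothetical sketch of the coupling step: the monitor switch records a
# mapping from the virtual buffer's address range to the physical buffer's
# base address, and redirects reads accordingly.
class MonitorSwitch:
    def __init__(self):
        self.couplings = []   # (virt_start, virt_end, phys_base)

    def couple(self, virt_start, virt_size, phys_base):
        self.couplings.append((virt_start, virt_start + virt_size, phys_base))

    def translate(self, read_addr):
        """Redirect a read aimed at the virtual buffer to the physical buffer."""
        for v_start, v_end, p_base in self.couplings:
            if v_start <= read_addr < v_end:
                return p_base + (read_addr - v_start)   # Buffer 1 read address
        return read_addr                                # not coupled: pass through

switch = MonitorSwitch()
switch.couple(virt_start=0x1_0000_0000, virt_size=0x1000, phys_base=0x2000)
print(hex(switch.translate(0x1_0000_0010)))  # -> 0x2010, inside Buffer 1
```

A read outside the coupled range passes through unchanged, so devices that never touch the virtual buffer are unaffected.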
The compressed data is then received from the first device 900 via the physical buffer 111. For example, the first device 900 may be an in-house product (In-house IP) that outputs compressed data, and the compressed data may be written into the physical buffer 111, but the present disclosure is not limited thereto.
When the second device 910 wants to read the virtual buffer 113, the coupler 121 guides the second device 910 to the physical buffer 111 for reading through the coupling relation. For example, the second device 910 may be a third-party product (Vendor IP), and the memory bus 120 may include a monitor switch 121; when the Vendor IP 910 wants to read the virtual buffer 113, the monitor switch 121 may direct the Vendor IP 910 to read the physical buffer 111 through the coupling relation, but the present disclosure is not limited thereto.
The compressed data of the physical buffer 111 is then transferred to the coupler 121 via the memory bus 120, the compressed data is decompressed into decompressed data by the coupler 121, and the decompressed data is transferred to the second device 910 via the memory bus 120. For example, the physical buffer 111 can transmit the compressed data to the coupler 121; after the coupler 121 receives the compressed data, it can decompress the compressed data into decompressed data, and the memory bus 120 then transmits the decompressed data to the second device 910, but the present disclosure is not limited thereto.
In some embodiments, the processor 130 is further configured to perform the following steps according to the plurality of instructions of the storage device 110: transmitting a generate-buffer instruction by the first device 900; and generating the physical buffer 111 in the storage device 110 by a buffer algorithm according to the generate-buffer instruction. For example, the physical buffer 111 may be a first buffer (Buffer 1), and the generate-buffer instruction may be a buffer output request (request output buffer) instruction, but the present disclosure is not limited thereto.
In some embodiments, the processor 130 is further configured to perform the following steps according to the plurality of instructions of the storage device 110: transmitting a return-buffer instruction to the first device 900; and transmitting a generate-virtual-buffer instruction by the second device 910. For example, the return-buffer instruction may be a return first buffer (return Buffer 1) instruction, and the generate-virtual-buffer instruction may be a buffer input request (request input buffer) instruction, but the present disclosure is not limited thereto.
In some embodiments, the processor 130 is further configured to perform the following steps according to the plurality of instructions of the storage device 110: generating the virtual buffer 113 in the virtual capacity of the storage device 110 by the virtual buffer algorithm according to the generate-virtual-buffer instruction. For example, the processor 130 may generate the virtual buffer 113 (e.g., the second buffer (Buffer 2)) in the virtual capacity (e.g., the fake address space) of the storage device 110 according to the generate-virtual-buffer instruction (e.g., a buffer input request (request input buffer) instruction) by the virtual buffer algorithm (e.g., a Linux kernel algorithm), but the present disclosure is not limited thereto.
In some embodiments, the processor 130 is further configured to perform the following steps according to the plurality of instructions of the storage device 110: outputting a coupling instruction to the coupler 121 of the memory bus 120; and coupling, by the coupler 121, the first address of the physical buffer 111 to the second address of the virtual buffer 113 through the coupling algorithm according to the coupling instruction. For example, the coupling instruction may be an allocate virtual second buffer (allocate fake Buffer 2) instruction, and the coupling instruction is written into the coupler 121 (e.g., the monitor switch 121) of the memory bus 120 to map, bind, or couple the second address of the virtual buffer 113 (e.g., the second buffer (Buffer 2)) to the first address of the physical buffer 111 (e.g., the first buffer (Buffer 1)), but the present disclosure is not limited thereto.
In some embodiments, in the coupling algorithm, the monitor switch 121 may monitor the address (e.g., a read address) issued by the second device 910 in real time. When the read address falls between the start address and the end address of the virtual buffer 113 (e.g., the second buffer (Buffer 2)), a first conversion (e.g., zero-order address translation (Level-0 address translator)) is performed to convert the read address into an address corresponding to the first buffer (Buffer 1) (e.g., a first buffer read address (Buffer1_read_address)).
Then, a second conversion (e.g., first-order translation (Level-1 translator)) is performed on the first buffer read address (Buffer1_read_address) to obtain a data offset (e.g., a first buffer offset address (Buffer1_offset_address)) and a first cache (e.g., a header cache) corresponding to the first buffer (Buffer 1). In addition, the first cache (e.g., the header cache) may generate a second cache (e.g., a decompression data cache).
After receiving the first buffer offset address (Buffer1_offset_address) and the decompression data cache, the interface of the storage device 110 (e.g., a DDR interface) drives the bus monitor wrapper 121 to decompress the compressed data of the first device 900 (e.g., the In-house IP) into decompressed data according to the decompression data cache. The monitor wrapper 121 then transmits the decompressed data to the second device 910 (e.g., the Vendor IP), but the present disclosure is not limited thereto.
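The two-stage translation described above, followed by decompression of the physical data, can be sketched as follows. The address ranges are assumed example values, and zlib stands in for the patent's unspecified compression format; only the shape of the pipeline (Level-0 translation, Level-1 offset, then decompress) follows the text.

```python
# Hypothetical sketch of the two-stage translation: level 0 maps a Buffer 2
# read address to a Buffer 1 address; level 1 derives the data offset (the
# patent also derives a header cache, omitted here); decompression then runs
# on the physical buffer's contents.
import zlib

BUF2_START, BUF2_END = 0x4000, 0x5000    # virtual Buffer 2 range (assumed)
BUF1_BASE = 0x0000                        # physical Buffer 1 base (assumed)

def level0_translate(read_addr):
    """Level-0 address translator: Buffer 2 address -> Buffer1_read_address."""
    if BUF2_START <= read_addr < BUF2_END:
        return BUF1_BASE + (read_addr - BUF2_START)
    raise ValueError("address not in virtual buffer")

def level1_translate(buf1_read_addr):
    """Level-1 translator: Buffer1_read_address -> Buffer1_offset_address."""
    return buf1_read_addr - BUF1_BASE

buffer1 = zlib.compress(b"frame data" * 32)           # compressed data from device 1
offset = level1_translate(level0_translate(0x4000))
decompressed = zlib.decompress(buffer1[offset:])      # the coupler decompresses
print(decompressed[:10])                              # -> b'frame data'
```

The second device only ever issues the Buffer 2 address; both translations and the decompression happen inside the bus path.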
In some embodiments, the processor 130 is further configured to perform the following steps according to the plurality of instructions of the storage device 110: transmitting a return-virtual-buffer instruction to the second device 910. For example, the return-virtual-buffer instruction may be a return second buffer (return Buffer 2) instruction, but the present disclosure is not limited thereto.
In some embodiments, the processor 130 is further configured to perform the following steps according to the plurality of instructions of the storage device 110: transmitting, by the first device 900, the compressed data to the physical buffer 111 according to the return-buffer instruction. For example, the first device 900 may transmit the compressed data to the first buffer (Buffer 1) according to a return Buffer 1 instruction, but the present disclosure is not limited thereto.
In some embodiments, the processor 130 is further configured to perform the following steps according to the plurality of instructions of the storage device 110: transmitting a read-data instruction to the memory bus 120 by the second device 910 according to the return-virtual-buffer instruction; and confirming, by the coupler 121, whether a virtual read-data instruction is received from the second device 910. For example, the read-data instruction may be a read data from second buffer (read data from Buffer 2) instruction, and the virtual read-data instruction may be a faked read trigger instruction; the second device 910 may send the read data from Buffer 2 instruction to the memory bus 120 according to the return Buffer 2 instruction, and the coupler 121 may then recognize whether the read data from Buffer 2 instruction output from the second device 910 is a faked read trigger instruction, but the present disclosure is not limited thereto.
In some embodiments, the processor 130 is further configured to perform the following steps according to the plurality of instructions of the storage device 110: upon confirming receipt of the virtual read-data instruction from the second device 910, transmitting, by the coupler 121, a read-buffer instruction to the physical buffer 111; and transmitting the compressed data to the coupler 121 by the physical buffer 111 according to the read-buffer instruction. For example, the read-buffer instruction may be a request read from first buffer (request read from Buffer 1) instruction: the coupler 121 may recognize the read data from Buffer 2 instruction output from the second device 910 as a faked read trigger instruction and then transmit the request read from Buffer 1 instruction to the physical buffer 111, and the physical buffer 111 transmits the compressed data to the coupler 121 according to that instruction. The coupler 121 may then decompress the compressed data into decompressed data, and the memory bus 120 may transmit the decompressed data to the second device 910, but the present disclosure is not limited thereto.
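The faked-read-trigger handling in this step can be sketched as a coupler that classifies a read aimed at Buffer 2 and replays it against Buffer 1. The dataclass layout and instruction representation are assumptions for illustration, not the patent's actual bus protocol.

```python
# Hypothetical sketch: the coupler treats a "read data from Buffer 2" request
# as a faked read trigger and re-issues it as a read from Buffer 1.
from dataclasses import dataclass

@dataclass
class ReadRequest:
    target: str      # which buffer the device believes it is reading
    offset: int

class Coupler:
    def __init__(self, buffer1):
        self.buffer1 = buffer1

    def handle(self, req):
        if req.target == "Buffer2":               # faked read trigger detected
            req = ReadRequest("Buffer1", req.offset)
        return self.buffer1[req.offset]           # request read from Buffer 1

coupler = Coupler(buffer1=bytes(range(16)))
print(coupler.handle(ReadRequest("Buffer2", 3)))  # -> 3, served from Buffer 1
```

From the second device's point of view nothing changes: it asked for Buffer 2 and received data, unaware that the bytes came from the physical buffer.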
In some embodiments, the virtual buffer algorithm comprises a Linux algorithm, and the coupler 121 comprises a decompressor. For example, the virtual buffer algorithm may be an algorithm related to the first algorithm 131A (e.g., the Linux kernel, as shown in FIG. 2), the decompressor may be any general hardware or software decompressor, and the coupler 121 may include a bus monitor wrapper and a decompressor, but the present disclosure is not limited thereto.
Referring to FIG. 2, in some embodiments, the processor 130 includes a plurality of algorithms. For example, the processor 130 may include a first algorithm 131A, a second algorithm 133A, and a third algorithm 135A. The first algorithm 131A may be a Linux kernel pages algorithm, the second algorithm 133A may be a sparse memory algorithm, and the third algorithm 135A may be a first semiconductor memory management (RXX dvrMemory Manager) algorithm; the latter may be a software module developed by a first semiconductor company (RXX) on a Linux-based operating system. In other words, the first semiconductor memory management algorithm may be a memory management scheme developed by the first semiconductor company that uses the Linux kernel's sparse memory technology as its underlying layer.
In some embodiments, the storage device 110 may include the first algorithm 131A, the second algorithm 133A, and the third algorithm 135A (not shown). For example, the first algorithm 131A may be a Linux kernel pages algorithm, the second algorithm 133A may be a sparse memory algorithm, and the third algorithm 135A may be a first semiconductor memory management (RXX dvrMemory Manager) algorithm, but the present disclosure is not limited thereto.
In some embodiments, the virtual buffer 113 is used to reduce the size of the physical buffer 111. For example, the size of the virtual buffer 113 (e.g., the second buffer (Buffer 2)) may be width × height, while the size of the physical buffer 111 (e.g., the first buffer (Buffer 1)) in the storage device 110 may be reduced in proportion to width × height; the reduction ratio may be 50% and may be related to the compression rate of the first device 900 (e.g., the In-house IP 900), but the present disclosure is not limited thereto.
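Illustrative arithmetic for the size reduction above: with a 50% reduction ratio tied to the producer's compression rate, a width × height virtual buffer needs only half that many bytes of physical backing. The frame dimensions and pixel size are example figures, not taken from the patent.

```python
# Example arithmetic (assumed figures): a full-size frame buffer versus the
# smaller physical allocation backing the virtual buffer.
width, height, bytes_per_pixel = 1920, 1080, 4
virtual_size = width * height * bytes_per_pixel        # what device 2 believes it reads
reduction_ratio = 0.5                                   # e.g. 50%, tied to compression rate
physical_size = int(virtual_size * reduction_ratio)     # actual Buffer 1 allocation
print(virtual_size, physical_size)                      # -> 8294400 4147200
```

The memory saved is exactly the difference between the two sizes, which is the resource-consumption reduction the disclosure claims.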
In some embodiments, a buffer in a conventional memory has a first volume, while the physical buffer 111 (e.g., Buffer 1) of the present storage device 110 has a second volume smaller than the first volume; that is, the virtual buffer technique of the present disclosure can effectively reduce the buffer size. In some embodiments, the second device 910 (e.g., the Vendor IP 910) issues read/write instructions against the virtual buffer 113 (e.g., the second buffer (Buffer 2)), so the second device 910 believes it is reading the full buffer size rather than the compressed size in the physical buffer 111 (e.g., the first buffer (Buffer 1)), but the present disclosure is not limited thereto.
In some embodiments, even without a dedicated mechanism for handling the virtual buffer 113 (e.g., the second buffer (Buffer 2)), the bus monitor wrapper 121 may provide the first-order address translation (level-1 address translation) function. Further, the buffer size and buffer address seen by the first device 900 (e.g., the In-house IP 900) and the second device 910 (e.g., the Vendor IP 910) may be the same; that is, the size of the physical buffer 111 (e.g., the first buffer (Buffer 1)) in the storage device 110 is not limited by the size of the original virtual buffer 113 (e.g., the second buffer (Buffer 2)).
FIG. 3 is a flow chart of an address translation method according to an embodiment of the present disclosure. For ease of understanding the address translation method 300 of FIG. 3, please refer to FIGS. 1 and 3 together. The address translation method 300 of FIG. 3 includes the following steps:
Step 301: generating a physical buffer 111 in the storage device 110;
Step 302: generating a virtual buffer 113 in the virtual capacity of the storage device 110 through a virtual buffer algorithm;
Step 303: establishing a coupling relationship between the physical buffer 111 and the virtual buffer 113 through a coupling algorithm by the coupler 121 of the memory bus 120;
Step 304: receiving compressed data from the first device 900 via the physical buffer 111;
Step 305: when the second device 910 wants to read the virtual buffer 113, guiding the second device 910 to the physical buffer 111 for reading through the coupling relationship by the coupler 121;
Step 306: transmitting the compressed data of the physical buffer 111 to the coupler 121 via the memory bus 120;
Step 307: decompressing the compressed data into decompressed data by the coupler 121; and
Step 308: transmitting the decompressed data to the second device 910 via the memory bus 120.
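The flow of steps 301 to 308 can be sketched as follows (a minimal illustration using Python dictionaries as stand-ins for the storage device and the coupler's mapping, and zlib as a stand-in codec; all names are illustrative and not part of the present disclosure):

```python
import zlib

storage = {}          # stand-in for storage device 110
coupling = {}         # stand-in for coupler 121's virtual-to-physical mapping

def create_physical_buffer():                     # step 301
    storage["buffer1"] = b""

def couple(virtual_addr, physical_key):           # step 303
    coupling[virtual_addr] = physical_key

def write_compressed(data):                       # step 304 (first device)
    storage["buffer1"] = zlib.compress(data)

def read_virtual(virtual_addr):                   # steps 305-308
    physical_key = coupling[virtual_addr]         # redirect to Buffer 1
    compressed = storage[physical_key]            # fetch compressed data
    return zlib.decompress(compressed)            # decompress for the reader

create_physical_buffer()
couple(virtual_addr=4 * (1 << 30), physical_key="buffer1")
write_compressed(b"frame data" * 100)
out = read_virtual(4 * (1 << 30))
assert out == b"frame data" * 100                 # second device sees raw data
```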
In one embodiment, referring to step 301, the processor 130 generates the physical buffer 111 in the storage device 110. For example, the storage device 110 may be a double data rate synchronous dynamic random access memory (Double Data Rate Synchronous Dynamic Random Access Memory, DDR SDRAM, hereinafter referred to as DDR) with a capacity of 4 GB, and the physical buffer 111 may be located in the 0-4 GB range of the storage device 110; in other words, the physical buffer 111 may use the 0-4 GB resources, but the present invention is not limited thereto.
Referring to fig. 2, in an embodiment, the processor 130 may include a memory allocator (memory allocator) for generating the physical buffer 111 and the virtual buffer 113 in the storage device 110, but is not limited thereto. In some embodiments, the memory allocator may be implemented by a third algorithm 135A (e.g., a first semiconductor memory management (RXX dvrMemory Manager) algorithm, as shown in fig. 2), and the third algorithm 135A may be a software module developed by a first semiconductor company based on the Linux operating system, but the present disclosure is not limited thereto.
In one embodiment, referring to step 302, the processor 130 generates the virtual buffer 113 in the virtual capacity of the storage device 110 through a virtual buffer algorithm. For example, the virtual buffer algorithm may be a first algorithm 131A (e.g., the Linux kernel algorithm, as shown in fig. 2), the virtual capacity may be a virtual address space (fake address space), and the virtual buffer 113 may be a sparse memory (Sparse Memory) block 113; in other words, the processor 130 may register the sparse memory block 113 with the first algorithm 131A (e.g., the Linux kernel algorithm), but the present invention is not limited thereto.
In some embodiments, the total capacity of the storage device 110 may be 4 GB, the physical buffer 111 may be the first buffer (Buffer 1), and the virtual buffer 113 may be the second buffer (Buffer 2), which is the sparse memory (Sparse Memory) block 113. The processor 130 may generate the virtual address space (fake address space) for the storage device 110 through the virtual buffer algorithm: the first buffer (Buffer 1) may use the 0-4 GB area of the storage device 110, while the sparse memory block 113 may use the 4-5 GB area of the virtual address space, an area that does not actually exist in the storage device 110. However, because the sparse memory block 113 has the kernel page structure (kernel pages structure) characteristic, the sparse memory block 113 may be regarded by the second device 910 as actually existing physical memory, but the present invention is not limited thereto. The address translation system 100 therefore does not require additional hardware (e.g., additional memory capacity) or resources to couple the first device 900 to the second device 910.
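The address layout of this example can be sketched as follows (assuming the 4 GB physical capacity and the 4-5 GB sparse-memory window given above; the helper names are illustrative, not part of the present disclosure):

```python
GB = 1 << 30

# Layout described in this example: the storage device physically spans
# 0-4 GB; the sparse-memory block (Buffer 2) is registered in the fake
# address space at 4-5 GB and is not backed by any DRAM.
PHYSICAL_RANGE = range(0, 4 * GB)
SPARSE_RANGE = range(4 * GB, 5 * GB)

def is_backed_by_dram(addr):
    """True if the address maps to real capacity of the storage device."""
    return addr in PHYSICAL_RANGE

def is_sparse_virtual(addr):
    """True if the address falls in the registered sparse-memory window."""
    return addr in SPARSE_RANGE

assert is_backed_by_dram(1 * GB)            # inside Buffer 1's area
assert not is_backed_by_dram(4 * GB + 100)  # no DRAM behind Buffer 2
assert is_sparse_virtual(4 * GB + 100)      # but Buffer 2 covers it
```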
In some embodiments, the physical Buffer 111 may be a first Buffer (Buffer 1), the virtual Buffer 113 may be a second Buffer (Buffer 2), and the total number of the first Buffer (Buffer 1) and the second Buffer (Buffer 2) may be 12, although the present invention is not limited thereto.
In one embodiment, referring to step 303, the coupling relationship between the physical buffer 111 and the virtual buffer 113 is established by the coupler 121 of the memory bus 120 through a coupling algorithm. For example, the memory bus 120 may include the bus monitor wrapper (Monitor Wrapper) 121, and the bus monitor wrapper 121 may couple a first address of the physical buffer 111 to a second address of the virtual buffer 113 through the coupling algorithm, but the present invention is not limited thereto.
In one embodiment, referring to step 304, compressed data is received from the first device 900 via the physical buffer 111. For example, the first device 900 may be an in-house product (In-house IP) 900, which may output compressed data (compressed data), and the compressed data may be written into the physical buffer 111, but the present invention is not limited thereto.
In one embodiment, referring to step 305, when the second device 910 wants to read the virtual buffer 113, the coupler 121 guides the second device 910 to the physical buffer 111 for reading through the coupling relationship. For example, the second device 910 may be an outsourced product (Vendor IP) 910, and the memory bus 120 may include the bus monitor wrapper (Monitor Wrapper) 121; when the outsourced product 910 wants to read the virtual buffer 113, the bus monitor wrapper 121 may direct the outsourced product 910 to read the physical buffer 111 through the coupling relationship, but the present application is not limited thereto.
In one embodiment, referring to step 306, the physical buffer 111 transfers the compressed data to the coupler 121 via the memory bus 120. In one embodiment, referring to step 307, the compressed data is decompressed into decompressed data by the coupler 121. In one embodiment, referring to step 308, the decompressed data is transferred to the second device 910 via the memory bus 120. For example, after the coupler 121 receives the compressed data from the physical buffer 111, the coupler 121 decompresses the compressed data into decompressed data (decompressed data), and the memory bus 120 then transmits the decompressed data to the second device 910, but the present invention is not limited thereto.
Fig. 4 to 6 are flowcharts illustrating another address translation method according to an embodiment of the present disclosure. For easy understanding of the address translation method 400 of fig. 4 to 6, please refer to fig. 1, 2, and 4 to 6 together. The address translation method 400 of fig. 4 to 6 includes the steps of:
Step 401: transmitting a generate buffer instruction by the first device 900;
Step 402: generating a physical buffer 111 in the storage device 110 through a buffer algorithm according to the generate buffer instruction;
Step 403: transmitting a return buffer instruction to the first device 900;
Step 404: transmitting a generate virtual buffer instruction by the second device 910;
Step 405: generating a virtual buffer 113 in the virtual capacity of the storage device 110 through a virtual buffer algorithm according to the generate virtual buffer instruction;
Step 406: outputting a coupling instruction to the coupler 121 of the memory bus 120;
Step 407: coupling the first address of the physical buffer 111 to the second address of the virtual buffer 113 by the coupler 121 through a coupling algorithm according to the coupling instruction;
Step 408: transmitting a return virtual buffer instruction to the second device 910;
Step 409: transmitting the compressed data to the physical buffer 111 by the first device 900 according to the return buffer instruction;
Step 410: transmitting a read data instruction to the memory bus 120 by the second device 910 according to the return virtual buffer instruction;
Step 411: confirming, by the coupler 121, whether a virtual read data instruction is received from the second device 910;
Step 412: upon confirming receipt of the virtual read data instruction from the second device 910, transmitting a read buffer instruction to the physical buffer 111 by the coupler 121;
Step 413: transmitting the compressed data to the coupler 121 by the physical buffer 111 according to the read buffer instruction;
Step 414: decompressing the compressed data into decompressed data by the coupler 121; and
Step 415: transmitting the decompressed data to the second device 910 via the memory bus 120.
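The handshake of steps 401 to 415 can be sketched as a message trace as follows (the instruction names follow the description above; the trace mechanism itself is illustrative and not part of the present disclosure):

```python
# Sketch of the step 401-415 handshake as a list of (source, instruction)
# messages; only the instruction flow is modeled, not the data payloads.

trace = []

def send(source, instruction):
    trace.append((source, instruction))

send("first device",  "request output buffer")      # step 401
send("processor",     "return Buffer1")             # step 403
send("second device", "request input buffer")       # step 404
send("processor",     "allocate fake Buffer2")      # step 406 (to coupler)
send("processor",     "return Buffer2")             # step 408
send("second device", "read data from Buffer2")     # step 410
send("coupler",       "request read from Buffer1")  # step 412

# The coupler never forwards the virtual read to the fake Buffer 2;
# it always re-issues the read against the physical Buffer 1.
assert ("coupler", "request read from Buffer1") in trace
assert all(not (src == "coupler" and "Buffer2" in ins) for src, ins in trace)
```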
In one embodiment, referring to steps 301, 401 and 402, the step of generating the physical buffer 111 in the storage device 110 by the processor 130 includes: transmitting a generate buffer instruction by the first device 900; and generating, by the processor 130, the physical buffer 111 in the storage device 110 through a buffer algorithm according to the generate buffer instruction. For example, the physical buffer 111 may be the first buffer (Buffer 1), and the generate buffer instruction may be a buffer output request (request output buffer) instruction, but the present invention is not limited thereto.
In one embodiment, referring to steps 301, 403 and 404, the step of generating the physical buffer 111 in the storage device 110 by the processor 130 further includes: transmitting, by the processor 130, a return buffer instruction to the first device 900; and transmitting a generate virtual buffer instruction by the second device 910. For example, the return buffer instruction may be a return first buffer (return Buffer 1) instruction, and the generate virtual buffer instruction may be a buffer input request (request input buffer) instruction, but the present invention is not limited thereto.
In one embodiment, referring to steps 302 and 405, the step of generating the virtual buffer 113 in the virtual capacity of the storage device 110 by the processor 130 through the virtual buffer algorithm includes: generating, by the processor 130, the virtual buffer 113 in the virtual capacity of the storage device 110 through the virtual buffer algorithm according to the generate virtual buffer instruction. For example, the processor 130 may generate the virtual buffer 113 (e.g., the second buffer (Buffer 2)) in the virtual capacity (e.g., the fake address space) of the storage device 110 through the virtual buffer algorithm (e.g., the Linux kernel algorithm) according to the generate virtual buffer instruction (e.g., the buffer input request (request input buffer) instruction), but the present application is not limited thereto.
In one embodiment, referring to steps 303, 406 and 407, the step of establishing the coupling relationship between the physical buffer 111 and the virtual buffer 113 through the coupling algorithm includes: outputting, by the processor 130, a coupling instruction to the coupler 121 of the memory bus 120; and coupling the first address of the physical buffer 111 to the second address of the virtual buffer 113 by the coupler 121 through the coupling algorithm according to the coupling instruction. For example, the coupling instruction may be an allocate virtual second buffer (allocate fake Buffer 2) instruction, and the coupling instruction is written into the coupler 121 (e.g., the bus monitor wrapper (Monitor Wrapper) 121) of the memory bus 120 to map (map), bind (bind) or couple (couple) the second address of the virtual buffer 113 (e.g., the second buffer (Buffer 2)) to the first address of the physical buffer 111 (e.g., the first buffer (Buffer 1)), but the present invention is not limited thereto.
In some embodiments, the coupling algorithm may be implemented by the bus monitor wrapper (Monitor Wrapper) 121, which monitors the address (e.g., the read address) issued by the second device 910 in real time (real-time). When the read address falls between the start address and the end address of the second buffer (Buffer 2), i.e., the virtual buffer 113, the read address first undergoes a zero-order translation (Level-0 address translator) that converts it into an address corresponding to the first buffer (Buffer 1) (e.g., a first buffer read address (Buffer1_read_address)).
Then, the first buffer read address (Buffer1_read_address) undergoes a second translation (e.g., a first-order translation (Level-1 translator)) to obtain a data offset corresponding to the first buffer (Buffer 1) (e.g., a first buffer offset address (Buffer1_offset_address)) and a first cache (e.g., a header cache). In addition, the first cache (e.g., the header cache) may generate a second cache (e.g., a decompressed data cache).
After the first buffer offset address (Buffer1_offset_address) and the decompressed data cache are received, the interface of the storage device 110 (e.g., the memory interface (DDR interface)) drives the bus monitor wrapper (Monitor Wrapper) 121 to decompress the compressed data of the first device 900 (e.g., the in-house product (In-house IP) 900) into decompressed data (decompressed data) according to the decompressed data cache. The bus monitor wrapper 121 then transmits the decompressed data to the second device 910 (e.g., the outsourced product (Vendor IP) 910), but the present application is not limited thereto.
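The two-level translation described in the preceding paragraphs can be sketched as follows (a simplified illustration that omits the header cache lookup; the window bounds and base address are hypothetical values, not from the present disclosure):

```python
GB = 1 << 30

# Hypothetical layout: Buffer 2 is a 4 KB virtual window starting at
# 4 GB; Buffer 1 is the physical buffer it is coupled to, based at 1 GB.
BUFFER2_START = 4 * GB
BUFFER2_END = 4 * GB + 4096
BUFFER1_BASE = 1 * GB

def level0_translate(read_addr):
    """Level-0: map a Buffer 2 read address onto Buffer 1, or pass through."""
    if not (BUFFER2_START <= read_addr < BUFFER2_END):
        return None                        # not a virtual-buffer access
    return BUFFER1_BASE + (read_addr - BUFFER2_START)  # Buffer1_read_address

def level1_translate(buffer1_read_address):
    """Level-1: derive the data offset within Buffer 1."""
    return buffer1_read_address - BUFFER1_BASE         # Buffer1_offset_address

addr = level0_translate(4 * GB + 0x40)     # read inside the virtual window
assert addr == BUFFER1_BASE + 0x40         # redirected into Buffer 1
assert level1_translate(addr) == 0x40      # offset used by the DDR interface
assert level0_translate(2 * GB) is None    # ordinary access is untouched
```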
In one embodiment, referring to steps 303 and 408, the step of establishing the coupling relationship between the physical buffer 111 and the virtual buffer 113 through the coupling algorithm further includes: transmitting, by the processor 130, a return virtual buffer instruction to the second device 910. For example, the return virtual buffer instruction may be a return second buffer (return Buffer 2) instruction, but the present invention is not limited thereto.
In one embodiment, referring to steps 304 and 409, the step of receiving the compressed data from the first device 900 via the physical buffer 111 includes: transmitting the compressed data to the physical buffer 111 by the first device 900 according to the return buffer instruction. For example, the first device 900 may transmit the compressed data to the first buffer (Buffer 1) according to the return first buffer (return Buffer 1) instruction, but the present invention is not limited thereto.
In one embodiment, referring to steps 305, 410 and 411, when the second device 910 wants to read the virtual buffer 113, the step of guiding the second device 910 to the physical buffer 111 for reading through the coupling relationship by the memory bus 120 includes: transmitting a read data instruction to the memory bus 120 by the second device 910 according to the return virtual buffer instruction; and confirming, by the coupler 121, whether a virtual read data instruction is received from the second device 910.
For example, the read data instruction may be a read data from the second buffer (read data from Buffer 2) instruction, and the virtual read data instruction may be a virtual read trigger (faked read trigger) instruction. The second device 910 may transmit the read data from the second buffer (read data from Buffer 2) instruction to the memory bus 120 according to the return second buffer (return Buffer 2) instruction, and the coupler 121 may further confirm (or identify) whether the read data from the second buffer (read data from Buffer 2) instruction output from the second device 910 is a virtual read trigger (faked read trigger) instruction, but the present invention is not limited thereto.
In one embodiment, referring to steps 306, 412 and 413, the step of transferring the compressed data to the memory bus 120 via the physical buffer 111 includes: upon confirming receipt of the virtual read data instruction from the second device 910, transmitting, by the coupler 121, a read buffer instruction to the physical buffer 111; and transmitting the compressed data to the coupler 121 by the physical buffer 111 according to the read buffer instruction.
For example, the read buffer instruction may be a read request from the first buffer (request read from Buffer 1) instruction. When the coupler 121 confirms (or identifies) that the read data from the second buffer (read data from Buffer 2) instruction output from the second device 910 is a virtual read trigger (faked read trigger) instruction, the coupler 121 transmits the read request from the first buffer (request read from Buffer 1) instruction to the physical buffer 111, and the physical buffer 111 transmits the compressed data to the coupler 121 according to this instruction.
In one embodiment, referring to steps 307 and 414, the compressed data is decompressed into decompressed data by the coupler 121. In one embodiment, referring to steps 308 and 415, the decompressed data is transferred to the second device 910 via the memory bus 120. For example, after the coupler 121 receives the compressed data, the coupler 121 decompresses the compressed data into decompressed data (decompressed data), and the memory bus 120 then transmits the decompressed data to the second device 910, but the present invention is not limited thereto.
In some embodiments, the virtual buffer algorithm includes a Linux algorithm, and the coupler 121 includes a decompressor. For example, the virtual buffer algorithm may be related to the first algorithm 131A (e.g., the Linux kernel algorithm, as shown in fig. 2), the decompressor may be any generic hardware or software decompressor (decompressor), and the memory bus 120 may include a bus monitor wrapper (Monitor Wrapper) and a decompressor, but the present application is not limited thereto.
As can be seen from the above embodiments, the present invention has the following advantages. The address translation system and the address translation method in the embodiments of the present invention can reduce memory resource consumption, enabling two hardware devices with different address views to exchange data.
Although the embodiments of the present invention have been described in detail in the foregoing description, the invention is not limited thereto, but rather the scope of the invention is to be determined by the appended claims, since various changes and modifications may be made therein by those skilled in the art without departing from the spirit and principles of the invention.
[ symbolic description ]
100: address translation system
110: storage device
111: buffer zone
113: virtual buffer
120: memory bus
121: coupler (bus monitoring converter)
130. 130A: processor and method for controlling the same
900: first device (homemade product)
910: second device (outsourcing products)
131A: first algorithm (Algorithm)
133A: second algorithm (Algorithm)
135A: third algorithm (Algorithm)
300. 400: address translation method
301 to 308: Steps
401 to 415: Steps

Claims (10)

1. An address translation system, comprising:
a storage device;
a memory bus, the memory bus comprising: a coupler to couple the first device to the second device; and
a processor for executing the following steps according to the instructions of the storage device:
generating a physical buffer in the storage device;
generating a virtual buffer in the virtual capacity of the storage device by a virtual buffer algorithm;
establishing a coupling relation between the physical buffer area and the virtual buffer area through a coupling algorithm by the coupler of the memory bus;
receiving compressed data from the first device via the physical buffer;
when the second device wants to read the virtual buffer, the second device is guided to the physical buffer for reading through the coupling relation by the coupler;
Transmitting the compressed data of the physical buffer to the coupler via the memory bus;
decompressing the compressed data into decompressed data by the coupler; and
the decompressed data is transferred to the second device via the memory bus.
2. The address translation system of claim 1, wherein said processor is further configured to execute the following steps according to the instructions of the memory device:
generating a buffer instruction by the first device; and
generating the physical buffer in the storage device by a buffer algorithm according to the generate buffer instruction.
3. The address translation system of claim 2, wherein the processor is further configured to execute the following steps according to the instructions of the memory device:
transmitting a return buffer instruction to the first device; and
transmitting a generate virtual buffer instruction by the second device.
4. The address translation system of claim 3, wherein said processor is further configured to execute the following steps according to the instructions of the memory device:
generating the virtual buffer in the virtual capacity of the storage device according to the generate virtual buffer instruction.
5. The address translation system of claim 4, wherein said processor is further configured to execute the following steps according to the instructions of the memory device:
outputting a coupling instruction to the coupler of the memory bus; and
the coupler is used for coupling the first address of the physical buffer area with the second address of the virtual buffer area through the coupling algorithm according to the coupling instruction.
6. The address translation system of claim 5, wherein said processor is further configured to execute the following steps according to the instructions of the memory device:
and transmitting a return virtual buffer instruction to the second device.
7. The address translation system of claim 6, wherein said processor is further configured to execute the following steps according to the instructions of the memory device:
the compressed data is transferred to the physical buffer by the first device according to the return buffer instruction.
8. The address translation system of claim 7, wherein said processor is further configured to execute the following steps according to the instructions of the memory device:
transmitting a read data command to the memory bus by the second device according to the return virtual buffer command; and
confirming, by the coupler, whether a virtual read data instruction is received from the second device.
9. The address translation system of claim 8, wherein said processor is further configured to execute the following steps according to the instructions of the memory device:
confirming, by the coupler, that the virtual read data instruction is received from the second device, so as to transmit a read buffer instruction to the physical buffer; and
transmitting the compressed data to the coupler by the physical buffer according to the read buffer instruction.
10. An address translation method, comprising:
generating a physical buffer in the storage device;
generating a virtual buffer in the virtual capacity of the storage device by a virtual buffer algorithm;
establishing a coupling relation between the physical buffer area and the virtual buffer area through a coupling algorithm by a coupler of a memory bus;
receiving compressed data from the first device via the physical buffer;
when the second device wants to read the virtual buffer, the second device is guided to the physical buffer for reading through the coupling relation by the coupler;
transmitting the compressed data of the physical buffer to the coupler via the memory bus;
decompressing the compressed data into decompressed data by the coupler; and
The decompressed data is transferred to the second device via the memory bus.
CN202211220628.3A 2022-10-06 2022-10-06 Address translation system and address translation method Pending CN117891755A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211220628.3A CN117891755A (en) 2022-10-06 2022-10-06 Address translation system and address translation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211220628.3A CN117891755A (en) 2022-10-06 2022-10-06 Address translation system and address translation method

Publications (1)

Publication Number Publication Date
CN117891755A true CN117891755A (en) 2024-04-16

Family

ID=90641773

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211220628.3A Pending CN117891755A (en) 2022-10-06 2022-10-06 Address translation system and address translation method

Country Status (1)

Country Link
CN (1) CN117891755A (en)

Similar Documents

Publication Publication Date Title
US6263413B1 (en) Memory integrated circuit and main memory and graphics memory systems applying the above
US5974471A (en) Computer system having distributed compression and decompression logic for compressed data movement
US8001294B2 (en) Methods and apparatus for providing a compressed network in a multi-processing system
JP6069031B2 (en) Computer and memory management method
WO2023197507A1 (en) Video data processing method, system, and apparatus, and computer readable storage medium
US11625344B2 (en) Transmission control circuit, data transmission system using different data formats, and operating method thereof
CN117891755A (en) Address translation system and address translation method
TWI813455B (en) Address conversion system and address conversion method
US7864359B2 (en) Data compression and decompression unit
CN111724295B (en) Collaborative access method and system for external memory and collaborative access architecture
JPH10329371A (en) Printer memory boost
KR100489719B1 (en) A specialized memory device
CN104750634A (en) Reading method, system and interconnecting device controller
KR100591243B1 (en) On-chip serialized peripheral bus system and operating method thereof
US20050204081A1 (en) [data compression/decompression device and system applying the same]
KR102561316B1 (en) Electronic device and computing system including same
US20240319878A1 (en) Electronic device and computing system including same
WO2024066547A1 (en) Data compression method, apparatus, computing device, and storage system
JP3146197B2 (en) Data transfer device and storage device
JP7206485B2 (en) Information processing system, semiconductor integrated circuit and information processing method
US7053900B2 (en) Personal computer system and core logic chip applied to same
TWM628892U (en) Cloud desktop display system
CN117850661A (en) Method, apparatus and computer program product for processing compressed data
CN112468576A (en) Method and system for sharing cloud memory
CN116743523A (en) Data transmission method, device, electronic equipment, readable storage medium and product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination