CN117707993A - Memory resource access method, device, equipment and medium - Google Patents

Memory resource access method, device, equipment and medium

Info

Publication number
CN117707993A
Authority
CN
China
Prior art keywords
memory
physical address
target
local
resource
Prior art date
Legal status
Pending
Application number
CN202311696050.3A
Other languages
Chinese (zh)
Inventor
吴海乔
吴伟雄
张晨
黄韬
Current Assignee
Network Communication and Security Zijinshan Laboratory
Original Assignee
Network Communication and Security Zijinshan Laboratory
Priority date
Filing date
Publication date
Application filed by Network Communication and Security Zijinshan Laboratory filed Critical Network Communication and Security Zijinshan Laboratory
Priority to CN202311696050.3A
Publication of CN117707993A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application discloses a memory resource access method, device, equipment and medium, relating to the field of computer technology and applied to a serverless data center architecture in which the memory resources of a computing pool comprise a local memory and a cache and the memory resources of the memory pools form a pooled memory; the local memory is accessed only by resource users located in the local computing pool, while the pooled memory can be accessed by resource users in all computing pools. The method comprises: determining a target physical address corresponding to the physical address to be accessed by a resource user; if the target physical address is located in the local memory, returning the first target data at the target physical address in the local memory to the resource user; if the target physical address is located in the pooled memory, storing the second target data at the target hardware address corresponding to the target physical address into the cache corresponding to the resource user, so that the resource user obtains the second target data from the cache. Access to memory resources in a serverless data center architecture is thereby realized.

Description

Memory resource access method, device, equipment and medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a medium for accessing memory resources.
Background
In existing data centers, servers are mostly used as the deployment unit, and each server contains the resources needed to run user programs, such as processors, memory and SSDs (Solid State Drives); this architecture suffers from limited resource scalability, low resource utilization and poor fault tolerance. With the rapid development of network technology, an architecture that decouples compute and storage resources can effectively overcome these limitations, and there are many ways to implement such decoupling. The IPU-based serverless data center (Serverless Datacenter) architecture takes the IPU (IO Processing Unit) as its core and builds heterogeneous compute and storage resource pools such as central processing units (Central Processing Unit, CPU), graphics processors (Graphics Processing Unit, GPU), field programmable gate arrays (Field Programmable Gate Array, FPGA), random access memory (Random Access Memory, RAM), solid state drives and mechanical hard disks (Hard Disk Drive, HDD). By connecting all the hardware resource pools (i.e., the heterogeneous compute and storage pools) through a network, interconnection and intercommunication among the resource pools are realized, giving the architecture high resource utilization, high scalability and high fault tolerance.
Memory resources in a serverless data center architecture are characterized by very large capacity and a long distance from the computing resources. Most existing memory access logic is designed around a single server and cannot be applied directly to a serverless data center architecture.
In summary, how to achieve access to memory resources in a serverless data center architecture is a problem to be solved in the art.
Disclosure of Invention
Accordingly, the present invention is directed to a memory resource access method, apparatus, device and medium that enable access to memory resources in a serverless data center architecture. The specific scheme is as follows:
in a first aspect, the present application discloses a memory resource access method applied to a serverless data center architecture, where the memory resources of a computing pool in the serverless data center architecture include a local memory and a cache, and the memory resources of the memory pools form a pooled memory; the local memory is accessed only by resource users located in the local computing pool, while the pooled memory is accessible to resource users in all computing pools. The method includes:
determining a preset physical address space corresponding to a resource user, and determining a target physical address corresponding to a physical address to be accessed of the resource user based on the preset physical address space;
if the target physical address is located in the local memory, returning first target data located on the target physical address in the local memory to the resource user;
if the target physical address is located in the pooled memory, searching a target hardware address corresponding to the target physical address, and storing second target data on the target hardware address into the cache corresponding to the resource user, so that the resource user can acquire the second target data from the cache.
Optionally, before determining the preset physical address space corresponding to the resource user, the method further includes:
constructing a preset physical address space containing a local memory and a pooled memory for each computing pool; the local memory is located at the front end of the preset physical address space, and the pooled memory is located at the rear end of the preset physical address space.
Optionally, the memory resource access method further includes:
when the total capacity of the pooled memory changes, updating the preset physical address space; the total capacity of the pooled memory is the total capacity of all memory pools.
Optionally, the searching for the target hardware address corresponding to the target physical address includes:
searching a target local physical address corresponding to the target physical address in a first preset address mapping table;
and determining a target hardware address corresponding to the target local physical address in a second preset address mapping table.
Optionally, the searching the target local physical address corresponding to the target physical address in the first preset address mapping table includes:
searching a target local physical address corresponding to the target physical address in a first preset address mapping table through an IO processing unit of a computing pool corresponding to the resource user in the serverless data center architecture;
correspondingly, the determining, in the second preset address mapping table, the target hardware address corresponding to the target local physical address includes:
determining a target hardware address corresponding to the target local physical address in a second preset address mapping table through an IO processing unit of a memory pool in the serverless data center architecture.
Optionally, before the target local physical address corresponding to the target physical address is found in the first preset address mapping table, the method further includes:
constructing, by the control center of the serverless data center architecture, a mapping relationship between the local physical addresses of the memory pools and the physical addresses of the pooled memory to obtain the first preset address mapping table.
Optionally, before determining the target hardware address corresponding to the target local physical address in the second preset address mapping table, the method further includes:
constructing, through an IO processing unit of a memory pool in the serverless data center architecture, a mapping relationship between the local physical addresses and the hardware addresses of the memory pool to obtain the second preset address mapping table.
In a second aspect, the present application discloses a memory resource access device applied to a serverless data center architecture, where the memory resources of a computing pool in the serverless data center architecture include a local memory and a cache, and the memory resources of the memory pools form a pooled memory; the local memory is accessed only by resource users located in the local computing pool, while the pooled memory is accessible to resource users in all computing pools. The device includes:
the physical address determining module is used for determining a preset physical address space corresponding to a resource user and determining a target physical address corresponding to a physical address to be accessed of the resource user based on the preset physical address space;
the first data access module is used for returning first target data positioned on the target physical address in the local memory to the resource user if the target physical address is positioned in the local memory;
and the second data access module is used for searching out a target hardware address corresponding to the target physical address if the target physical address is positioned in the pooled memory, and storing second target data on the target hardware address into the cache corresponding to the resource user so that the resource user can acquire the second target data from the cache.
In a third aspect, the present application discloses an electronic device comprising:
a memory for storing a computer program;
and a processor for executing the computer program to implement the steps of the memory resource access method disclosed above.
In a fourth aspect, the present application discloses a computer-readable storage medium for storing a computer program; wherein the computer program when executed by a processor implements the steps of the previously disclosed memory resource access method.
The beneficial effects of the application are as follows: the method is applied to a serverless data center architecture in which the memory resources of a computing pool comprise a local memory and a cache and the memory resources of the memory pools form a pooled memory, where the local memory is accessed only by resource users located in the local computing pool and the pooled memory can be accessed by resource users in all computing pools. The method comprises the following steps: determining a preset physical address space corresponding to a resource user, and determining a target physical address corresponding to the physical address to be accessed by the resource user based on the preset physical address space; if the target physical address is located in the local memory, returning the first target data at the target physical address in the local memory to the resource user; if the target physical address is located in the pooled memory, searching for the target hardware address corresponding to the target physical address and storing the second target data at the target hardware address into the cache corresponding to the resource user, so that the resource user can acquire the second target data from the cache. In this way, the serverless data center architecture contains computing pool memory resources and memory pool memory resources: the memory resources of the computing pools comprise local memories and caches, and the memory pool memory resources form the pooled memory, where the local memories can only be accessed by local computing tasks, each cache is shared by all computing units of its computing pool, and the memory resources stored in the pooled memory can be accessed by physical address, thereby realizing access to memory resources in the serverless data center architecture.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings may be obtained according to the provided drawings without inventive effort to a person skilled in the art.
FIG. 1 is a flow chart of a memory resource access method disclosed in the present application;
FIG. 2 is a schematic diagram of memory resources of a server-less data center architecture according to one embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a specific physical address space construction disclosed herein;
FIG. 4 is a schematic diagram illustrating a specific memory resource access disclosed herein;
fig. 5 is a schematic structural diagram of a memory resource access device disclosed in the present application;
fig. 6 is a block diagram of an electronic device disclosed in the present application.
Detailed Description
The following description of the technical solutions in the embodiments of the present application will be made clearly and completely with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In existing data centers, servers are used as the deployment unit, and each server contains the resources needed to run user programs, such as processors, memory and SSDs; this architecture suffers from limited resource scalability, low resource utilization and poor fault tolerance. With the rapid development of network technology, an architecture that decouples compute and storage resources can effectively overcome these limitations, and there are many ways to implement such decoupling. The IPU-based serverless data center architecture takes the IPU as its core, builds heterogeneous compute and storage resource pools such as central processing units, graphics processors, field programmable gate arrays, random access memory, solid state drives and mechanical hard disks, and connects all the hardware resource pools through a network so that the resource pools are interconnected and intercommunicating, giving the architecture high resource utilization, high scalability and high fault tolerance.
Memory resources in a serverless data center architecture are characterized by very large capacity and a long distance from the computing resources. Most existing memory access logic is designed around a single server and cannot be applied directly to a serverless data center architecture.
Therefore, the present application correspondingly provides a memory resource access scheme that realizes access to memory resources in a serverless data center architecture.
Referring to fig. 1, an embodiment of the present application discloses a memory resource access method applied to a serverless data center architecture, where the memory resources of a computing pool in the serverless data center architecture include a local memory and a cache, and the memory resources of the memory pools form a pooled memory; the local memory is accessed only by resource users located in the local computing pool, while the pooled memory is accessible to resource users in all computing pools. The method includes:
step S11: and determining a preset physical address space corresponding to the resource using party, and determining a target physical address corresponding to the physical address to be accessed of the resource using party based on the preset physical address space.
A resource user may be understood as a computing task that needs to use resources; the cache may preferably be an L4 cache (level-four cache).
For example, fig. 2 is a schematic diagram of the memory resources of a serverless data center architecture. The serverless data center architecture includes computing pools, memory pools and a control center, and its memory resources are divided into computing pool memory resources and memory pool memory resources: the memory resources of a computing pool include a local memory and an L4 cache (level-four cache), and the memory pool memory resources form the pooled memory. Specifically (a minimal data-model sketch follows the list below):
1) The local memory is accessible only to local computing tasks, that is, it is used only by the computing tasks running in the local CPU pool. It is mainly used to store read-only kernel code and various kernel tables, such as the interrupt vector table and the process state record table, and its size is fixed;
2) The L4 cache is a region of cache carved out of the computing pool's memory and shared by all computing units of the local computing pool; data at hardware addresses in the pooled memory can be fetched into the L4 cache in advance through a prefetching strategy;
3) The pooled memory can be used by all computing tasks in the serverless data center architecture and is mainly used to store user-related data, such as user code, user stacks and memory-mapped files; the pooled memory can be expanded as required.
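To make the above division of memory resources concrete, the following is a minimal Python sketch of the data model; the class and field names (ComputePool, MemoryPool, pooled_capacity and so on) are illustrative assumptions rather than identifiers from the application, and the sketch only reflects the three categories described above.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ComputePool:
    """Computing pool: a fixed-size local memory plus an L4 cache shared by its computing units."""
    pool_id: int
    local_mem_size: int                                        # fixed; used only by local computing tasks
    l4_cache: Dict[int, bytes] = field(default_factory=dict)   # address -> data fetched from pooled memory

@dataclass
class MemoryPool:
    """Memory pool: memory units whose capacities make up its local physical address space."""
    pool_id: int
    ip_address: str
    unit_capacities: List[int]                                 # e.g. [a1, a2, a3]

    @property
    def capacity(self) -> int:
        return sum(self.unit_capacities)

def pooled_capacity(memory_pools: List[MemoryPool]) -> int:
    """Pooled memory is the concatenation of all memory pools and is visible to every computing pool."""
    return sum(p.capacity for p in memory_pools)
```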
In this embodiment, before determining the preset physical address space corresponding to the resource user, the method further includes: constructing, for each computing pool, a preset physical address space containing the local memory and the pooled memory, with the local memory located at the front end of the preset physical address space and the pooled memory located at the back end. For example, in the specific physical address space construction diagram shown in fig. 3, a preset physical address space is constructed for each computing pool: a preset physical address space 1 is constructed for computing pool 1 (i.e., CPU pool 1), and local memory 1 and the pooled memory of computing pool 1 are written into preset physical address space 1, with local memory 1 at the front end and the pooled memory at the back end. In this way, when the physical address to be accessed by a resource user is received, the preset physical address space corresponding to that resource user is first determined: if the resource user is located in computing pool 1, preset physical address space 1 corresponds to it; if the resource user is located in computing pool 2, preset physical address space 2 corresponds to it. The target physical address corresponding to the physical address to be accessed is then determined within that preset physical address space. It should be noted that the size of the address space used for the local memory is the same in every preset physical address space, so the starting position of the pooled memory is the same in every preset physical address space, that is, the offset of the pooled memory is consistent.
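A minimal sketch of how such a preset physical address space could be constructed and consulted is given below, assuming the layout just described (local memory of a fixed, identical size at the front end, pooled memory behind it so that its offset is the same in every computing pool); the function names and the example sizes are assumptions for illustration.

```python
LOCAL_MEM_SIZE = 1 << 30    # assumed fixed local-memory size, identical in every computing pool

def build_preset_address_space(local_mem_size: int, pooled_mem_size: int) -> dict:
    """Local memory occupies [0, local_mem_size); pooled memory is appended behind it,
    so the pooled-memory offset is the same in every preset physical address space."""
    return {
        "local_range": (0, local_mem_size),
        "pooled_range": (local_mem_size, local_mem_size + pooled_mem_size),
    }

def resolve(space: dict, target_phys_addr: int):
    """Decide whether a target physical address falls in local memory or pooled memory."""
    lo, hi = space["local_range"]
    if lo <= target_phys_addr < hi:
        return ("local", target_phys_addr)            # step S12: serve directly from local memory
    lo, hi = space["pooled_range"]
    if lo <= target_phys_addr < hi:
        return ("pooled", target_phys_addr - lo)      # step S13: offset into the pooled-memory space
    raise ValueError("address outside the preset physical address space")

# One preset physical address space per computing pool; here the pooled memory is 4 GiB.
space_1 = build_preset_address_space(LOCAL_MEM_SIZE, 4 << 30)
assert resolve(space_1, 123)[0] == "local"
assert resolve(space_1, LOCAL_MEM_SIZE + 42) == ("pooled", 42)
```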
In this embodiment, the method further includes: when the total capacity of the pooled memory changes, updating the preset physical address space, the total capacity of the pooled memory being the total capacity of all memory pools. It should be understood that, because the total capacity of the pooled memory is the total capacity of all memory pools, the pooled memory physical addresses are formed by concatenating the local physical addresses of all memory pools with offsets given by the memory pool capacities. For example, assuming there are 3 memory pools with capacities A, B and C, the pooled memory physical addresses of the three pools are 0 to A-1, A to A+B-1 and A+B to A+B+C-1; further assuming memory pool A contains 3 memory units with capacities a1, a2 and a3, the local physical addresses within 0 to A-1 are 0 to a1-1, a1 to a1+a2-1 and a1+a2 to a1+a2+a3-1. It can be understood that when the memory pools change, the total capacity of the pooled memory also changes, so the preset physical address space needs to be updated; this ensures that memory accesses are always performed against the latest version of the preset physical address space and that subsequent memory accesses can proceed correctly.
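The offset arithmetic described above can be illustrated with the following sketch, which reproduces the worked example (memory pools with capacities A, B and C, and memory units a1, a2 and a3 within one pool) using assumed function names and example numbers.

```python
def pooled_address_ranges(capacities):
    """Concatenate capacities into consecutive address ranges.
    For capacities [A, B, C] this yields [0, A), [A, A+B), [A+B, A+B+C)."""
    ranges, offset = [], 0
    for cap in capacities:
        ranges.append((offset, offset + cap))
        offset += cap
    return ranges, offset                                  # offset == total capacity

# Worked example from the text: three memory pools with capacities A=100, B=200, C=300.
pool_ranges, total = pooled_address_ranges([100, 200, 300])
assert pool_ranges == [(0, 100), (100, 300), (300, 600)] and total == 600

# Inside memory pool A, units a1, a2, a3 partition its local physical addresses the same way.
unit_ranges, _ = pooled_address_ranges([40, 30, 30])
assert unit_ranges == [(0, 40), (40, 70), (70, 100)]

# When a memory pool is added or removed, the total changes and every preset physical
# address space must be updated; only the pooled (back-end) part of each space moves.
```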
It should be noted that, because the size of the local memory is fixed while the size of the pooled memory may change, the local memory is placed at the front end of the preset physical address space and the pooled memory at the back end when the space is constructed. As a result, even if the pooled memory changes, the local memory is not affected, and only a small number of addresses need to be changed when the preset physical address space is updated.
Step S12: if the target physical address is located in the local memory, returning the first target data located on the target physical address in the local memory to the resource user.
For example, in a specific memory resource access schematic diagram shown in fig. 4, if it is determined that the target physical address corresponding to the physical address to be accessed sent by the resource user is located in the local memory, the first target data on the target physical address is directly read, and the first target data is returned to the resource user.
Step S13: if the target physical address is located in the pooled memory, searching a target hardware address corresponding to the target physical address, and storing second target data on the target hardware address into the cache corresponding to the resource user, so that the resource user can acquire the second target data from the cache.
In this embodiment, the searching for the target hardware address corresponding to the target physical address includes: searching a target local physical address corresponding to the target physical address in a first preset address mapping table; and determining a target hardware address corresponding to the target local physical address in a second preset address mapping table.
In this embodiment, the searching for the target local physical address corresponding to the target physical address in the first preset address mapping table includes: searching for the target local physical address corresponding to the target physical address in the first preset address mapping table through an IO processing unit of the computing pool corresponding to the resource user in the serverless data center architecture.
In this embodiment, the determining, in the second preset address mapping table, of the target hardware address corresponding to the target local physical address includes: determining the target hardware address corresponding to the target local physical address in the second preset address mapping table through an IO processing unit of a memory pool in the serverless data center architecture.
As shown in fig. 4, if the target physical address is located in the pooled memory, the target hardware address corresponding to the target physical address is looked up as follows (a sketch of this two-level lookup follows the list):
1) The memory access request containing the target physical address is sent to the IPU (i.e., the IO processing unit) of the computing pool corresponding to the resource user; for example, if the resource user is located in computing pool 1, the request is sent to the IPU of computing pool 1. The IPU of the computing pool then searches the first preset address mapping table for the target local physical address corresponding to the target physical address, and also obtains the IP address of the memory pool corresponding to that target local physical address. It can be understood that if the target local physical address and the memory pool IP address are not found in the first preset address mapping table, the access fails;
2) The IPU of the computing pool sends the target local physical address to the IPU of the memory pool identified by that IP address, so that the IPU of the memory pool determines the target hardware address corresponding to the target local physical address in the second preset address mapping table. It can be understood that if no corresponding target hardware address is found in the second preset address mapping table, the access fails. Because the lookups in the first and second preset address mapping tables are performed by the IO processing units, the low-latency forwarding of hardware devices can be fully exploited, reducing the time required for memory access;
3) The IPU of the memory pool reads the second target data at the target hardware address and stores it in the cache corresponding to the resource user, so that the resource user can acquire the second target data from the cache.
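The three steps above can be summarized in the following sketch of the two-level lookup; the table layouts, example addresses and function names are assumptions for illustration only, not the actual data structures of the application.

```python
# First preset address mapping table (held by each computing pool's IPU):
#   pooled-memory physical address range -> (memory pool IP, local physical address base)
FIRST_TABLE = {
    (0, 100):   ("10.0.0.1", 0),
    (100, 300): ("10.0.0.2", 0),
}

# Second preset address mapping table (held by each memory pool's IPU):
#   local physical address range -> hardware address base of a memory unit
SECOND_TABLES = {
    "10.0.0.1": {(0, 100): 0x8000_0000},
    "10.0.0.2": {(0, 200): 0x4000_0000},
}

def access_pooled(pooled_addr: int, l4_cache: dict, read_hw) -> bytes:
    # 1) Computing-pool IPU: pooled physical address -> memory pool IP + local physical address.
    for (lo, hi), (ip, local_base) in FIRST_TABLE.items():
        if lo <= pooled_addr < hi:
            local_addr = local_base + (pooled_addr - lo)
            break
    else:
        raise LookupError("access failed: address not in first preset address mapping table")

    # 2) Memory-pool IPU: local physical address -> hardware address.
    for (lo, hi), hw_base in SECOND_TABLES[ip].items():
        if lo <= local_addr < hi:
            hw_addr = hw_base + (local_addr - lo)
            break
    else:
        raise LookupError("access failed: address not in second preset address mapping table")

    # 3) Memory-pool IPU reads the data and places it in the resource user's L4 cache.
    data = read_hw(ip, hw_addr)
    l4_cache[pooled_addr] = data
    return data
```

A caller would supply a read_hw(ip, hw_addr) callback that performs the actual read from the memory pool over the network; the resource user then obtains the data from its L4 cache.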
In this embodiment, before the target local physical address corresponding to the target physical address is looked up in the first preset address mapping table, the method further includes: constructing, by the control center of the serverless data center architecture, a mapping relationship between the local physical addresses of the memory pools and the physical addresses of the pooled memory to obtain the first preset address mapping table. It can be understood that the first preset address mapping table also records the IP address of the memory pool corresponding to each local physical address.
The control center of the serverless data center architecture establishes the first preset address mapping table, namely a mapping table from 'pooled memory physical addresses' to 'local physical addresses', and synchronizes the complete first preset address mapping table to the IPU of each computing pool.
In this embodiment, before determining the target hardware address corresponding to the target local physical address in the second preset address mapping table, the method further includes: constructing, through the IO processing unit of a memory pool in the serverless data center architecture, a mapping relationship between the local physical addresses and the hardware addresses of that memory pool to obtain the second preset address mapping table. The IO processing unit of each memory pool establishes its second preset address mapping table, namely a mapping table from 'local physical addresses' to 'hardware addresses', and reports information to the control center, including the IP address of the memory pool, the capacity of its memory resources, and so on.
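Under the division of labor described above (each memory pool's IO processing unit builds the second preset address mapping table and reports its IP address and capacity to the control center, which then builds the first preset address mapping table and synchronizes it to every computing pool's IPU), a minimal sketch of the table construction could look as follows; all names and example values are illustrative assumptions.

```python
def build_second_table(unit_capacities, unit_hw_bases):
    """Memory-pool IPU: map each local physical address range to the hardware base of a memory unit."""
    table, offset = {}, 0
    for cap, hw_base in zip(unit_capacities, unit_hw_bases):
        table[(offset, offset + cap)] = hw_base
        offset += cap
    return table, offset                      # offset == this pool's total capacity, reported upward

def build_first_table(reports):
    """Control center: concatenate reported pool capacities into the pooled-memory address space.
    `reports` is a list of (pool_ip, capacity) tuples gathered from the memory-pool IPUs."""
    table, offset = {}, 0
    for ip, capacity in reports:
        table[(offset, offset + capacity)] = (ip, 0)   # local physical addresses start at 0 in each pool
        offset += capacity
    return table                              # synchronized to the IPU of every computing pool

# Example: two memory pools register, then the control center assembles the first table.
t2_a, cap_a = build_second_table([40, 60], [0x8000_0000, 0x9000_0000])
t2_b, cap_b = build_second_table([200], [0x4000_0000])
first_table = build_first_table([("10.0.0.1", cap_a), ("10.0.0.2", cap_b)])
```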
The beneficial effects of the application are as follows: the method is applied to a serverless data center architecture in which the memory resources of a computing pool comprise a local memory and a cache and the memory resources of the memory pools form a pooled memory. The method comprises the following steps: determining a preset physical address space corresponding to a resource user, and determining a target physical address corresponding to the physical address to be accessed by the resource user based on the preset physical address space; if the target physical address is located in the local memory, returning the first target data at the target physical address in the local memory to the resource user; if the target physical address is located in the pooled memory, searching for the target hardware address corresponding to the target physical address and storing the second target data at the target hardware address into the cache corresponding to the resource user, so that the resource user can acquire the second target data from the cache. In this way, the serverless data center architecture contains computing pool memory resources and memory pool memory resources: the memory resources of the computing pools comprise local memories and caches, and the memory pool memory resources form the pooled memory, where the local memories can only be accessed by local computing tasks, each cache is shared by all computing units of its computing pool, and the memory resources stored in the pooled memory can be accessed by physical address, thereby realizing access to memory resources in the serverless data center architecture.
Referring to fig. 5, an embodiment of the present application discloses a memory resource access device applied to a serverless data center architecture, where the memory resources of a computing pool in the serverless data center architecture include a local memory and a cache, and the memory resources of the memory pools form a pooled memory; the local memory is accessed only by resource users located in the local computing pool, while the pooled memory is accessible to resource users in all computing pools. The device includes:
a physical address determining module 11, configured to determine a preset physical address space corresponding to a resource user, and determine a target physical address corresponding to a physical address to be accessed of the resource user based on the preset physical address space;
a first data access module 12, configured to return, if the target physical address is located in the local memory, first target data located in the target physical address in the local memory to the resource user;
and the second data access module 13 is configured to find the target hardware address corresponding to the target physical address if the target physical address is located in the pooled memory, and store the second target data at the target hardware address in the cache corresponding to the resource user, so that the resource user obtains the second target data from the cache.
The beneficial effects of the memory resource access device are the same as those of the memory resource access method described above.
The specific implementation manner of the memory resource access device disclosed in the embodiment of the present application is the same as the specific steps of the memory resource access method described above, and will not be described in detail herein.
Further, an embodiment of the present application also provides an electronic device. Fig. 6 is a block diagram of an electronic device 20 according to an exemplary embodiment, and the contents of the figure should not be construed as limiting the scope of use of the present application in any way.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Specifically, the electronic device may include: at least one processor 21, at least one memory 22, a power supply 23, a communication interface 24, an input/output interface 25, and a communication bus 26. The memory 22 is configured to store a computer program, which is loaded and executed by the processor 21 to implement the relevant steps of the memory resource access method performed by the electronic device disclosed in any of the foregoing embodiments.
In this embodiment, the power supply 23 is configured to provide an operating voltage for each hardware device on the electronic device; the communication interface 24 can create a data transmission channel between the electronic device and an external device, and the communication protocol to be followed is any communication protocol applicable to the technical solution of the present application, which is not specifically limited herein; the input/output interface 25 is used for acquiring external input data or outputting external output data, and the specific interface type thereof may be selected according to the specific application requirement, which is not limited herein.
The processor 21 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 21 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array) or a PLA (Programmable Logic Array). The processor 21 may also include a main processor and a coprocessor: the main processor is the processor for processing data in the awake state, also called CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 21 may integrate a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 21 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 22 may be a carrier for storing resources, such as a read-only memory, a random access memory, a magnetic disk, or an optical disk, and the resources stored thereon include an operating system 221, a computer program 222, and data 223, and the storage may be temporary storage or permanent storage.
The operating system 221 is used to manage and control the hardware devices on the electronic device and the computer program 222, so that the processor 21 can operate on and process the mass data 223 in the memory 22; it may be Windows, Unix, Linux, or the like. In addition to the computer program that performs the memory resource access method executed by the electronic device disclosed in any of the foregoing embodiments, the computer program 222 may further include computer programs for performing other specific tasks. The data 223 may include, in addition to data received by the electronic device from external devices, data collected through its own input/output interface 25, and so on.
Further, the application also discloses a computer readable storage medium for storing a computer program; the computer program, when executed by the processor, implements the memory resource access method disclosed above. For specific steps of the method, reference may be made to the corresponding contents disclosed in the foregoing embodiments, and no further description is given here.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, so that the same or similar parts between the embodiments are referred to each other. For the device disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application. The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may be placed in random access memory (Random Access Memory, RAM), memory, read-only memory (Read-Only Memory, ROM), electrically programmable ROM (Erasable Programmable Read-Only Memory, EPROM), electrically erasable programmable ROM (Electrically Erasable Programmable Read-Only Memory, EEPROM), registers, a hard disk, a removable disk, a CD-ROM (Compact Disc Read-Only Memory), or any other form of storage medium known in the art.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above detailed description of the memory resource access method, device, equipment and medium provided by the present invention applies specific examples to illustrate the principles and embodiments of the present invention, and the above examples are only used to help understand the method and core ideas of the present invention; meanwhile, as those skilled in the art will have variations in the specific embodiments and application scope in accordance with the ideas of the present invention, the present description should not be construed as limiting the present invention in view of the above.

Claims (10)

1. The memory resource access method is characterized by being applied to a serverless data center architecture, wherein memory resources of a computing pool in the serverless data center architecture comprise local memories and caches, the memory resources of the memory pool form pooled memories, the local memories are only accessed by resource users located in the local computing pool, and the pooled memories can be accessed by resource users located in all computing pools, and the method comprises the following steps:
determining a preset physical address space corresponding to a resource user, and determining a target physical address corresponding to a physical address to be accessed of the resource user based on the preset physical address space;
if the target physical address is located in the local memory, returning first target data located on the target physical address in the local memory to the resource user;
if the target physical address is located in the pooled memory, searching a target hardware address corresponding to the target physical address, and storing second target data on the target hardware address into the cache corresponding to the resource user, so that the resource user can acquire the second target data from the cache.
2. The memory resource access method according to claim 1, wherein before determining the preset physical address space corresponding to the resource-using party, the method further comprises:
constructing a preset physical address space containing a local memory and a pooled memory for each computing pool; the local memory is located at the front end of the preset physical address space, and the pooled memory is located at the rear end of the preset physical address space.
3. The memory resource access method of claim 2, further comprising:
when the total capacity of the pooled memory changes, updating the preset physical address space; the total capacity of the pooled memory is the total capacity of all memory pools.
4. A memory resource access method according to any one of claims 1 to 3, wherein said finding a target hardware address corresponding to said target physical address comprises:
searching a target local physical address corresponding to the target physical address in a first preset address mapping table;
and determining a target hardware address corresponding to the target local physical address in a second preset address mapping table.
5. The method of claim 4, wherein the searching for the target local physical address corresponding to the target physical address in the first preset address mapping table includes:
searching a target local physical address corresponding to the target physical address in a first preset address mapping table through an IO processing unit of a computing pool corresponding to the resource user in the serverless data center architecture;
correspondingly, the determining, in the second preset address mapping table, the target hardware address corresponding to the target local physical address includes:
and determining a target hardware address corresponding to the target local physical address in a second preset address mapping table through an IO processing unit of a memory pool in the serverless data center architecture.
6. The memory resource access method according to claim 4, wherein before the target local physical address corresponding to the target physical address is found in the first preset address mapping table, the method further comprises:
and constructing a mapping relation between the local physical address of the memory pool and the physical address of the pooled memory by the control center of the serverless data center architecture to obtain a first preset address mapping table.
7. The memory resource access method according to claim 6, wherein before determining the target hardware address corresponding to the target local physical address in the second preset address mapping table, the method further comprises:
and constructing a mapping relation between a local physical address and a hardware address of the memory pool through an IO processing unit of the memory pool in the serverless data center architecture so as to obtain a second preset address mapping table.
8. A memory resource access device, applied to a serverless data center architecture, wherein memory resources of a computing pool in the serverless data center architecture include a local memory and a cache, and the memory resources of the memory pool form a pooled memory, wherein the local memory is only accessed by resource users located in the local computing pool, and the pooled memory is accessible by resource users located in all computing pools, the device comprising:
the physical address determining module is used for determining a preset physical address space corresponding to a resource user and determining a target physical address corresponding to a physical address to be accessed of the resource user based on the preset physical address space;
the first data access module is used for returning first target data positioned on the target physical address in the local memory to the resource user if the target physical address is positioned in the local memory;
and the second data access module is used for searching out a target hardware address corresponding to the target physical address if the target physical address is positioned in the pooled memory, and storing second target data on the target hardware address into the cache corresponding to the resource user so that the resource user can acquire the second target data from the cache.
9. An electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the steps of the memory resource access method as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program; wherein the computer program when executed by a processor implements the steps of the memory resource access method according to any of claims 1 to 7.
CN202311696050.3A 2023-12-11 2023-12-11 Memory resource access method, device, equipment and medium Pending CN117707993A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311696050.3A CN117707993A (en) 2023-12-11 2023-12-11 Memory resource access method, device, equipment and medium


Publications (1)

Publication Number Publication Date
CN117707993A 2024-03-15

Family

ID=90159911

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311696050.3A Pending CN117707993A (en) 2023-12-11 2023-12-11 Memory resource access method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN117707993A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination