WO2022099446A1 - A memory management method and related apparatus - Google Patents

A memory management method and related apparatus

Info

Publication number
WO2022099446A1
WO2022099446A1 · PCT/CN2020/127761
Authority
WO
WIPO (PCT)
Prior art keywords
memory
bandwidth
memory access
identifier
access
Prior art date
Application number
PCT/CN2020/127761
Other languages
English (en)
French (fr)
Inventor
俞东斌
孔飞
崔永
范团宝
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to CN202080006418.7A priority Critical patent/CN115053211A/zh
Priority to PCT/CN2020/127761 priority patent/WO2022099446A1/zh
Publication of WO2022099446A1 publication Critical patent/WO2022099446A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]

Definitions

  • the embodiments of the present application relate to the technical field of computer storage, and in particular, to a memory management method and related apparatus.
  • DDR: double data rate.
  • SDRAM: synchronous dynamic random access memory.
  • DDR memory has poor energy efficiency at low frequencies, which will increase the power consumption of the memory and reduce the energy efficiency of the memory.
  • the bandwidth capability of the DDR memory is limited and cannot adequately meet the memory bandwidth requirements.
  • Embodiments of the present application provide a memory management method and related apparatus, which are used to meet different bandwidth requirements corresponding to different memory accesses.
  • a first aspect of the embodiments of the present application provides a method for memory management.
  • the method may be executed by a terminal device, or by a chip configured in the terminal device; this is not limited in this application.
  • the method includes: first acquiring a service request of a target service, where the service request is used to indicate a memory access in the target service; then determining the target memory identifier corresponding to the memory access, where the target memory identifier may be a high-bandwidth memory identifier or a low-bandwidth memory identifier.
  • When the target memory identifier is a low-bandwidth memory identifier, or the scenario corresponding to the memory access is a low-bandwidth scenario, a first memory is allocated for the memory access.
  • When the target memory identifier is a high-bandwidth memory identifier and the scenario corresponding to the memory access is a high-bandwidth scenario, a second memory is allocated for the memory access, and the second maximum bandwidth of the second memory is greater than the first maximum bandwidth of the first memory.
  • In this way, the target memory identifier corresponding to a memory access can be determined, and when the identifier indicates a different bandwidth, or the scenario corresponding to the memory access is a scenario with a different bandwidth, different memory is allocated for the memory access, so as to meet the different bandwidth requirements corresponding to different memory accesses.
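The allocation rule above can be sketched in a few lines of Python. This is an illustrative assumption rather than code from the patent: the function name, pool names, and identifier constants are all invented for the example.

```python
LOW_BW_ID, HIGH_BW_ID = 0, 1  # low-/high-bandwidth memory identifiers

def allocate_memory(target_mem_id: int, scenario: str) -> str:
    """Decide which memory pool serves a memory access.

    The second memory (the one with the larger maximum bandwidth) is chosen
    only when BOTH the identifier and the scenario indicate high bandwidth;
    otherwise the first memory is allocated.
    """
    if target_mem_id == HIGH_BW_ID and scenario == "high-bandwidth":
        return "second memory"
    return "first memory"  # low identifier OR low-bandwidth scenario
```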
  • a scenario corresponding to a memory access is determined according to a function call relationship corresponding to a memory access.
  • After the memory access is allocated to the first memory, the first real bandwidth of the memory access can also be detected, and when the first real bandwidth is greater than a first threshold, the data corresponding to the memory access is migrated from the first memory to the second memory, where the first threshold is less than or equal to the first maximum bandwidth.
  • Specifically, the first real bandwidth is detected by means of time-sharing statistics, which can improve the accuracy of the first real bandwidth.
  • When the first real bandwidth is greater than the first threshold, the bandwidth provided by the first memory cannot meet, or will soon be unable to meet, the bandwidth requirement of the memory access. Therefore, migrating the data corresponding to the memory access from the first memory to the second memory can ensure the quality of service of the memory access of the target service.
  • In addition, the next service request of the target service may also be obtained, where the next service request is used to indicate the next memory access in the target service; then the next memory identifier corresponding to the next memory access is determined, and when the next memory identifier is a low-bandwidth memory identifier or the scenario corresponding to the next memory access is a low-bandwidth scenario, the second memory is still allocated for the next memory access.
  • That is, the memory allocated for the next memory access follows the memory that holds the data of the previous memory access: the second memory after data migration prevails, and regardless of the identifier and the scenario, the second memory is allocated. This reduces the possibility of memory allocation errors, improves the accuracy of memory allocation, and thus improves the reliability of memory management.
  • After the memory access is allocated to the second memory, the second real bandwidth of the memory access can also be detected, and when the second real bandwidth is less than a second threshold, the data corresponding to the memory access is migrated from the second memory to the first memory, where the second threshold is less than or equal to the first maximum bandwidth.
  • Specifically, the second real bandwidth is also detected by means of time-sharing statistics, which can improve the accuracy of the second real bandwidth.
  • When the second real bandwidth is less than the second threshold, the bandwidth provided by the first memory can meet the bandwidth requirement of the memory access, so the memory management apparatus migrates the data corresponding to the memory access from the second memory to the first memory, thereby saving the bandwidth resources of the second memory.
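The two threshold-driven migrations just described can be sketched together. The threshold values, pool names, and function name below are assumptions chosen only to illustrate the rule that both thresholds are at most the first memory's maximum bandwidth.

```python
FIRST_MAX_BW = 10.0      # first memory's maximum bandwidth (GB/s, assumed)
FIRST_THRESHOLD = 8.0    # migrate out of first memory above this; <= FIRST_MAX_BW
SECOND_THRESHOLD = 6.0   # migrate back to first memory below this; <= FIRST_MAX_BW

def migrate_if_needed(current_pool: str, real_bw: float) -> str:
    """Return the pool the access's data should live in after bandwidth detection."""
    if current_pool == "first" and real_bw > FIRST_THRESHOLD:
        return "second"  # first memory can no longer satisfy the demand
    if current_pool == "second" and real_bw < SECOND_THRESHOLD:
        return "first"   # frees the second memory's bandwidth resources
    return current_pool
```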
  • In addition, the next service request of the target service may also be obtained, where the next service request is used to indicate the next memory access in the target service; then the next memory identifier corresponding to the next memory access is determined, and when the next memory identifier is a high-bandwidth memory identifier and the scenario corresponding to the next memory access is a high-bandwidth scenario, the first memory is still allocated for the next memory access.
  • That is, the memory allocated for the next memory access follows the memory that holds the data of the previous memory access: the first memory after data migration prevails, and regardless of the identifier and the scenario, the first memory is allocated, thereby reducing the possibility of memory allocation errors, improving the accuracy of memory allocation, and thus improving the reliability of memory management.
  • the target service includes an application layer business.
  • The service type of the target service is specifically limited, which improves the feasibility of this solution.
  • In a second aspect, a memory management apparatus is provided, which has some or all of the functions for implementing the method of the first aspect and any possible implementation manner of the first aspect.
  • The functions of the apparatus may cover some or all of the embodiments of the present application, or may independently implement any one of the embodiments of the present application.
  • the functions can be implemented by hardware, or can be implemented by hardware executing corresponding software.
  • the hardware or software includes one or more units or modules corresponding to the above functions.
  • the structure of the memory management apparatus may include an acquisition module, a determination module and an allocation module, and the acquisition module, the determination module and the allocation module are configured to support the memory management apparatus to perform the above method in the corresponding function.
  • the memory management device may further include a storage module, which is used for coupling with the acquisition module, the determination module and the allocation module, and stores necessary program instructions and data of the memory management device.
  • The memory management apparatus includes: an acquisition module, configured to acquire a service request of a target service, where the service request is used to indicate a memory access in the target service; a determination module, configured to determine the target memory identifier corresponding to the memory access, where the target memory identifier is a high-bandwidth memory identifier or a low-bandwidth memory identifier; and an allocation module, configured to allocate a first memory for the memory access when the target memory identifier is a low-bandwidth memory identifier or the scenario corresponding to the memory access is a low-bandwidth scenario, and to allocate a second memory for the memory access when the target memory identifier is a high-bandwidth memory identifier and the scenario corresponding to the memory access is a high-bandwidth scenario, where the second maximum bandwidth of the second memory is greater than the first maximum bandwidth of the first memory.
  • The determination module is further configured to determine the scenario corresponding to a memory access according to the function call relationship corresponding to the memory access.
  • The memory management apparatus further includes: a detection module, configured to detect the first real bandwidth of the memory access after the allocation module allocates the first memory for the memory access; and a migration module, configured to migrate the data corresponding to the memory access from the first memory to the second memory when the first real bandwidth is greater than the first threshold, where the first threshold is less than or equal to the first maximum bandwidth.
  • The acquisition module is further configured to obtain the next service request of the target service, where the next service request is used to indicate the next memory access in the target service; the determination module is further configured to determine the next memory identifier corresponding to the next memory access; and the allocation module is further configured to allocate the second memory for the next memory access when the next memory identifier is a low-bandwidth memory identifier or the scenario corresponding to the next memory access is a low-bandwidth scenario.
  • The memory management apparatus further includes: the detection module, further configured to detect the second real bandwidth of the memory access after the allocation module allocates the second memory for the memory access; and the migration module, further configured to migrate the data corresponding to the memory access from the second memory to the first memory when the second real bandwidth is less than the second threshold, where the second threshold is less than or equal to the first maximum bandwidth.
  • The acquisition module is further configured to obtain the next service request of the target service, where the next service request is used to indicate the next memory access in the target service; the determination module is further configured to determine the next memory identifier corresponding to the next memory access; and the allocation module is further configured to allocate the first memory for the next memory access when the next memory identifier is a high-bandwidth memory identifier and the scenario corresponding to the next memory access is a high-bandwidth scenario.
  • the target service includes application layer service.
  • The acquisition module, the determination module and the allocation module may be a processor or a processing unit.
  • the storage module may be a memory or a storage unit.
  • a memory management apparatus including a processor.
  • the processor is coupled to the memory and is operable to execute instructions in the memory to implement the method in any of the possible implementations of the first aspect above.
  • the memory management apparatus further includes a memory.
  • the memory management apparatus further includes a communication interface, the processor is coupled to the communication interface, and the communication interface is used for inputting and/or outputting information, and the information includes at least one of instructions and data.
  • the memory management apparatus is a terminal device.
  • the communication interface may be a transceiver, or an input/output interface.
  • the transceiver may be a transceiver circuit.
  • the input/output interface may be an input/output circuit.
  • the memory management apparatus is a chip or a chip system configured in the terminal device.
  • the communication interface may be an input/output interface, an interface circuit, an output circuit, an input circuit, a pin or a related circuit.
  • the processor may also be embodied as a processing circuit or a logic circuit.
  • the processor may be used to perform, for example but not limited to, baseband related processing
  • the transceiver may be used to perform, for example but not limited to, radio frequency transceiving.
  • the above-mentioned devices may be respectively arranged on chips that are independent of each other, or at least part or all of them may be arranged on the same chip.
  • processors can be further divided into analog baseband processors and digital baseband processors.
  • the analog baseband processor can be integrated with the transceiver on the same chip, and the digital baseband processor can be set on a separate chip. With the continuous development of integrated circuit technology, more and more devices can be integrated on the same chip.
  • a digital baseband processor can be integrated with a variety of application processors (such as but not limited to graphics processors, multimedia processors, etc.) on the same chip.
  • application processors such as but not limited to graphics processors, multimedia processors, etc.
  • Such a chip may be called a System on Chip. Whether each device is independently arranged on different chips or integrated on one or more chips often depends on the needs of product design. The embodiments of the present application do not limit the implementation form of the foregoing device.
  • FIG. 1 is a schematic diagram of a system framework of a memory management system in an embodiment of the present application
  • FIG. 2 is a schematic structural diagram of an overall chip in an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an embodiment of generating a bandwidth mapping file in an embodiment of the present application
  • FIG. 4 is a schematic diagram of an embodiment of a method for memory management in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of another embodiment of a method for memory management in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of another embodiment of a memory management method in an embodiment of the present application.
  • FIG. 7 is a schematic diagram of another embodiment of a method for memory management in an embodiment of the present application
  • FIG. 8 is a schematic diagram of another embodiment of a method for memory management in an embodiment of the present application.
  • FIG. 9 is a schematic diagram of an embodiment of a memory management apparatus in an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • At least one means one or more, and “plurality” means two or more.
  • "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A alone, both A and B, or B alone, where A and B can be singular or plural.
  • the character “/” generally indicates that the associated objects are an “or” relationship.
  • "At least one of the following items" or similar expressions refer to any combination of these items, including any combination of a single item or plural items.
  • "At least one of a, b and c" can represent: a; b; c; a and b; a and c; b and c; or a, b and c.
  • a, b and c can be single or multiple respectively.
  • the embodiments disclosed herein will present various aspects, embodiments or features of the present application in the context of a system including a plurality of devices, components, modules, and the like. It is to be understood and appreciated that the various systems may include additional devices, components, modules, etc., and/or may not include all of the devices, components, modules, etc. discussed in connection with the figures. In addition, combinations of these schemes can also be used.
  • FIG. 1 is a schematic diagram of a system framework of a memory management system in an embodiment of the present application.
  • the memory management system includes a software part and a chip part.
  • the first memory particle and the second memory particle are independently controlled respectively.
  • A channel is used for the transportation of data between a memory controller and its memory particles.
  • The first memory controller drives the first memory particles through 2 channels and operates in a high frequency band, so that the frequency can be increased to meet the bandwidth requirement; that is, the first memory controller can improve energy efficiency by increasing the amount of data transferred per unit time. Secondly, the second memory controller drives the second memory particles through 4 channels, and the bit width of each channel is 64 bits; since the channel bit width is increased, the bandwidth capability of the memory particles can be improved.
  • the maximum value of the bandwidth range corresponding to the first memory is smaller than the maximum value of the bandwidth range corresponding to the second memory.
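The channel arithmetic behind this comparison can be made concrete. In the sketch below, the channel counts and the second controller's 64-bit width come from the description above, while the first controller's channel width and both transfer rates are assumed figures for illustration only.

```python
def peak_bandwidth_gbs(channels: int, bit_width: int, mt_per_s: float) -> float:
    """Peak bandwidth = channels * bytes per transfer * mega-transfers/s, in GB/s."""
    return channels * (bit_width / 8) * mt_per_s / 1000.0

# First controller: 2 channels, assumed 32-bit width, assumed high transfer rate.
first = peak_bandwidth_gbs(channels=2, bit_width=32, mt_per_s=6400)
# Second controller: 4 channels of 64 bits, assumed lower transfer rate.
second = peak_bandwidth_gbs(channels=4, bit_width=64, mt_per_s=3200)

# The wider, more numerous channels give the second memory the larger maximum.
assert second > first
```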
  • the memory in this embodiment is also called storage.
  • For the software part to execute the memory management method introduced in this solution through its different modules, it is first necessary to run the application layer service offline so as to allocate a corresponding scenario for each memory access in the application layer service, that is, to generate a bandwidth mapping file. Then, after a service request is received, different memory is allocated for the memory access in different situations: specifically, the memory management subsystem allocates the corresponding memory controller through the first memory allocation area or the second memory allocation area, which can meet the different bandwidth requirements corresponding to each memory access of the service.
  • FIG. 2 is a schematic structural diagram of an overall chip in an embodiment of the present application.
  • The memory address mapping module includes an address mapping control module.
  • The memory address mapping uses a fixed mapping method.
  • The chip bandwidth detection module can be configured with the starting address of the bandwidth detection, which is actually the page frame number; the real bandwidth is then determined by the chip bandwidth detection module.
  • In this embodiment, the chip bandwidth detection module is a register, which should not be understood as a limitation of this application. Specifically, the chip bandwidth detection module counts the real bandwidth of different memory segments according to preset memory particles, specifically by time-sharing statistics.
  • The chip bandwidth detection module updates the number of accesses within a statistical unit, and determines the bandwidth corresponding to the last update of the number of accesses within a statistical unit as the real bandwidth; the aforementioned preset memory is pre-configured.
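A minimal sketch of this time-sharing statistic: accesses are counted per fixed statistical unit, and the published real bandwidth is that of the most recently completed unit. The class, field names, and bytes-per-access figure are assumptions for the example.

```python
class TimeSharedBandwidthCounter:
    """Counts accesses per statistical unit; real bandwidth = last finished unit."""

    def __init__(self, unit_seconds: float, bytes_per_access: int):
        self.unit = unit_seconds
        self.bytes_per_access = bytes_per_access
        self.window_start = 0.0
        self.count = 0
        self.real_bandwidth = 0.0  # bytes/s of the last completed unit

    def record_access(self, now: float) -> None:
        if now - self.window_start >= self.unit:
            # Close the finished statistical unit and publish its bandwidth.
            self.real_bandwidth = self.count * self.bytes_per_access / self.unit
            self.window_start = now
            self.count = 0
        self.count += 1
```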
  • The direct memory access (DMA) engine is used for copying memory data when the chip bandwidth detection module detects that the real bandwidth of the memory has changed and the memory data needs to be migrated.
  • The address range of the first memory is "0x0" to "0xFEFFFFFFFF", and the address range of the second memory is "0xFF00000000" to "0xFFFFFFFFFFFF". If a memory access requested by the service falls in the range "0x0" to "0xFEFFFFFFFF", the first memory controller allocates the memory access to the first memory. Similarly, if the memory access requested by the service falls in the address range "0xFF00000000" to "0xFFFFFFFFFFFF", the second memory controller allocates the memory access to the second memory.
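The fixed mapping just described can be sketched as a range check. The two ranges are the ones quoted above; the function name is an assumption.

```python
FIRST_MEM_RANGE = (0x0, 0xFEFFFFFFFF)            # first memory addresses
SECOND_MEM_RANGE = (0xFF00000000, 0xFFFFFFFFFFFF)  # second memory addresses

def route_access(addr: int) -> str:
    """Route a memory access to a controller purely by its address range."""
    if FIRST_MEM_RANGE[0] <= addr <= FIRST_MEM_RANGE[1]:
        return "first memory controller"
    if SECOND_MEM_RANGE[0] <= addr <= SECOND_MEM_RANGE[1]:
        return "second memory controller"
    raise ValueError(f"address {addr:#x} is outside the mapped ranges")
```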
  • FIG. 3 is a schematic diagram of an embodiment of generating a bandwidth mapping file in this embodiment of the present application. As shown in FIG. 3, the data online and offline analysis and identification tool includes a software bandwidth detection module and a scene slice management module, and the software bandwidth detection module obtains the real bandwidth of different memory segments. The real bandwidth of the different memory segments is calculated by time-sharing statistics, which is not repeated here.
  • The scene slice management module identifies the scene. The scene may be a clear scene, such as startup or foreground/background switching, but there may also be ambiguous scenes; in that case online identification is required, and the ambiguous scenes can be identified based on the behavior management module.
  • The scene slice management module can use the scene thus determined as input to the data online and offline analysis and identification tool, which can obtain the initial bandwidth mapping file from the scene provided by the scene slice management module and the real bandwidth obtained by the software bandwidth detection module.
  • The initial bandwidth mapping file includes the bandwidth corresponding to each scene. If the bandwidth is within the bandwidth range corresponding to the first memory, the scene is a low-bandwidth scene corresponding to the first memory; similarly, if the bandwidth is within the bandwidth range corresponding to the second memory, the scene is a high-bandwidth scene corresponding to the second memory. Further, when running the service, the user behavior management module included in the scenario and behavior management module can also learn user habits through the user behavior record model, and the software bandwidth detection module obtains, through the software bandwidth detection driver, the real bandwidth of different memory segments, adjusts the bandwidth requirement corresponding to each scenario in real time, and updates the corresponding relationships in the bandwidth mapping file, thereby improving the accuracy of the bandwidth mapping file.
  • The bandwidth mapping file includes events such as startup, switching from the background to the foreground, and accepting events, and different events include multiple scenarios.
  • For startup, it includes anonymous page A, file page A, and graphics processing unit (GPU) drawing A, where anonymous page A is a high-bandwidth scenario, file page A is a low-bandwidth scenario, and GPU drawing A is a high-bandwidth scenario.
  • When switching from the background to the foreground, it includes anonymous page B, file page B, and display memory B, where anonymous page B is a high-bandwidth scenario, file page B is a low-bandwidth scenario, and display memory B is a low-bandwidth scenario.
  • Through the bandwidth mapping file, it can be determined whether the scenario corresponding to a memory access is a low-bandwidth scenario or a high-bandwidth scenario.
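The event-to-scene structure of the bandwidth mapping file can be pictured as a nested lookup table. The literal entries mirror the examples in the text; representing the file as a dict, and the event keys themselves, are assumptions for illustration.

```python
# event -> {scene -> bandwidth class}, mirroring the examples above
BANDWIDTH_MAP = {
    "startup": {
        "anonymous page A": "high",
        "file page A": "low",
        "GPU drawing A": "high",
    },
    "background-to-foreground": {
        "anonymous page B": "high",
        "file page B": "low",
        "display memory B": "low",
    },
}

def scene_bandwidth(event: str, scene: str) -> str:
    """Look up whether a memory access's scene is low- or high-bandwidth."""
    return BANDWIDTH_MAP[event][scene]
```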
  • The scenario and behavior management module shown in FIG. 1 and FIG. 3 can execute the method corresponding to the obtaining module 901 shown in FIG. 9, and specifically obtains the service request of the target service through the application behavior management module and the user behavior management module. Since the scene slice management module obtains the function call relationship through the application function call record module, the scene slice management module and the application function call record module shown in FIG. 1 and FIG. 3 jointly execute the method corresponding to the determination module 902 shown in FIG. 9; specifically, the scenario corresponding to a memory access can be determined according to the function call relationship corresponding to the memory access. Next, the first memory allocation area and the second memory allocation area in the memory management subsystem shown in FIG. 1 and FIG. 3 can execute the method corresponding to the allocation module 903 shown in FIG. 9.
  • The software bandwidth detection module shown in FIG. 1 and FIG. 3 can perform the method corresponding to the detection module 904 shown in FIG. 9, and specifically can detect the real bandwidth corresponding to a memory access.
  • FIG. 4 is a schematic diagram of an embodiment of a memory management method in an embodiment of the present application.
  • The memory management method includes the following steps.
  • S101: The memory management apparatus acquires a service request of a target service, where the service request is used to indicate a memory access in the target service.
  • the target business includes application layer business.
  • the memory management apparatus can obtain the service request of the target service through two implementation manners, and the two implementation manners are respectively introduced below.
  • In the first implementation manner, the memory management apparatus applies for the required memory for a memory access in the target service through the dynamic memory allocation (malloc) interface; that is, the memory management apparatus obtains the service request of the target service.
  • In the second implementation manner, the memory management apparatus, according to the memory allocation requirement from the kernel driver, drives a direct memory access requirement (e.g., a GPU driver that accepts a drawing buffer application), thereby directly acquiring the service request of the target service.
  • the manner of acquiring the service request is not limited in this embodiment.
  • S102: After determining the memory access in step S101, the memory management apparatus further determines the target memory identifier corresponding to the memory access, and the target memory identifier may be a high-bandwidth memory identifier or a low-bandwidth memory identifier.
  • For introduction, the high-bandwidth identifier is "1" and the low-bandwidth identifier is "0"; that is, a target memory identifier of "1" is the high-bandwidth identifier, and a target memory identifier of "0" is the low-bandwidth identifier. However, this should not be understood as a limitation of the embodiments of this application.
  • the memory management apparatus may determine a target memory identifier corresponding to a memory access through two implementations, and the two implementations will be introduced separately below.
  • In the first implementation manner, the memory management apparatus applies for the required memory for the memory access in the target service through the malloc interface, that is, the memory management apparatus obtains the service request of the target service. Then, using the bandwidth mapping file introduced in the foregoing embodiment, the memory management apparatus modifies the memory requirement of the virtual address space unit (Virtual Memory Area, VMA) through the function interface. If a page fault interrupt is triggered, it indicates that the bandwidth requirement of the memory access is a high-bandwidth requirement (corresponding to the second memory).
  • In that case, the memory management apparatus converts the virtual memory (VM) flag into a get-free-pages (GFP) high-bandwidth (HBW) flag; that is, the GFP_HBW identifier obtained by the memory management apparatus is "1", which is the high-bandwidth memory identifier in this embodiment. In this way, the bandwidth requirement of the memory access can be identified as a high-bandwidth requirement, and the memory management apparatus can determine that the target memory identifier corresponding to the memory access is the high-bandwidth memory identifier.
  • If the page fault interrupt is not triggered, the bandwidth requirement of the memory access is a low-bandwidth requirement (corresponding to the first memory), and the memory management apparatus does not perform the conversion step, so there is no GFP_HBW flag for the memory access. The target memory identifier corresponding to the memory access is therefore "0", so the bandwidth requirement of the memory access can be identified as low bandwidth, and the memory management apparatus can determine that the target memory identifier corresponding to the memory access is the low-bandwidth memory identifier.
  • In the second implementation manner, the memory management apparatus drives a memory access request directly according to the memory allocation request from the kernel driver, thereby directly acquiring the service request of the target service, and each different request corresponds to a different type. If the request is of the type corresponding to a high-bandwidth demand, it includes the GFP_HBW flag with value "1", so the bandwidth demand of the memory access can be identified as a high-bandwidth demand; at this time, the memory management apparatus can determine that the target memory identifier corresponding to the memory access is a high-bandwidth memory identifier.
  • Otherwise, the GFP_HBW identifier is not included, that is, the target memory identifier corresponding to the memory access is "0", so the bandwidth requirement of the memory access can be identified as a low-bandwidth requirement, and the memory management apparatus may determine that the target memory identifier corresponding to the memory access is a low-bandwidth memory identifier.
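The flag-to-identifier step above can be sketched as a bit test. The GFP_HBW bit position is an assumption invented for the example; the patent does not specify one.

```python
GFP_HBW = 0x1  # assumed bit position for the high-bandwidth allocation flag

def target_memory_id(gfp_flags: int) -> int:
    """Return 1 (high-bandwidth memory identifier) if GFP_HBW is set, else 0."""
    return 1 if gfp_flags & GFP_HBW else 0
```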
  • S103: The memory management apparatus determines the scenario corresponding to the memory access according to the function call relationship corresponding to the memory access.
  • the target service includes various scenarios including startup, switching from the foreground to the background, and switching from the background to the foreground as examples for description.
  • the memory management apparatus can determine that the scene corresponding to a memory access is anonymous page A according to the function call relationship corresponding to a memory access indicated by the service request.
• Similarly, the memory management apparatus can determine, according to the function call relationship corresponding to the memory access indicated by the service request, that the scene corresponding to the memory access is file page B.
  • file page B is a low-bandwidth scenario.
• The memory management apparatus records the execution of function calls while the application is running to form a function call relationship.
• When a memory access is requested, the memory management apparatus matches the function call sequence captured at that moment against the entire runtime function sequence to determine the current scene.
• For example, the function execution sequence from starting the game, to entering the game interface, to finally switching the game to the background is "A→B→C→E→D→G→J→K→X→A→C→D→B→C"
  • the game startup stage is a low-bandwidth scenario
  • entering the game is a high-bandwidth scenario
• switching the game to the background is a low-bandwidth scenario. Therefore, when the game applies for a memory access, if the sequence captured by the memory management apparatus is "J→K→X", that is, it matches the middle segment of the function execution sequence, the memory management apparatus determines that the scene corresponding to the memory access is entering the game, and can determine that the scene is a high-bandwidth scene.
• If the captured sequence matches the final segment instead, the memory management apparatus determines that the scene corresponding to the memory access is switching the game to the background, and can determine that the scene is a low-bandwidth scene.
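The sequence matching described above can be sketched as follows, using the example sequence from the text; the phase boundaries, scene labels, and helper names are illustrative assumptions rather than the patented matching algorithm.

```python
# Infer the current scene by locating the captured call sequence inside the
# recorded runtime function sequence, then mapping it to an assumed phase.

FULL_SEQUENCE = "A B C E D G J K X A C D B C".split()
PHASES = [
    (0, 5, "game startup", "low"),           # A B C E D
    (5, 9, "entering the game", "high"),     # G J K X
    (9, 14, "switch to background", "low"),  # A C D B C
]

def find_sub(seq, sub):
    """Return the start index of sub inside seq, or -1 if absent."""
    for i in range(len(seq) - len(sub) + 1):
        if seq[i:i + len(sub)] == sub:
            return i
    return -1

def classify(captured):
    """Match the captured sequence; return (scene, bandwidth) or None."""
    start = find_sub(FULL_SEQUENCE, captured)
    if start < 0:
        return None
    for lo, hi, scene, bandwidth in PHASES:
        if lo <= start < hi:
            return scene, bandwidth
    return None

print(classify("J K X".split()))  # matches the middle: a high-bandwidth scene
```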
• There is no necessary sequence between step S102 and step S103; step S102 may be performed first, step S103 may be performed first, or steps S102 and S103 may be performed simultaneously, which is not specifically limited here.
• Step S104. When the target memory identifier is a low-bandwidth memory identifier or the scenario corresponding to the memory access is a low-bandwidth scenario, the first memory is allocated; when the determined target memory identifier is a high-bandwidth memory identifier and the scenario corresponding to the memory access is a high-bandwidth scenario, step S105 is performed.
  • the first maximum bandwidth of the first memory is smaller than the second maximum bandwidth of the second memory.
• For example, the bandwidth corresponding to the first memory ranges from 0 to 16 megabytes (MB), that is, the first maximum bandwidth is 16MB, and the bandwidth corresponding to the second memory ranges from 15MB to 64MB, that is, the second maximum bandwidth is 64MB. It should be understood that the foregoing examples are only used to understand this solution.
• In practical applications, the first maximum bandwidth of the first memory may also be less than or equal to the second minimum bandwidth of the second memory; the specific bandwidth ranges of the first memory and the second memory are not limited here, as long as the first maximum bandwidth of the first memory is smaller than the second maximum bandwidth of the second memory.
  • the memory management apparatus allocates the first memory for one memory access.
  • the memory management apparatus determines that a target memory identifier corresponding to a memory access is a low-bandwidth identifier, and determines that a scene corresponding to a memory access is file page A, based on the foregoing embodiments, it can be known that file page A is low-bandwidth.
  • the bandwidth requirement corresponding to the file page A is the bandwidth requirement (low bandwidth) corresponding to the first memory, in this case, the memory management apparatus allocates the first memory for one memory access.
  • the memory management apparatus allocates the first memory for one memory access.
• For another example, the memory management apparatus determines that the target memory identifier corresponding to the memory access is a low-bandwidth identifier, but determines that the scene corresponding to the memory access is anonymous page A; based on the foregoing embodiments, anonymous page A is a high-bandwidth scene.
  • the bandwidth requirement corresponding to anonymous page A is the bandwidth requirement (high bandwidth) corresponding to the second memory, but since the target memory identifier is a low bandwidth memory identifier, the memory management device still allocates the first memory for a memory access at this time .
  • the memory management apparatus allocates the first memory for one memory access.
• For another example, the memory management apparatus determines that the target memory identifier corresponding to the memory access is a high-bandwidth identifier, but determines that the scene corresponding to the memory access is file page B; based on the foregoing embodiments, file page B is a low-bandwidth scene.
  • the bandwidth requirement corresponding to the file page B is the bandwidth requirement (low bandwidth) corresponding to the first memory.
  • the memory management apparatus allocates the first memory for a memory access at this time.
• For example, the address range of the first memory is "0x0" to "0xFEFFFFFFFF"; after the memory management apparatus allocates the first memory for the memory access, the memory address of the memory access may be any allocatable address area randomly determined within "0x0" to "0xFEFFFFFFFF", which is not limited here.
• Step S105. When the target memory identifier is a high-bandwidth memory identifier and the scene corresponding to the memory access is a high-bandwidth scene, allocate the second memory for the memory access, where the second maximum bandwidth of the second memory is greater than the first maximum bandwidth of the first memory.
• For example, the memory management apparatus determines that the target memory identifier corresponding to the memory access is a high-bandwidth identifier, and determines that the scene corresponding to the memory access is GPU drawing A.
• Based on the foregoing embodiments, GPU drawing A is a high-bandwidth scene.
  • the bandwidth requirement corresponding to GPU drawing A is the bandwidth requirement (high bandwidth) corresponding to the second memory, and at this time, the memory management apparatus allocates the second memory for one memory access.
• Before allocating, the memory management apparatus can also determine whether the second memory has space to be allocated. If there is no space to be allocated, the memory management apparatus allocates the first memory for the memory access; if there is space to be allocated, the memory management apparatus allocates the second memory for the memory access.
• For example, the address range of the second memory is "0xFF00000000" to "0xFFFFFFFFFF"; after the memory management apparatus allocates the second memory for the memory access, the memory address of the memory access may be any allocatable address area randomly determined within "0xFF00000000" to "0xFFFFFFFFFF", which is not limited here.
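The allocation rule of steps S104/S105, together with the example address ranges above, can be sketched as follows; the tuple layout, function name, and `second_has_space` parameter are illustrative assumptions.

```python
# Sketch of the allocation decision: the second memory is chosen only when BOTH
# the target memory identifier and the scene are high-bandwidth (and the second
# memory still has space); in every other case the first memory is used.

FIRST_MEMORY = ("first", 0x0, 0xFEFFFFFFFF)             # example address range
SECOND_MEMORY = ("second", 0xFF00000000, 0xFFFFFFFFFF)  # example address range

def allocate(identifier: str, scene: str, second_has_space: bool = True):
    if identifier == "high" and scene == "high" and second_has_space:
        return SECOND_MEMORY
    return FIRST_MEMORY

assert allocate("low", "high") == FIRST_MEMORY   # mixed judgment: first memory
assert allocate("high", "low") == FIRST_MEMORY   # mixed judgment: first memory
assert allocate("high", "high") == SECOND_MEMORY
assert allocate("high", "high", second_has_space=False) == FIRST_MEMORY
```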
• In addition, a kernel thread is created when the kernel is initialized, and the thread continuously detects the real bandwidth of the memory access; when the real bandwidth does not match the bandwidth corresponding to the allocated memory, data migration is performed.
  • the memory management apparatus may allocate the first memory for one memory access, and may also allocate the second memory for one memory access, the following descriptions will be given respectively.
  • FIG. 5 is a schematic diagram of another embodiment of a memory management method according to an embodiment of the present application.
  • the memory management method includes the following steps. S201. Acquire a service request of a target service.
  • the manner in which the memory management apparatus first obtains the service request of the target service is similar to step S101, and details are not described herein again.
• Step S202. Determine a target memory identifier corresponding to one memory access.
  • the manner in which the memory management apparatus determines the target memory identifier corresponding to a memory access is similar to step S102, and details are not described herein again.
• Step S203. Determine a scene corresponding to one memory access according to the function calling relationship corresponding to one memory access.
  • the manner in which the memory management apparatus needs to determine a scene corresponding to a memory access according to a function calling relationship corresponding to a memory access is similar to step S103, and details are not repeated here.
• There is no necessary sequence between step S202 and step S203; step S202 may be performed first, step S203 may be performed first, or steps S202 and S203 may be performed simultaneously, which is not specifically limited here.
• Step S204. When the target memory identifier is a low-bandwidth memory identifier or the scenario corresponding to the memory access is a low-bandwidth scenario, allocate the first memory for the memory access.
• When the target memory identifier is a low-bandwidth memory identifier or the scene corresponding to the memory access is a low-bandwidth scene, the method by which the memory management apparatus allocates the first memory for the memory access is similar to step S104 and is not repeated here.
• Step S205. Detect the first real bandwidth of one memory access.
• After the memory management apparatus allocates the first memory for the memory access in step S204, it needs to detect the first real bandwidth of the memory access.
• The specific method for detecting the first real bandwidth is time-sharing statistics, for example with a 10ms period.
• That is, the memory management apparatus detects the real bandwidth of the memory access in different memory segments every 10ms, and then takes the average of the real bandwidths of the memory segments as the first real bandwidth. Since the memory access is allocated to the first memory, when the first real bandwidth is not within the bandwidth range corresponding to the first memory, or is already within the bandwidth range corresponding to the second memory, the bandwidth provided by the first memory cannot meet, or will soon be unable to meet, the bandwidth requirement of the memory access, and step S206 is performed.
  • the real bandwidth of memory segment A in 10ms is 20MB
  • the real bandwidth of memory segment B in 10ms is 24MB
• the real bandwidth of memory segment C in 10ms is 22MB, so it can be obtained that the first real bandwidth is 22MB.
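The time-sharing statistic above amounts to a simple average of the per-segment samples; a minimal sketch (the function name is assumed):

```python
# Average the real bandwidths sampled in each memory segment over 10ms windows
# to obtain the real bandwidth used for the migration decision.

def real_bandwidth(samples_mb):
    return sum(samples_mb) / len(samples_mb)

# Segments A, B and C from the example: 20MB, 24MB and 22MB.
print(real_bandwidth([20, 24, 22]))  # -> 22.0
```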
• Step S206. When the first real bandwidth is greater than the first threshold, migrate the data corresponding to the memory access from the first memory to the second memory, where the first threshold is less than or equal to the first maximum bandwidth.
• Since the first threshold is less than or equal to the first maximum bandwidth, when the first real bandwidth is greater than the first threshold, the bandwidth provided by the first memory cannot meet, or will soon be unable to meet, the bandwidth requirement of the memory access. Therefore, to guarantee the quality of service of the memory access, the memory management apparatus migrates the data corresponding to the memory access from the first memory to the second memory.
• As can be seen from step S204, when the target memory identifier is a low-bandwidth memory identifier but the scene corresponding to the memory access is a high-bandwidth scene, the first memory is still allocated for the memory access; because the scene requires high bandwidth while the access is allocated to the first memory, it may happen that the first real bandwidth is greater than the first threshold.
• Specifically, when the memory management apparatus determines that the first real bandwidth is greater than the first threshold, it determines that the memory access needs to be reallocated to the second memory; the memory management apparatus first copies the data corresponding to the memory access, and then migrates the copied data to the second memory. If the scene corresponding to the memory access in the bandwidth mapping file is a low-bandwidth scene, the scene corresponding to the memory access needs to be changed to a high-bandwidth scene.
• For example, when the first maximum bandwidth is 16MB, the first threshold may be 15MB, 15.5MB, 15.8MB, 16MB, and so on. Further, when the first threshold is 15MB and the first real bandwidth is 15.5MB, the memory management apparatus determines that the first real bandwidth is greater than the first threshold, and migrates the data corresponding to the memory access from the first memory to the second memory. It should be understood that the foregoing examples are only used for understanding this solution, and should not be understood as a limitation of this embodiment.
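With the example values above (first maximum bandwidth 16MB, first threshold 15MB), the migration check of step S206 can be sketched as follows; the constant and function names are illustrative.

```python
# Migrate from the first memory to the second memory once the measured real
# bandwidth exceeds the first threshold, which must not exceed the first
# maximum bandwidth. Values are the example figures from the text.

FIRST_MAX_BANDWIDTH = 16  # MB
FIRST_THRESHOLD = 15      # MB
assert FIRST_THRESHOLD <= FIRST_MAX_BANDWIDTH

def should_migrate_up(first_real_bandwidth: float) -> bool:
    return first_real_bandwidth > FIRST_THRESHOLD

print(should_migrate_up(15.5))  # True: migrate data to the second memory
print(should_migrate_up(12.0))  # False: stay in the first memory
```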
  • FIG. 6 is a schematic diagram of another embodiment of a memory management method according to an embodiment of the present application.
  • the memory management method includes the following steps. S301. Obtain a service request of a target service.
  • the manner in which the memory management apparatus first obtains the service request of the target service is similar to step S101, and details are not described herein again.
• Step S302. Determine a target memory identifier corresponding to a memory access.
  • the manner in which the memory management apparatus determines the target memory identifier corresponding to a memory access is similar to step S102, and details are not described herein again.
• Step S303. Determine a scene corresponding to one memory access according to the function calling relationship corresponding to one memory access.
  • the manner in which the memory management apparatus needs to determine a scene corresponding to a memory access according to a function calling relationship corresponding to a memory access is similar to step S103, and details are not repeated here.
• There is no necessary sequence between step S302 and step S303; step S302 may be performed first, step S303 may be performed first, or steps S302 and S303 may be performed simultaneously, which is not specifically limited here.
• Step S304. When the target memory identifier is a high-bandwidth memory identifier and the scene corresponding to the memory access is a high-bandwidth scene, allocate the second memory for the memory access.
• The manner in which the memory management apparatus allocates the second memory for the memory access is similar to step S105 and is not repeated here.
• Step S305. Detect the second real bandwidth of one memory access.
• After the memory management apparatus allocates the second memory for the memory access in step S304, it needs to detect the second real bandwidth of the memory access. The specific method for detecting the second real bandwidth is time-sharing statistics, for example with a 10ms period.
• That is, the memory management apparatus detects the real bandwidth of the memory access in different memory segments every 10ms, and then takes the average of the real bandwidths of the memory segments as the second real bandwidth. Since the memory access is allocated to the second memory, when the second real bandwidth is not within the bandwidth range corresponding to the second memory, or is already within the bandwidth range corresponding to the first memory, step S306 is performed in order to save the bandwidth resources of the second memory.
  • the real bandwidth of memory segment A in 10ms is 10MB
  • the real bandwidth of memory segment B in 10ms is 16MB
• the real bandwidth of memory segment C in 10ms is 10MB, so it can be obtained that the second real bandwidth is 12MB.
• Specifically, when the memory management apparatus determines that the second real bandwidth is less than the second threshold, it determines that the memory access needs to be reallocated to the first memory; the memory management apparatus first copies the data corresponding to the memory access, and then migrates the copied data to the first memory. If the scene corresponding to the memory access in the bandwidth mapping file is a high-bandwidth scene, the scene corresponding to the memory access needs to be changed to a low-bandwidth scene.
• For example, the second threshold may be 15MB, 16MB, or the like. Further, when the second threshold is 15MB and the second real bandwidth is 12MB, the memory management apparatus determines that the second real bandwidth is less than the second threshold, and migrates the data corresponding to the memory access from the second memory to the first memory. It should be understood that the foregoing examples are only used for understanding this solution, and should not be understood as a limitation of this embodiment.
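The mirror check of step S306 can be sketched the same way, with the example second threshold of 15MB; names are illustrative.

```python
# Migrate back from the second memory to the first memory once the measured
# real bandwidth falls below the second threshold, freeing high-bandwidth
# resources. The threshold is the example value from the text.

SECOND_THRESHOLD = 15  # MB

def should_migrate_down(second_real_bandwidth: float) -> bool:
    return second_real_bandwidth < SECOND_THRESHOLD

print(should_migrate_down(12.0))  # True: migrate back to the first memory
print(should_migrate_down(20.0))  # False: keep using the second memory
```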
• Further, a kernel thread is created, and the thread continuously listens for changes in the scene. If the scene changes, that is, when the next memory access of the target service is started, memory needs to be re-allocated for the next memory access. Specifically, if data migration is not performed for the memory access, the memory management manner of the next memory access is similar to that in FIG. 4 to FIG. 6, and details are not repeated here.
• If data migration is performed, the memory allocated for the next memory access is based on the memory allocation after the data migration of the memory access.
• In the embodiment shown in FIG. 5, for example, when the first real bandwidth is greater than the first threshold, the memory management apparatus migrates the data corresponding to the memory access from the first memory to the second memory, so the memory allocated for the next memory access must be the second memory. Likewise, in the embodiment shown in FIG. 6, when the second real bandwidth is less than the second threshold, the memory management apparatus migrates the data corresponding to the memory access from the second memory to the first memory, so the memory allocated for the next memory access must be the first memory.
  • the memory management method corresponding to the next memory access after data migration is described in detail below.
  • FIG. 7 is a schematic diagram of another embodiment of a memory management method in an embodiment of the present application.
  • the memory management method includes the following steps. S401. Obtain a service request of a target service.
  • the manner in which the memory management apparatus first obtains the service request of the target service is similar to step S101, and details are not described herein again.
• S402. Determine a target memory identifier corresponding to one memory access.
  • the manner in which the memory management apparatus determines the target memory identifier corresponding to a memory access is similar to step S102, and details are not described herein again.
• Step S403. Determine a scene corresponding to one memory access according to the function calling relationship corresponding to one memory access.
  • the manner in which the memory management device needs to determine the scene corresponding to one memory access according to the function call relationship corresponding to one memory access is similar to step S103, and details are not repeated here.
• There is no necessary sequence between step S402 and step S403; step S402 may be performed first, step S403 may be performed first, or steps S402 and S403 may be performed simultaneously, which is not specifically limited here.
• Step S404. When the target memory identifier is a low-bandwidth memory identifier or the scene corresponding to the memory access is a low-bandwidth scene, allocate the first memory for the memory access; the method by which the memory management apparatus allocates the first memory is similar to step S104 and is not repeated here.
  • step S405. Detect the first real bandwidth of one memory access.
  • the manner in which the memory management apparatus detects the first real bandwidth of a memory access is similar to step S205, and details are not described herein again.
• Step S406. Migrate the data corresponding to one memory access from the first memory to the second memory.
  • the manner in which the memory management apparatus migrates the data corresponding to one memory access from the first memory to the second memory is similar to step S206, which is not repeated here.
• Obtain the next service request of the target service, where the next service request is used to indicate the next memory access in the target service.
  • the memory management apparatus first obtains the next service request of the target service, and the next service request is used to indicate the next memory access in the target service.
  • the target business includes application layer business.
  • the next service request is similar to the service request, and the manner of acquiring the next service request of the target service is similar to step S101, and details are not repeated here.
• Determine the next memory identifier corresponding to the next memory access.
  • the memory management apparatus further determines the next memory identifier corresponding to the next memory access, and the next memory identifier may be a high-bandwidth memory identifier or a low-bandwidth memory identifier .
  • the next memory identifier is similar to the target memory identifier, and the manner of determining the next memory identifier corresponding to the next memory access is similar to step S102, and details are not repeated here.
• When the next memory identifier is a low-bandwidth memory identifier or the scenario corresponding to the next memory access is a low-bandwidth scenario, allocate the second memory for the next memory access; the memory management apparatus may also determine the scene corresponding to the next memory access in a manner similar to step S103.
• Since data migration has been performed for the memory access, the memory allocated for the next memory access is based on the memory allocation after the data migration.
  • the memory management apparatus allocates the second memory for the next memory access. It can be considered that this embodiment ignores the judgment of the memory identifier and the scene, and for the same memory identifier and scene as the previous time, in order to prevent a wrong judgment again, the migrated second memory is directly used.
• Specifically, when the target memory identifier is a low-bandwidth memory identifier and the scene corresponding to the memory access is a low-bandwidth scene, since the data of the memory access has been migrated to the second memory, the memory management apparatus allocates the second memory for the next memory access. Likewise, when the target memory identifier is a low-bandwidth memory identifier and the scene corresponding to the memory access is a high-bandwidth scene, since the data of the memory access has been migrated to the second memory, the memory management apparatus allocates the second memory for the next memory access.
• Similarly, when the target memory identifier is a high-bandwidth memory identifier and the scene corresponding to the memory access is a low-bandwidth scene, the memory management apparatus also allocates the second memory for the next memory access.
• When the next memory identifier is a high-bandwidth memory identifier and the scene corresponding to the next memory access is a high-bandwidth scene, the memory management apparatus allocates the second memory for the next memory access in a manner similar to step S105, which is not repeated here.
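The post-migration rule for the next memory access can be pictured as follows; the function and parameter names are hypothetical, and the fallback branch restates the normal rule of steps S104/S105.

```python
# After a data migration, the next memory access follows the migration result
# instead of re-judging the identifier and scene, to avoid repeating a wrong
# judgment.

def allocate_next(migrated_to, identifier, scene):
    if migrated_to is not None:
        return migrated_to  # follow the earlier data migration
    # normal rule: second memory only when both judgments are high-bandwidth
    return "second" if identifier == "high" and scene == "high" else "first"

# Even a low identifier and low scene get the second memory after an upward migration:
print(allocate_next("second", "low", "low"))  # -> second
print(allocate_next(None, "low", "high"))     # -> first
```

The same function covers the embodiment of FIG. 8: after a downward migration, `allocate_next("first", "high", "high")` yields the first memory.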
• Further, if needed, the data corresponding to the next memory access can be migrated from the second memory to the first memory in a manner similar to step S306, which is not repeated here.
  • FIG. 8 is a schematic diagram of another embodiment of a memory management method according to an embodiment of the present application.
  • the memory management method includes the following steps. S501. Acquire a service request of a target service.
  • the manner in which the memory management apparatus first obtains the service request of the target service is similar to step S101, and details are not described herein again.
• S502. Determine a target memory identifier corresponding to one memory access.
  • the manner in which the memory management apparatus determines the target memory identifier corresponding to a memory access is similar to step S102, and details are not described herein again.
• Step S503. Determine a scene corresponding to one memory access according to the function calling relationship corresponding to one memory access.
  • the manner in which the memory management apparatus needs to determine a scene corresponding to a memory access according to a function calling relationship corresponding to a memory access is similar to step S103, and details are not repeated here.
• There is no necessary sequence between step S502 and step S503; step S502 may be performed first, step S503 may be performed first, or steps S502 and S503 may be performed simultaneously, which is not specifically limited here.
• Step S504. When the target memory identifier is a high-bandwidth memory identifier and the scene corresponding to the memory access is a high-bandwidth scene, allocate the second memory for the memory access.
• The manner in which the memory management apparatus allocates the second memory for the memory access is similar to step S105 and is not repeated here.
  • step S505. Detect the second real bandwidth of one memory access.
  • the manner in which the memory management apparatus detects the second real bandwidth of one memory access is similar to step S305, and details are not described herein again.
• Step S506. Migrate the data corresponding to one memory access from the second memory to the first memory.
  • the manner in which the memory management apparatus migrates the data corresponding to one memory access from the second memory to the first memory is similar to step S306, which is not repeated here.
• Obtain the next service request of the target service, where the next service request is used to indicate the next memory access in the target service.
  • the memory management apparatus first obtains the next service request of the target service, and the next service request is used to indicate the next memory access in the target service.
  • the target business includes application layer business.
  • the next service request is similar to the service request, and the manner of acquiring the next service request of the target service is similar to step S101, and details are not repeated here.
• Determine the next memory identifier corresponding to the next memory access.
  • the memory management apparatus further determines the next memory identifier corresponding to the next memory access, and the next memory identifier may be a high-bandwidth memory identifier or a low-bandwidth memory identifier .
  • the next memory identifier is similar to the target memory identifier, and the manner of determining the next memory identifier corresponding to the next memory access is similar to step S102, and details are not repeated here.
• When the next memory identifier is a high-bandwidth memory identifier and the scenario corresponding to the next memory access is a high-bandwidth scenario, allocate the first memory for the next memory access.
  • the memory management apparatus may also determine a scene corresponding to the next memory access in a manner similar to step S103.
• Since data migration has been performed for the memory access, the memory allocated for the next memory access is based on the memory allocation after the data migration.
  • the memory management apparatus allocates the first memory for the next memory access. It can be considered that this embodiment ignores the judgment of the memory identifier and the scene, and for the same memory identifier and scene as the previous time, in order to prevent a wrong judgment again, the migrated first memory is directly used.
• Specifically, when the target memory identifier is a high-bandwidth memory identifier and the scene corresponding to the memory access is a high-bandwidth scene, since the data of the memory access has been migrated to the first memory, the memory management apparatus allocates the first memory for the next memory access. Likewise, when the target memory identifier is a low-bandwidth memory identifier and the scene corresponding to the memory access is a high-bandwidth scene, or when the target memory identifier is a high-bandwidth memory identifier and the scene corresponding to the memory access is a low-bandwidth scene, the memory management apparatus allocates the first memory for the next memory access.
• Further, if needed, the data corresponding to the next memory access can be migrated from the first memory to the second memory in a manner similar to step S206, which is not repeated here.
  • the memory management apparatus includes corresponding hardware structures and/or software modules for executing each function.
  • the present application can be implemented in hardware or in the form of a combination of hardware and computer software. Whether a function is performed by hardware or computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each particular application, but such implementations should not be considered beyond the scope of this application.
  • the embodiments of the present application may divide the memory management apparatus into functional modules based on the foregoing method examples.
  • each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, and can also be implemented in the form of software function modules. It should be noted that, the division of modules in the embodiments of the present application is schematic, and is only a logical function division, and there may be other division manners in actual implementation.
  • FIG. 9 is a schematic diagram of an embodiment of the memory management device in the embodiment of the present application.
• As shown in FIG. 9, the memory management apparatus 900 includes an acquisition module 901, a determination module 902, and an allocation module 903.
• The obtaining module 901 is used to obtain the service request of the target service, where the service request is used to indicate a memory access in the target service; the determining module 902 is used to determine the target memory identifier corresponding to the memory access, where the target memory identifier is a high-bandwidth memory identifier or a low-bandwidth memory identifier; the allocation module 903 is used to allocate the first memory for the memory access when the target memory identifier is a low-bandwidth memory identifier or the scene corresponding to the memory access is a low-bandwidth scene; the allocation module 903 is further configured to allocate the second memory for the memory access when the target memory identifier is a high-bandwidth memory identifier and the scene corresponding to the memory access is a high-bandwidth scene, where the second maximum bandwidth of the second memory is greater than the first maximum bandwidth of the first memory.
  • the determining module 902 is further configured to determine a scene corresponding to one memory access according to the function call relationship corresponding to one memory access.
  • the memory management apparatus 900 further includes a detection module 904 and a migration module 905 .
  • the detection module 904 is configured to detect a first real bandwidth of the memory access after the allocation module 903 allocates the first memory for the memory access; the migration module 905 is configured to migrate the data corresponding to the memory access from the first memory to the second memory when the first real bandwidth is greater than a first threshold, where the first threshold is less than or equal to the first maximum bandwidth.
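The detect-then-migrate rule can be illustrated as follows. This is a hypothetical sketch: names and units are ours, and the text only fixes the comparison against a first threshold that must not exceed the first maximum bandwidth:

```python
def should_migrate_up(first_real_bw: float, first_threshold: float,
                      first_max_bw: float) -> bool:
    """True when data must move from the first to the second memory."""
    # the first threshold is constrained to at most the first maximum bandwidth
    if first_threshold > first_max_bw:
        raise ValueError("threshold must not exceed the first maximum bandwidth")
    return first_real_bw > first_threshold
```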
  • the obtaining module 901 is further configured to obtain a next service request of the target service, where the next service request is used to indicate the next memory access in the target service; the determining module 902 is further configured to determine a next memory identifier corresponding to the next memory access; the allocation module 903 is further configured to allocate the second memory for the next memory access when the next memory identifier is a low-bandwidth memory identifier or the scene corresponding to the next memory access is a low-bandwidth scene.
  • the detection module 904 is further configured to detect a second real bandwidth of the memory access after the allocation module allocates the second memory for the memory access; the migration module 905 is further configured to migrate the data corresponding to the memory access from the second memory to the first memory when the second real bandwidth is less than a second threshold, where the second threshold is less than or equal to the first maximum bandwidth.
  • the obtaining module 901 is further configured to obtain a next service request of the target service, where the next service request is used to indicate the next memory access in the target service; the determining module 902 is further configured to determine a next memory identifier corresponding to the next memory access; the allocation module 903 is further configured to allocate the first memory for the next memory access when the next memory identifier is a high-bandwidth memory identifier and the scene corresponding to the next memory access is a high-bandwidth scene.
  • the target service includes an application-layer service.
  • the acquisition module 901, the determination module 902, the allocation module 903, the detection module 904, and the migration module 905 in the memory management apparatus 900 may be implemented by at least one processor; for example, they may correspond to the processor 3010 in the terminal device 3000 shown in FIG. 10.
  • the acquisition module 901, the determination module 902, the allocation module 903, the detection module 904, and the migration module 905 in the memory management apparatus 900 may be implemented by a processor, microprocessor, or integrated circuit integrated on a chip or chip system.
  • the method corresponding to the acquisition module 901 shown in FIG. 9 can be executed by the scene and behavior management module shown in FIG. 1 and FIG. 3 .
  • the method corresponding to the determination module 902 can be jointly executed by the scene slice management module and the application function call recording module shown in FIG. 1 and FIG. 3.
  • the methods corresponding to the allocation module 903 and the migration module 905 shown in FIG. 9 can be executed by the first memory allocation area and the second memory allocation area in the memory management subsystem shown in FIG. 1 and FIG. 3.
  • the method corresponding to the detection module 904 shown in FIG. 9 can be executed by the software bandwidth detection module shown in FIG. 1 and FIG. 3 .
  • FIG. 10 is a schematic structural diagram of a terminal device 3000 provided by an embodiment of the present application.
  • the terminal device 3000 can be applied to the system shown in FIG. 1 .
  • the terminal device 3000 includes a processor 3010 and a transceiver 3020 .
  • the terminal device 3000 further includes a memory 3030 .
  • the processor 3010, the transceiver 3020 and the memory 3030 can communicate with each other through an internal connection path to transmit control and/or data signals.
  • the processor 3010 invokes and executes the computer program to control the transceiver 3020 to send and receive signals.
  • the terminal device 3000 may further include an antenna 3040 for sending the uplink data or uplink control signaling output by the transceiver 3020 through wireless signals.
  • the memory 3030 may include read-only memory and random access memory and provide instructions and data to the processor 3010.
  • a portion of the memory may also include non-volatile random access memory.
  • the memory 3030 may be a separate device, or may be integrated in the processor 3010.
  • the processor 3010 may be configured to execute the instructions stored in the memory 3030, and when the processor 3010 executes the instructions stored in the memory, the processor 3010 is configured to execute the steps and/or processes of the above method embodiments corresponding to the memory management apparatus.
  • the processor 3010 may correspond to the acquisition module 901, the determination module 902, the allocation module 903, the detection module 904 and the migration module 905 in FIG. 9 .
  • the terminal device 3000 is the memory management apparatus in the foregoing method embodiments, that is, it may correspond to the memory management apparatus in the foregoing method embodiments and may be used to execute the steps and/or processes performed by the memory management apparatus in the foregoing method embodiments.
  • the terminal device 3000 shown in FIG. 10 can implement various processes related to the memory management apparatus in the method embodiments shown in FIG. 4 , FIG. 5 , FIG. 6 , FIG. 7 and FIG. 8 .
  • the operations and/or functions of each module in the terminal device 3000 are respectively to implement the corresponding processes in the foregoing method embodiments.
  • the transceiver 3020 may include a transmitter and a receiver.
  • the transceiver 3020 may further include antennas, and the number of the antennas may be one or more.
  • the processor 3010, the memory 3030 and the transceiver 3020 may be devices integrated on different chips.
  • the processor 3010 and the memory 3030 may be integrated in the baseband chip, and the transceiver 3020 may be integrated in the radio frequency chip.
  • the processor 3010, the memory 3030 and the transceiver 3020 may also be devices integrated on the same chip. This application does not limit this.
  • the above-mentioned processor 3010 may be configured to perform the actions described in the foregoing method embodiments that are implemented internally by the memory management apparatus. For details, please refer to the descriptions in the foregoing method embodiments, which will not be repeated here.
  • the above-mentioned terminal device 3000 may further include a power supply 3050 for providing power to various devices or circuits in the terminal device.
  • the terminal device 3000 may further include one or more of an input unit 3060, a display unit 3070, an audio circuit 3080, a camera 3090, a sensor 3100, and the like; the audio circuit may further include a speaker 3082, a microphone 3084, and the like.
  • the present application also provides a memory management apparatus, including at least one processor, where the at least one processor is configured to execute a computer program stored in a memory, so that the memory management apparatus executes the method performed by the terminal device or the network device in any of the foregoing method embodiments.
  • the above-mentioned memory management device may be one or more chips.
  • the memory management apparatus may be a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processing circuit (DSP), a micro control unit (MCU), a programmable logic device (PLD), or another integrated chip.
  • Embodiments of the present application further provide a memory management device, including a processor and a communication interface.
  • the communication interface is coupled with the processor.
  • the communication interface is used to input and/or output information.
  • the information includes at least one of instructions and data.
  • the processor is configured to execute a computer program, so that the memory management apparatus executes the method executed by the memory management apparatus in any of the above method embodiments.
  • Embodiments of the present application further provide a memory management device, including a processor and a memory.
  • the memory is used to store a computer program
  • the processor is used to call and run the computer program from the memory, so that the memory management apparatus executes the method performed by the memory management apparatus in any of the foregoing method embodiments.
  • each step of the above-mentioned method can be completed by a hardware integrated logic circuit in a processor or an instruction in the form of software.
  • the steps of the methods disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
  • the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware. To avoid repetition, detailed description is omitted here.
  • the processor in this embodiment of the present application may be an integrated circuit chip, which has a signal processing capability.
  • each step of the above method embodiments may be completed by a hardware integrated logic circuit in a processor or an instruction in the form of software.
  • the aforementioned processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • such a processor can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this application.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware.
  • the memory in this embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
  • Volatile memory may be random access memory (RAM), which acts as an external cache.
  • by way of example and not limitation, many forms of RAM are available, such as dynamic random access memory (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
  • the present application also provides a computer program product, including computer program code; when the computer program code is run on a computer, the computer is caused to execute the methods shown in FIG. 4 to FIG. 8.
  • the present application also provides a computer-readable storage medium storing program code; when the program code is run on a computer, the computer is caused to execute the methods shown in FIG. 4 to FIG. 8.
  • the memory management apparatus in each of the above apparatus embodiments corresponds exactly to the memory management apparatus in the method embodiments, and the corresponding steps are performed by the corresponding modules or units; for example, a communication unit (transceiver) performs the receiving or sending steps in the method embodiments, and steps other than sending and receiving may be performed by a processing unit (processor). For the functions of specific units, reference may be made to the corresponding method embodiments.
  • the number of processors may be one or more.
  • a component may be, but is not limited to, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer.
  • an application running on a computing device and the computing device may be components.
  • One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between 2 or more computers.
  • these components can execute from various computer readable media having various data structures stored thereon.
  • a component may communicate by way of local and/or remote processes, for example according to a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, in a distributed system, and/or across a network such as the Internet interacting with other systems by way of the signal).
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.


Abstract

A memory management method and related apparatus, relating to the technical field of computer storage, for meeting the different bandwidth requirements corresponding to different memory accesses. The method includes: first obtaining a service request of a target service, the service request being used to indicate one memory access in the target service (S101); then determining a target memory identifier corresponding to the memory access, where the target memory identifier may be a high-bandwidth memory identifier or a low-bandwidth memory identifier (S102); when the target memory identifier is a low-bandwidth memory identifier or the scene corresponding to the memory access is a low-bandwidth scene, allocating a first memory for the memory access (S104); and when the target memory identifier is a high-bandwidth memory identifier and the scene corresponding to the memory access is a high-bandwidth scene, allocating a second memory for the memory access, a second maximum bandwidth of the second memory being greater than a first maximum bandwidth of the first memory (S105).

Description

A memory management method and related apparatus. Technical Field
The embodiments of the present application relate to the technical field of computer storage, and in particular to a memory management method and related apparatus.
Background
As the processing capability of mobile chips keeps growing, the performance (bandwidth) improvement of memory cannot keep pace with the demand that chip computing power places on memory; in addition, energy consumption remains a key metric for terminal devices. It is therefore imperative to find a solution that meets both the current chip computing power's demand for memory bandwidth and the terminal device's demand for low power consumption.
In most scenarios, current terminal devices do not require much bandwidth but rather aim to complete service functions at low power consumption. Therefore, an ordinary memory such as double data rate (DDR) synchronous dynamic random access memory (SDRAM) can currently be divided into multiple gears by frequency, with the traffic thresholds of the different gears set in corresponding registers; when the actual traffic exceeds or falls below a certain threshold, the chip notifies software via an interrupt to adjust the frequency appropriately.
However, DDR memory has poor energy efficiency at low frequencies, which increases the power consumption of the memory and reduces its energy efficiency; furthermore, the bandwidth capability of DDR memory is limited and cannot well meet memory bandwidth requirements.
Summary
The embodiments of the present application provide a memory management method and related apparatus for meeting the different bandwidth requirements corresponding to different memory accesses.
A first aspect of the embodiments of the present application provides a memory management method. The method may be executed by a terminal device, or by a chip configured in a terminal device; this application does not limit this. The method includes: first obtaining a service request of a target service, the service request being used to indicate one memory access in the target service; then determining a target memory identifier corresponding to the memory access, where the target memory identifier may be a high-bandwidth memory identifier or a low-bandwidth memory identifier; when the target memory identifier is a low-bandwidth memory identifier or the scene corresponding to the memory access is a low-bandwidth scene, allocating a first memory for the memory access; and when the target memory identifier is a high-bandwidth memory identifier and the scene corresponding to the memory access is a high-bandwidth scene, allocating a second memory for the memory access, where a second maximum bandwidth of the second memory is greater than a first maximum bandwidth of the first memory.
In this implementation, since the target memory identifier corresponding to a memory access can be determined, and different memories are allocated when the memory identifier indicates a different bandwidth or the scene corresponding to the memory access is a scene of a different bandwidth, the different bandwidth requirements corresponding to different memory accesses can be met.
With reference to the first aspect of the embodiments of the present application, in a first implementation of the first aspect, the scene corresponding to the memory access is determined according to the function call relationship corresponding to the memory access.
In this implementation, determining the scene through the function call relationship is specifically defined, which improves the reliability of the determined scene and thereby the reliability of this solution.
With reference to either the first aspect of the embodiments of the present application or the first implementation of the first aspect, in a second implementation of the first aspect, after the first memory is allocated for the memory access, a first real bandwidth of the memory access may further be detected; when the first real bandwidth is greater than a first threshold, the data corresponding to the memory access is migrated from the first memory to the second memory, where the first threshold is less than or equal to the first maximum bandwidth. The first real bandwidth is specifically detected by time-sliced statistics.
In this implementation, detecting the first real bandwidth by time-sliced statistics can improve its accuracy. Moreover, when the first real bandwidth is not within the bandwidth range corresponding to the first memory, or is already within the bandwidth range corresponding to the second memory, the bandwidth provided by the first memory can no longer, or soon will not, meet the bandwidth requirement of the memory access; migrating the data corresponding to the memory access from the first memory to the second memory therefore guarantees the quality of service of the memory access of the target service.
With reference to the second implementation of the first aspect of the embodiments of the present application, in a third implementation of the first aspect, a next service request of the target service may further be obtained, the next service request being used to indicate the next memory access in the target service; a next memory identifier corresponding to the next memory access is then determined, and when the next memory identifier is a low-bandwidth memory identifier or the scene corresponding to the next memory access is a low-bandwidth scene, the second memory is allocated for the next memory access.
In this implementation, since the data was migrated from the first memory to the second memory during the memory access, i.e., a memory allocation error occurred, the memory allocated for the next memory access follows the second memory into which the data was migrated; that is, regardless of the identifier and the scene, the second memory is allocated. This reduces the possibility of memory allocation errors, improves the accuracy of memory allocation, and thereby improves the reliability of memory management.
With reference to any one of the first aspect to the third implementation of the first aspect of the embodiments of the present application, in a fourth implementation of the first aspect, after the second memory is allocated for the memory access, a second real bandwidth of the memory access may further be detected; when the second real bandwidth is less than a second threshold, the data corresponding to the memory access is migrated from the second memory to the first memory, where the second threshold is less than or equal to the first maximum bandwidth. The second real bandwidth is specifically detected by time-sliced statistics.
In this implementation, detecting the second real bandwidth by time-sliced statistics can improve its accuracy. Moreover, when the second real bandwidth is less than the second threshold, the bandwidth provided by the first memory can meet the bandwidth requirement of the memory access, so the memory management apparatus migrates the data corresponding to the memory access from the second memory to the first memory, thereby saving the bandwidth resources of the second memory.
With reference to the fourth implementation of the first aspect of the embodiments of the present application, in a fifth implementation of the first aspect, a next service request of the target service may further be obtained, the next service request being used to indicate the next memory access in the target service; a next memory identifier corresponding to the next memory access is then determined, and when the next memory identifier is a high-bandwidth memory identifier and the scene corresponding to the next memory access is a high-bandwidth scene, the first memory is allocated for the next memory access.
In this implementation, since the data was migrated from the second memory to the first memory during the memory access, i.e., a memory allocation error occurred, the memory allocated for the next memory access follows the first memory into which the data was migrated; that is, regardless of the identifier and the scene, the first memory is allocated. This reduces the possibility of memory allocation errors, improves the accuracy of memory allocation, and thereby improves the reliability of memory management.
With reference to any one of the first aspect to the fifth implementation of the first aspect of the embodiments of the present application, in a sixth implementation of the first aspect, the target service includes an application-layer service.
In this implementation, the service type of the target service is specifically defined, improving the feasibility of this solution.
A second aspect provides a memory management apparatus. The memory management apparatus has the function of implementing some or all of the first aspect and any of its possible implementations. For example, the functions of the apparatus may cover the functions of some or all of the embodiments of this application, or may implement the function of any single embodiment of this application alone. The functions may be implemented in hardware, or in hardware executing corresponding software. The hardware or software includes one or more units or modules corresponding to the above functions.
In a possible design, the structure of the memory management apparatus may include an obtaining module, a determining module, and an allocation module, which are configured to support the memory management apparatus in executing the corresponding functions of the above method. The memory management apparatus may further include a storage module, coupled to the obtaining module, the determining module, and the allocation module, which stores the program instructions and data necessary for the memory management apparatus.
In one implementation, the memory management apparatus includes: an obtaining module, configured to obtain a service request of a target service, where the service request is used to indicate one memory access in the target service; a determining module, configured to determine a target memory identifier corresponding to the memory access, where the target memory identifier is a high-bandwidth memory identifier or a low-bandwidth memory identifier; and an allocation module, configured to allocate a first memory for the memory access when the target memory identifier is a low-bandwidth memory identifier or the scene corresponding to the memory access is a low-bandwidth scene, and further configured to allocate a second memory for the memory access when the target memory identifier is a high-bandwidth memory identifier and the scene corresponding to the memory access is a high-bandwidth scene, where a second maximum bandwidth of the second memory is greater than a first maximum bandwidth of the first memory.
In a possible implementation, the determining module is further configured to determine the scene corresponding to the memory access according to the function call relationship corresponding to the memory access.
In a possible implementation, the memory management apparatus further includes: a detection module, configured to detect a first real bandwidth of the memory access after the allocation module allocates the first memory for the memory access; and a migration module, configured to migrate the data corresponding to the memory access from the first memory to the second memory when the first real bandwidth is greater than a first threshold, where the first threshold is less than or equal to the first maximum bandwidth.
In a possible implementation, the obtaining module is further configured to obtain a next service request of the target service, where the next service request is used to indicate the next memory access in the target service; the determining module is further configured to determine a next memory identifier corresponding to the next memory access; and the allocation module is further configured to allocate the second memory for the next memory access when the next memory identifier is a low-bandwidth memory identifier or the scene corresponding to the next memory access is a low-bandwidth scene.
In a possible implementation, the memory management apparatus further includes: the detection module, further configured to detect a second real bandwidth of the memory access after the allocation module allocates the second memory for the memory access; and the migration module, further configured to migrate the data corresponding to the memory access from the second memory to the first memory when the second real bandwidth is less than a second threshold, where the second threshold is less than or equal to the first maximum bandwidth.
In a possible implementation, the obtaining module is further configured to obtain a next service request of the target service, where the next service request is used to indicate the next memory access in the target service; the determining module is further configured to determine a next memory identifier corresponding to the next memory access; and the allocation module is further configured to allocate the first memory for the next memory access when the next memory identifier is a high-bandwidth memory identifier and the scene corresponding to the next memory access is a high-bandwidth scene.
In a possible implementation, the target service includes an application-layer service.
As an example, the obtaining module, the determining module, and the allocation module may be a processor or processing unit, and the storage module may be a memory or storage unit.
A third aspect provides a memory management apparatus including a processor. The processor is coupled to a memory and may be used to execute the instructions in the memory to implement the method in any possible implementation of the first aspect. Optionally, the memory management apparatus further includes the memory. Optionally, the memory management apparatus further includes a communication interface coupled to the processor, where the communication interface is used to input and/or output information, the information including at least one of instructions and data.
In one implementation, the memory management apparatus is a terminal device. When the memory management apparatus is a terminal device, the communication interface may be a transceiver or an input/output interface.
Optionally, the transceiver may be a transceiver circuit. Optionally, the input/output interface may be an input/output circuit.
In another implementation, the memory management apparatus is a chip or chip system configured in a terminal device. When the memory management apparatus is a chip or chip system configured in a terminal device, the communication interface may be an input/output interface, an interface circuit, an output circuit, an input circuit, a pin, a related circuit, or the like. The processor may also be embodied as a processing circuit or a logic circuit.
In the implementation process, the processor may be used to perform, for example but not limited to, baseband-related processing, and the transceiver may be used to perform, for example but not limited to, radio-frequency transceiving. The above devices may be disposed on chips independent of each other, or at least partially or entirely disposed on the same chip. For example, the processor may be further divided into an analog baseband processor and a digital baseband processor, where the analog baseband processor may be integrated with the transceiver on the same chip and the digital baseband processor may be disposed on an independent chip. With the continuous development of integrated circuit technology, more and more devices can be integrated on the same chip; for example, the digital baseband processor may be integrated with a variety of application processors (for example but not limited to a graphics processor and a multimedia processor) on the same chip. Such a chip may be called a system on chip (System on Chip). Whether the devices are independently disposed on different chips or integrated on one or more chips often depends on the needs of product design. The embodiments of this application do not limit the implementation forms of the above devices.
It should be noted that the beneficial effects brought by the implementations of the second and third aspects of this application can be understood with reference to the implementations of the first aspect and are therefore not repeated.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the system framework of the memory management system in an embodiment of the present application;
FIG. 2 is a schematic structural diagram of the overall chip in an embodiment of the present application;
FIG. 3 is a schematic diagram of an embodiment of generating a bandwidth mapping file in an embodiment of the present application;
FIG. 4 is a schematic diagram of an embodiment of the memory management method in an embodiment of the present application;
FIG. 5 is a schematic diagram of another embodiment of the memory management method in an embodiment of the present application;
FIG. 6 is a schematic diagram of another embodiment of the memory management method in an embodiment of the present application;
FIG. 7 is a schematic diagram of another embodiment of the memory management method in an embodiment of the present application;
FIG. 8 is a schematic diagram of another embodiment of the memory management method in an embodiment of the present application;
FIG. 9 is a schematic diagram of an embodiment of the memory management apparatus in an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. To facilitate understanding of the embodiments of the present application, the following points are explained.
First, in this application, for ease of description, in the embodiments shown below, technical features of one kind are distinguished by "first", "second", "third", and so on; the technical features described by "first", "second", and "third" imply no order of precedence or magnitude.
Second, "at least one" means one or more, and "multiple" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects before and after it. "At least one of the following" or similar expressions refer to any combination of these items, including any combination of single or plural items. For example, at least one of a, b, and c may indicate: a; b; c; a and b; a and c; b and c; or a, b, and c, where each of a, b, and c may be single or multiple.
Third, the embodiments disclosed in this application present the various aspects, embodiments, or features of this application around a system including multiple devices, components, modules, and the like. It should be understood that each system may include additional devices, components, modules, etc., and/or may not include all the devices, components, modules, etc. discussed in connection with the drawings. In addition, combinations of these solutions may also be used.
Fourth, in the embodiments disclosed in this application, "of", "relevant", and "corresponding" may sometimes be used interchangeably; it should be noted that when their distinction is not emphasized, the meanings they are intended to express are the same.
To better understand the memory management method and related apparatus disclosed in the embodiments of this application, the system framework used by the embodiments is first described. FIG. 1 is a schematic diagram of the system framework of the memory management system in an embodiment of the present application. As shown in FIG. 1, the memory management system includes a software part and a chip part. For the chip part, the first memory controller with its corresponding interface and the second memory controller with its corresponding interface independently control the first memory die and the second memory die, respectively. In addition, a chip bandwidth detection module is added to continuously detect the running memory of the chip within a preset time period (one statistical unit), a memory address mapping module is added to perform unified addressing and mapping for the first memory die and the second memory die, and corresponding channels for transferring data to the first memory die and the second memory die are added. Specifically, the first memory controller transfers data to the first memory die through 2 channels and runs in a high frequency band, which allows the frequency to be raised to meet bandwidth requirements; that is, the first memory controller can improve energy efficiency, i.e., the amount of data transferred per unit time. The second memory controller transfers data to the second memory die through 4 channels, each with a bit width of 64 bits; since the channel bit width is increased, the bandwidth capability of the memory die can be improved. The maximum value of the bandwidth range corresponding to the first memory is less than the maximum value of the bandwidth range corresponding to the second memory. The memory in this embodiment is also called storage.
For the software part, different modules execute the memory management method introduced in this solution. First, the application-layer service needs to be run offline to assign a corresponding scene to each memory access in the application-layer service, i.e., to generate a bandwidth mapping file. Then, after a service request is received, different memories are allocated for a memory access under different circumstances; specifically, in the memory management subsystem, the access is allocated to the corresponding memory controller through the first memory allocation area or the second memory allocation area, thereby meeting the different bandwidth requirements corresponding to each memory access of the service.
To further understand the chip part of the memory management system, refer to FIG. 2, which is a schematic structural diagram of the overall chip in an embodiment of the present application. As shown in FIG. 2, the memory address mapping module includes an address mapping control module, and the memory map in the address mapping control module uses fixed mapping. The chip bandwidth detection module can be configured with a start address for bandwidth detection, which is actually a page frame number, and the chip bandwidth detection module then determines the real bandwidth; in this embodiment the chip bandwidth detection module is a register, which should not be construed as a limitation of this application. Specifically, the chip bandwidth detection module collects the real bandwidth of different memory segments at a preset memory granularity, using time-sliced statistics; for example, with 10 milliseconds (ms) as one statistical unit, the real bandwidth within different memory segments is detected every 10 ms. Concretely, the chip bandwidth detection module updates the access count within one statistical unit and determines the bandwidth corresponding to the last update of the access count within that statistical unit as the real bandwidth; the aforementioned preset memory granularity is preconfigured. Furthermore, a direct memory access (DMA) engine copies memory data when the chip bandwidth detection module detects a change in the real bandwidth of the memory that requires memory data to be migrated. Specifically, in the memory address mapping module, the address range of the first memory is "0x0" to "0xFEFFFFFFFF", while the address range of the second memory is "0xFF00000000" to "0xFFFFFFFFFF". If a memory access of a service request falls within the address range "0x0" to "0xFEFFFFFFFF", the first memory controller allocates that memory access to the first memory; similarly, if a memory access of a service request falls within the address range "0xFF00000000" to "0xFFFFFFFFFF", the second memory controller allocates that memory access to the second memory.
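The fixed address map described above can be sketched as a simple routing check. The two address ranges come from the text; everything else (names, the error case) is an illustrative assumption:

```python
# Unified addressing: the first memory occupies 0x0-0xFEFFFFFFFF and the
# second memory occupies 0xFF00000000-0xFFFFFFFFFF; an access is routed to
# the controller whose range contains its address.

FIRST_MEM = (0x0, 0xFEFFFFFFFF)
SECOND_MEM = (0xFF00000000, 0xFFFFFFFFFF)

def route_access(addr: int) -> str:
    if FIRST_MEM[0] <= addr <= FIRST_MEM[1]:
        return "first_memory_controller"
    if SECOND_MEM[0] <= addr <= SECOND_MEM[1]:
        return "second_memory_controller"
    raise ValueError("address outside the unified memory map")
```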
To further understand how the software part of the memory management system generates the bandwidth mapping file, refer to FIG. 3, a schematic diagram of an embodiment of generating a bandwidth mapping file in an embodiment of the present application. As shown in FIG. 3, the service first needs to be run offline. The online/offline data analysis and identification tool includes a software bandwidth detection module and a scene slice management module. The software bandwidth detection module obtains the real bandwidth of different memory segments in a manner similar to the aforementioned chip bandwidth detection module: both collect the real bandwidth of different memory segments at a preset memory granularity using time-sliced statistics, which is not repeated here. The scene slice management module then identifies the scene. A scene may be explicit, such as startup or foreground/background switching, but there may be ambiguous scenes that require online identification. An ambiguous scene can be determined based on the application behavior management module included in the behavior management module, through the application allocation behavior obtained from the application allocation model and through the function call relationship obtained by the application function call recording module; the scene slice management module can then take the scene so determined as input to the online/offline data analysis and identification tool. The tool can thus derive an initial bandwidth mapping file from the scenes provided by the scene slice management module and the real bandwidth obtained by the software bandwidth detection module. The initial bandwidth mapping file includes the bandwidth corresponding to each scene: if the bandwidth falls within the bandwidth range corresponding to the first memory, the scene is a low-bandwidth scene corresponding to the first memory; similarly, if the bandwidth falls within the bandwidth range corresponding to the second memory, the scene is a high-bandwidth scene corresponding to the second memory. Further, when running the service, the user behavior management module included in the scene and behavior management module can obtain user habits through the user behavior recording model, and the software bandwidth detection module, via the software bandwidth detection driver, obtains the real bandwidth of different memory segments to adjust the corresponding bandwidth requirement of each scene in real time and update the mapping relationships in the bandwidth mapping file, thereby improving the accuracy of the bandwidth mapping file.
Illustratively, the bandwidth mapping file shown in FIG. 3 includes events such as startup, background-to-foreground switching, and event reception, and different events include multiple scenes. For example, at startup there are anonymous page A, file page A, and graphics processing unit (GPU) drawing A, where anonymous page A is a high-bandwidth scene, file page A is a low-bandwidth scene, and GPU drawing A is a high-bandwidth scene. When switching from background to foreground, there are anonymous page B, file page B, and display memory B, where anonymous page B is a high-bandwidth scene, file page B is a low-bandwidth scene, and display memory B is a low-bandwidth scene. It can thus be seen that the bandwidth mapping file can determine whether the scene corresponding to a memory access is specifically a low-bandwidth scene or a high-bandwidth scene.
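The bandwidth mapping file in the example above can be pictured as a small lookup table. This is a toy representation only — the actual file format is not specified in the text, and the event keys are our own names:

```python
# Toy bandwidth mapping file: events map scenes to "high" or "low" bandwidth.
BANDWIDTH_MAP = {
    "startup": {
        "anonymous page A": "high",
        "file page A": "low",
        "GPU drawing A": "high",
    },
    "background_to_foreground": {
        "anonymous page B": "high",
        "file page B": "low",
        "display memory B": "low",
    },
}

def scene_bandwidth(event: str, scene: str) -> str:
    """Look up whether a scene is a high- or low-bandwidth scene."""
    return BANDWIDTH_MAP[event][scene]
```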
Specifically, the scene and behavior management module shown in FIG. 1 and FIG. 3 can execute the method corresponding to the obtaining module 901 shown in FIG. 9, specifically obtaining the service request of the target service through the application behavior management module and the user behavior management module. Since the scene slice management module obtains the function call relationship through the application function call recording module, the scene slice management module and the application function call recording module shown in FIG. 1 and FIG. 3 jointly execute the method corresponding to the determining module 902 shown in FIG. 9, which can specifically determine the scene corresponding to a memory access according to the function call relationship corresponding to that memory access. Next, the first memory allocation area and the second memory allocation area in the memory management subsystem shown in FIG. 1 and FIG. 3 can execute the methods corresponding to the allocation module 903 and the migration module 905 shown in FIG. 9, specifically allocating the first memory or the second memory for a memory access and migrating the data corresponding to a memory access between the first memory and the second memory. Finally, the software bandwidth detection module shown in FIG. 1 and FIG. 3 can execute the method corresponding to the detection module 904 shown in FIG. 9, specifically detecting the real bandwidth corresponding to a memory access.
Based on this, to further understand the memory management method disclosed in the embodiments of this application, refer to FIG. 4, a schematic diagram of an embodiment of the memory management method in an embodiment of the present application. The memory management method includes the following steps. S101: The memory management apparatus obtains a service request of a target service, the service request being used to indicate one memory access in the target service. The target service includes an application-layer service.
Specifically, in this embodiment, the memory management apparatus may obtain the service request of the target service in two implementations, each introduced below. In one implementation, the memory management apparatus applies for the memory required by a memory access in the target service through a dynamic memory allocation (malloc) interface; that is, the memory management apparatus obtains the service request of the target service. In another implementation, the memory management apparatus directly obtains the service request of the target service according to a memory allocation demand from a kernel driver, with the driver directly handling the demand of a memory access (for example, a GPU driver, which handles the application for drawing buffers). This embodiment does not limit the manner of obtaining the service request.
S102: After determining the memory access through step S101, the memory management apparatus further determines the target memory identifier corresponding to the memory access, where the target memory identifier may be a high-bandwidth memory identifier or a low-bandwidth memory identifier. To facilitate understanding, throughout the embodiments of this application the high-bandwidth identifier is "1" and the low-bandwidth identifier is "0"; that is, a target memory identifier of "1" is the high-bandwidth identifier and a target memory identifier of "0" is the low-bandwidth identifier. This should not be construed as a limitation on the embodiments of this application.
Specifically, in this embodiment, the memory management apparatus may determine the target memory identifier corresponding to the memory access in two implementations, each introduced below.
In one implementation, the memory management apparatus applies for the memory required by a memory access in the target service through the malloc interface, i.e., obtains the service request of the target service; the memory management apparatus then modifies the memory requirement of the virtual memory area (VMA) through a function interface according to the bandwidth mapping file introduced in the foregoing embodiments. When a page fault is triggered, the bandwidth requirement of the memory access is a high-bandwidth requirement (corresponding to the second memory); in this case the memory management apparatus converts the virtual memory (VM) flag into a generic framing procedure (GFP) high-bandwidth (HBW) flag, i.e., the GFP_HBW flag obtained by the memory management apparatus is "1", which is the high-bandwidth memory identifier in this embodiment. It can thereby be identified that the bandwidth requirement of the memory access is a high-bandwidth requirement, and the memory management apparatus can determine that the target memory identifier corresponding to the memory access is the high-bandwidth memory identifier. Conversely, if no page fault is triggered, the bandwidth requirement of the memory access is a low-bandwidth requirement (corresponding to the first memory); in this case the memory management apparatus does not perform the conversion step, so the memory access has no GFP_HBW flag and the target memory identifier corresponding to the memory access is "0". It can thereby be identified that the bandwidth requirement of the memory access is a low-bandwidth requirement, and the memory management apparatus can determine that the target memory identifier corresponding to the memory access is the low-bandwidth memory identifier.
In another implementation, the memory management apparatus directly obtains the service request of the target service according to the memory allocation demand from the kernel driver, with the driver directly handling the demand of a memory access, and each different demand corresponds to a different kind. If it is the kind corresponding to a high-bandwidth requirement, it includes the GFP_HBW flag, and the GFP_HBW flag is "1"; it can thereby be identified that the bandwidth requirement of the memory access is a high-bandwidth requirement, and the memory management apparatus can determine that the target memory identifier corresponding to the memory access is the high-bandwidth memory identifier. If it is the kind corresponding to a low-bandwidth requirement, it does not include the GFP_HBW flag, i.e., the target memory identifier corresponding to the memory access is "0"; it can thereby be identified that the bandwidth requirement of the memory access is a low-bandwidth requirement, and the memory management apparatus can determine that the target memory identifier corresponding to the memory access is the low-bandwidth memory identifier.
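The two ways of deriving the target memory identifier in S102 can be sketched as follows. The helper names and the string used for the demand kind are hypothetical; only the "1"/"0" outcomes and the page-fault/demand-kind conditions come from the text:

```python
def identifier_from_malloc(page_fault_triggered: bool) -> int:
    # malloc path: a triggered page fault marks a high-bandwidth need,
    # so the VM flag is converted into a GFP_HBW flag of "1"
    return 1 if page_fault_triggered else 0

def identifier_from_driver(demand_kind: str) -> int:
    # kernel-driver path: only the kind corresponding to a high-bandwidth
    # demand carries the GFP_HBW flag
    return 1 if demand_kind == "high_bandwidth" else 0
```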
S103: The memory management apparatus determines the scene corresponding to the memory access according to the function call relationship corresponding to the memory access.
For ease of understanding, a target service including multiple scenes such as startup, foreground-to-background switching, and background-to-foreground switching is taken as an example. If the target service is starting and the memory access touches anonymous page A, the memory management apparatus can determine, according to the function call relationship corresponding to the memory access indicated by the service request, that the scene corresponding to the memory access is anonymous page A. Likewise, if the target service is switching from background to foreground and the memory access touches file page B, the memory management apparatus can determine, according to the function call relationship corresponding to the memory access indicated by the service request, that the scene corresponding to the memory access is file page B; specifically, based on the foregoing embodiments, file page B is a low-bandwidth scene.
Further, the memory management apparatus records the execution of function calls while the application runs, forming the function call relationship. When the application applies for a memory access through a function, the memory management apparatus captures the current function call sequence and matches it against the entire run-time function sequence, thereby determining the current scene. For example, take a game whose function execution sequence, from startup, to entering the game interface, to finally switching the game to the background, is "A→B→C→E→D→G→J→K→X→A→C→D→B→C", where the game startup phase is a low-bandwidth scene, entering the game is a high-bandwidth scene, and switching the game to the background is a low-bandwidth scene. When the game applies for a memory access, if the sequence captured by the memory management apparatus is "J→K→X", i.e., it matches the middle segment of the function execution sequence, the memory management apparatus determines that the scene corresponding to the memory access is entering the game and that this scene is a high-bandwidth scene. If the captured sequence is "D→B→C", i.e., it matches the tail segment of the function execution sequence, the memory management apparatus determines that the scene corresponding to the memory access is switching the game to the background and that this scene is a low-bandwidth scene.
It should be understood that the foregoing examples are only for understanding this solution; the specific scene needs to be flexibly determined according to the memory access and the specific function call relationship.
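The sequence-matching idea in the game example can be sketched as follows. This is illustrative only: the segment boundaries are one plausible reading of the example, and all names are our own:

```python
RUN_SEQUENCE = "ABCEDGJKXACDBC"  # recorded run-time function sequence

SCENE_SEGMENTS = [  # (start, end-exclusive, scene, bandwidth)
    (0, 5, "game_startup", "low"),         # A B C E D
    (5, 9, "entering_game", "high"),       # G J K X
    (9, 14, "game_to_background", "low"),  # A C D B C
]

def match_scene(captured: str):
    """Match a captured call sequence against the recorded sequence and
    return (scene, bandwidth), or None when there is no match."""
    pos = RUN_SEQUENCE.find(captured)
    if pos < 0:
        return None
    for start, end, scene, bandwidth in SCENE_SEGMENTS:
        if start <= pos < end:
            return scene, bandwidth
    return None
```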
It can be understood that in this embodiment there is no necessary order between step S102 and step S103: step S102 may be executed first, step S103 may be executed first, or step S102 and step S103 may be executed simultaneously, which is not limited here.
Further, after the target memory identifier is determined through step S102 and the scene corresponding to the memory access is determined through step S103, step S104 is executed when the determined target memory identifier is a low-bandwidth memory identifier or the scene corresponding to the memory access is a low-bandwidth scene, and step S105 is executed when the determined target memory identifier is a high-bandwidth memory identifier and the scene corresponding to the memory access is a high-bandwidth scene.
S104: When the target memory identifier is a low-bandwidth memory identifier or the scene corresponding to the memory access is a low-bandwidth scene, the first memory is allocated for the memory access. The first maximum bandwidth of the first memory is less than the second maximum bandwidth of the second memory. For example, if the bandwidth range corresponding to the first memory is 0 to 16 MB, i.e., the first maximum bandwidth is 16 MB, then the bandwidth range corresponding to the second memory may be 15 MB to 64 MB, i.e., the second maximum bandwidth is 64 MB. It should be understood that the foregoing example is only for understanding this solution; in practical applications, the first maximum bandwidth of the first memory may also be less than or equal to the minimum bandwidth of the second memory. The specific bandwidth ranges of the first memory and the second memory are not limited here, as long as the first maximum bandwidth of the first memory is less than the second maximum bandwidth of the second memory.
Specifically, when the target memory identifier is a low-bandwidth memory identifier and the scene corresponding to the memory access is a low-bandwidth scene, the memory management apparatus allocates the first memory for the memory access. Illustratively, when the memory management apparatus determines that the target memory identifier corresponding to the memory access is the low-bandwidth identifier and that the scene corresponding to the memory access is file page A, then, based on the foregoing embodiments, file page A is a low-bandwidth scene, i.e., the bandwidth requirement corresponding to file page A is the bandwidth requirement corresponding to the first memory (low bandwidth); in this case the memory management apparatus allocates the first memory for the memory access.
Next, when the target memory identifier is a low-bandwidth memory identifier but the scene corresponding to the memory access is a high-bandwidth scene, the memory management apparatus still allocates the first memory for the memory access. Illustratively, when the memory management apparatus determines that the target memory identifier corresponding to the memory access is the low-bandwidth identifier but that the scene corresponding to the memory access is anonymous page A, then, based on the foregoing embodiments, anonymous page A is a high-bandwidth scene, i.e., the bandwidth requirement corresponding to anonymous page A is the bandwidth requirement corresponding to the second memory (high bandwidth); however, since the target memory identifier is the low-bandwidth memory identifier, the memory management apparatus still allocates the first memory for the memory access.
Again, when the target memory identifier is a high-bandwidth memory identifier but the scene corresponding to the memory access is a low-bandwidth scene, the memory management apparatus allocates the first memory for the memory access. Illustratively, when the memory management apparatus determines that the target memory identifier corresponding to the memory access is the high-bandwidth identifier but that the scene corresponding to the memory access is file page B, then, based on the foregoing embodiments, file page B is a low-bandwidth scene, i.e., the bandwidth requirement corresponding to file page B is the bandwidth requirement corresponding to the first memory (low bandwidth); considering energy consumption requirements, the memory management apparatus allocates the first memory for the memory access.
Further, as can be seen from the foregoing embodiments, the address range of the first memory is "0x0" to "0xFEFFFFFFFF"; therefore, after the memory management apparatus allocates the first memory for the memory access, the memory address of the memory access may be randomly determined within an allocatable address region in "0x0" to "0xFEFFFFFFFF", which is not limited here.
It should be understood that the foregoing examples are all for understanding this solution and should not be construed as limitations on this embodiment.
S105: When the target memory identifier is a high-bandwidth memory identifier and the scene corresponding to the memory access is a high-bandwidth scene, the second memory is allocated for the memory access, where the second maximum bandwidth of the second memory is greater than the first maximum bandwidth of the first memory. Illustratively, when the memory management apparatus determines that the target memory identifier corresponding to the memory access is the high-bandwidth identifier and that the scene corresponding to the memory access is GPU drawing A, then, based on the foregoing embodiments, GPU drawing A is a high-bandwidth scene, i.e., the bandwidth requirement corresponding to GPU drawing A is the bandwidth requirement corresponding to the second memory (high bandwidth); in this case the memory management apparatus allocates the second memory for the memory access.
Further, the memory management apparatus may also judge whether the second memory has space to be allocated. If there is no space to be allocated, the memory management apparatus allocates the first memory for the memory access; if there is space to be allocated, the memory management apparatus allocates the second memory for the memory access.
Further, as can be seen from the foregoing embodiments, the address range of the second memory is "0xFF00000000" to "0xFFFFFFFFFF"; therefore, after the memory management apparatus allocates the second memory for the memory access, the memory address of the memory access may be randomly determined within an allocatable address region in "0xFF00000000" to "0xFFFFFFFFFF", which is not limited here.
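The capacity fallback in S105 (use the second memory only while it still has free space) can be sketched as a hypothetical helper; the text does not specify sizes or units, so those below are illustrative:

```python
def allocate_with_fallback(request_size: int, second_mem_free: int) -> str:
    """Prefer the second memory; fall back to the first memory when the
    second memory has no space left to allocate."""
    if second_mem_free >= request_size:
        return "second_memory"
    return "first_memory"
```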
The foregoing describes the specific memory allocation steps of the memory management method. In practical applications, a kernel thread is created at kernel initialization; the thread continuously detects the real bandwidth of the memory access, and when the real bandwidth does not match the bandwidth corresponding to the allocated memory, the data of the memory access needs to be migrated to guarantee quality of service. Since the memory management apparatus may allocate either the first memory or the second memory for the memory access, the two cases are introduced separately below.
1. Allocating the first memory for the memory access
Referring to FIG. 5, FIG. 5 is a schematic diagram of another embodiment of the memory management method in an embodiment of the present application. The memory management method includes the following steps. S201: Obtain a service request of a target service. In this embodiment, the manner in which the memory management apparatus obtains the service request of the target service is similar to step S101 and is not repeated here.
S202: Determine the target memory identifier corresponding to the memory access. In this embodiment, the manner in which the memory management apparatus determines the target memory identifier corresponding to the memory access is similar to step S102 and is not repeated here.
S203: Determine the scene corresponding to the memory access according to the function call relationship corresponding to the memory access. In this embodiment, the manner in which the memory management apparatus does so is similar to step S103 and is not repeated here.
It can be understood that in this embodiment there is no necessary order between step S202 and step S203: step S202 may be executed first, step S203 may be executed first, or both may be executed simultaneously, which is not limited here.
S204: When the target memory identifier is a low-bandwidth memory identifier or the scene corresponding to the memory access is a low-bandwidth scene, the first memory is allocated for the memory access. In this embodiment, the manner in which the memory management apparatus allocates the first memory in this case is similar to step S104 and is not repeated here.
S205: Detect the first real bandwidth of the memory access. In this embodiment, after allocating the first memory for the memory access through step S204, the memory management apparatus needs to detect the first real bandwidth of the memory access. The first real bandwidth is specifically detected by time-sliced statistics; for example, with 10 ms as one statistical unit, the memory management apparatus detects the real bandwidth of the memory access within different memory segments every 10 ms and then takes the average of the real bandwidths within the memory segments as the first real bandwidth. Since the memory access has been allocated to the first memory, when the first real bandwidth is not within the bandwidth range corresponding to the first memory, or is already within the bandwidth range corresponding to the second memory, the bandwidth provided by the first memory can no longer, or soon will not, meet the bandwidth requirement of the memory access; step S206 is then executed.
Illustratively, if the memory segments of the memory access include memory segment A, memory segment B, and memory segment C, and the real bandwidth within 10 ms of memory segment A is 20 MB, of memory segment B is 24 MB, and of memory segment C is 22 MB, then the first real bandwidth is 22 MB. It should be understood that the foregoing example is only for understanding this solution and should not be construed as a limitation on this embodiment.
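The time-sliced averaging in S205 can be reproduced directly; the per-segment samples below mirror the example above:

```python
def real_bandwidth(segment_samples_mb):
    """Average the per-segment real bandwidths sampled in one 10 ms
    statistical unit to obtain the real bandwidth of the access."""
    return sum(segment_samples_mb) / len(segment_samples_mb)
```

For the example segments A, B, and C sampled at 20, 24, and 22 MB, the result is 22 MB, as in the text.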
S206. When the first real bandwidth is greater than a first threshold, migrate data corresponding to the memory access from the first memory to the second memory, where the first threshold is less than or equal to the first maximum bandwidth. In this embodiment, because the first threshold is less than or equal to the first maximum bandwidth, when the first real bandwidth is greater than the first threshold, the bandwidth provided by the first memory can no longer, or will soon no longer, meet the bandwidth requirement of the memory access. Therefore, to guarantee quality of service for the memory access, the data of the memory access needs to be migrated, and the memory management apparatus migrates the data corresponding to the memory access from the first memory to the second memory.

Specifically, in step S204, when the target memory identifier is the low-bandwidth memory identifier but the scenario corresponding to the memory access is a high-bandwidth scenario, the first memory is allocated for the memory access. Because the scenario corresponding to the memory access requires high bandwidth but the access is allocated to the first memory, the case in which the first real bandwidth is greater than the first threshold may occur.

Specifically, when the memory management apparatus determines that the first real bandwidth is greater than the first threshold, that is, determines that the memory access needs to be reallocated to the second memory, the memory management apparatus first copies the data corresponding to the memory access, and then migrates the copied data to the second memory. If the scenario corresponding to the memory access in the bandwidth mapping file is a low-bandwidth scenario, the scenario corresponding to the memory access needs to be changed to a high-bandwidth scenario.

For example, when the first maximum bandwidth is 16 MB, the first threshold may be 15 MB, 15.5 MB, 15.8 MB, 16 MB, or the like. Further, when the first threshold is 15 MB and the first real bandwidth is 15.5 MB, the memory management apparatus determines that the first real bandwidth is greater than the first threshold, and migrates the data corresponding to the memory access from the first memory to the second memory. It should be understood that the foregoing examples are merely intended to aid understanding of this solution and should not be construed as limiting this embodiment.
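The migration condition of step S206 can be sketched as a single comparison guarded by the threshold constraint; the function name and error handling are illustrative assumptions.

```python
def needs_migration_to_second(first_real_bw, first_threshold, first_max_bw):
    """Step S206: migrate from the first to the second memory when the
    measured bandwidth exceeds the first threshold. The threshold must
    not exceed the first memory's maximum bandwidth."""
    if first_threshold > first_max_bw:
        raise ValueError("first threshold must be <= first maximum bandwidth")
    return first_real_bw > first_threshold

# The example from the text: max 16 MB, threshold 15 MB, real 15.5 MB.
print(needs_migration_to_second(15.5, 15, 16))  # → True
```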
2. Allocating the second memory for the memory access

Refer to FIG. 6. FIG. 6 is a schematic diagram of another embodiment of the memory management method according to the embodiments of this application. The memory management method includes the following steps. S301. Obtain a service request of a target service. In this embodiment, the manner in which the memory management apparatus first obtains the service request of the target service is similar to step S101, and details are not described herein again.

S302. Determine a target memory identifier corresponding to the memory access. In this embodiment, the manner in which the memory management apparatus determines the target memory identifier corresponding to the memory access is similar to step S102, and details are not described herein again.

S303. Determine, based on a function invocation relationship corresponding to the memory access, a scenario corresponding to the memory access. In this embodiment, the manner in which the memory management apparatus determines, based on the function invocation relationship corresponding to the memory access, the scenario corresponding to the memory access is similar to step S103, and details are not described herein again.

It may be understood that, in this embodiment, there is no necessary order between step S302 and step S303: step S302 may be performed first, step S303 may be performed first, or step S302 and step S303 may be performed simultaneously. This is not specifically limited herein.

S304. When the target memory identifier is the high-bandwidth memory identifier and the scenario corresponding to the memory access is a high-bandwidth scenario, allocate the second memory for the memory access. In this embodiment, when the target memory identifier is the high-bandwidth memory identifier and the scenario corresponding to the memory access is a high-bandwidth scenario, the manner in which the memory management apparatus allocates the second memory for the memory access is similar to step S105, and details are not described herein again.

S305. Detect a second real bandwidth of the memory access. In this embodiment, after allocating the second memory for the memory access in step S304, the memory management apparatus needs to detect the second real bandwidth of the memory access. Specifically, the second real bandwidth is detected through time-sliced statistics. For example, with 10 ms as one statistical unit, the memory management apparatus detects, every 10 ms, the real bandwidth of the memory access in each memory segment, and then takes the average of the real bandwidths of the memory segments as the second real bandwidth. Because the memory access is allocated to the second memory, when the second real bandwidth is not within the bandwidth range corresponding to the second memory, or is already within the bandwidth range corresponding to the first memory, step S306 is performed to save the bandwidth resources of the second memory.

For example, if the memory segments of the memory access include memory segment A, memory segment B, and memory segment C, and within 10 ms the real bandwidth of memory segment A is 10 MB, that of memory segment B is 16 MB, and that of memory segment C is 10 MB, the second real bandwidth is 12 MB. It should be understood that the foregoing example is merely intended to aid understanding of this solution and should not be construed as limiting this embodiment.

S306. When the second real bandwidth is less than a second threshold, migrate the data corresponding to the memory access from the second memory to the first memory, where the second threshold is less than or equal to the first maximum bandwidth. In this embodiment, because the second threshold is less than or equal to the first maximum bandwidth, when the second real bandwidth is less than the second threshold, the bandwidth provided by the first memory can meet the bandwidth requirement of the memory access. Therefore, to save the bandwidth resources of the second memory, the data of the memory access needs to be migrated, and the memory management apparatus migrates the data corresponding to the memory access from the second memory to the first memory.

Specifically, when the memory management apparatus determines that the second real bandwidth is less than the second threshold, that is, determines that the memory access needs to be reallocated to the first memory, the memory management apparatus first copies the data corresponding to the memory access, and then migrates the copied data to the first memory. If the scenario corresponding to the memory access in the bandwidth mapping file is a high-bandwidth scenario, the scenario corresponding to the memory access needs to be changed to a low-bandwidth scenario.

For example, when the first maximum bandwidth is 16 MB, the second threshold may be 15 MB, 16 MB, or the like. Further, when the second threshold is 15 MB and the second real bandwidth is 12 MB, the memory management apparatus determines that the second real bandwidth is less than the second threshold, and migrates the data corresponding to the memory access from the second memory to the first memory. It should be understood that the foregoing examples are merely intended to aid understanding of this solution and should not be construed as limiting this embodiment.
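Symmetrically, the migration condition of step S306 can be sketched as follows; as before, the function name and error handling are illustrative assumptions.

```python
def needs_migration_to_first(second_real_bw, second_threshold, first_max_bw):
    """Step S306: migrate back from the second to the first memory when
    the measured bandwidth falls below the second threshold, which is
    also bounded by the first memory's maximum bandwidth."""
    if second_threshold > first_max_bw:
        raise ValueError("second threshold must be <= first maximum bandwidth")
    return second_real_bw < second_threshold

# The example from the text: max 16 MB, threshold 15 MB, real 12 MB.
print(needs_migration_to_first(12, 15, 16))  # → True
```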
Further, a kernel thread is created during kernel initialization, and the thread also continuously listens for scenario changes. If a scenario change occurs, that is, when the next memory access of the target service starts, memory needs to be reallocated for the next memory access. Specifically, if no data migration was performed during the memory access, the memory management manner for the next memory access is similar to that in FIG. 4 to FIG. 6, and details are not described herein again.

Secondly, if data migration was performed during the memory access, the memory allocated for the next memory access follows the memory allocation that resulted from the data migration of the memory access. For example, in the embodiment shown in FIG. 5, because the memory management apparatus migrates the data corresponding to the memory access from the first memory to the second memory when the first real bandwidth is greater than the first threshold, the memory allocated for the next memory access is necessarily the second memory. Likewise, in the embodiment shown in FIG. 6, because the memory management apparatus migrates the data corresponding to the memory access from the second memory to the first memory when the second real bandwidth is less than the second threshold, the memory allocated for the next memory access is necessarily the first memory. The memory management method for the next memory access after data migration is described in detail below.
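The "migration prevails" rule for the next access can be sketched as follows; the function name, the `None` sentinel for "no migration occurred", and the 'high'/'low' labels are illustrative assumptions.

```python
def allocate_next(migrated_to, identifier, scenario):
    """Allocation for the next memory access: if the previous access's
    data was migrated, the migrated location prevails; otherwise fall
    back to the ordinary identifier/scenario decision (S104/S105)."""
    if migrated_to is not None:
        # Skip the identifier/scenario judgment to avoid repeating
        # the earlier misjudgment: reuse the migrated memory directly.
        return migrated_to
    if identifier == 'high' and scenario == 'high':
        return 'second'
    return 'first'

# After a migration to the second memory, the next access gets the
# second memory regardless of identifier and scenario.
print(allocate_next('second', 'low', 'low'))  # → second
```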
1. Migrating the data corresponding to the memory access from the first memory to the second memory

For ease of understanding, refer to FIG. 7. FIG. 7 is a schematic diagram of another embodiment of the memory management method according to the embodiments of this application. The memory management method includes the following steps. S401. Obtain a service request of a target service. In this embodiment, the manner in which the memory management apparatus first obtains the service request of the target service is similar to step S101, and details are not described herein again.

S402. Determine a target memory identifier corresponding to the memory access. In this embodiment, the manner in which the memory management apparatus determines the target memory identifier corresponding to the memory access is similar to step S102, and details are not described herein again.

S403. Determine, based on a function invocation relationship corresponding to the memory access, a scenario corresponding to the memory access. In this embodiment, the manner in which the memory management apparatus determines, based on the function invocation relationship corresponding to the memory access, the scenario corresponding to the memory access is similar to step S103, and details are not described herein again.

It may be understood that, in this embodiment, there is no necessary order between step S402 and step S403: step S402 may be performed first, step S403 may be performed first, or step S402 and step S403 may be performed simultaneously. This is not specifically limited herein.

S404. When the target memory identifier is the low-bandwidth memory identifier or the scenario corresponding to the memory access is a low-bandwidth scenario, allocate the first memory for the memory access. In this embodiment, when the target memory identifier is the low-bandwidth memory identifier or the scenario corresponding to the memory access is a low-bandwidth scenario, the manner in which the memory management apparatus allocates the first memory for the memory access is similar to step S104, and details are not described herein again.

S405. Detect a first real bandwidth of the memory access. In this embodiment, the manner in which the memory management apparatus detects the first real bandwidth of the memory access is similar to step S205, and details are not described herein again.

S406. When the first real bandwidth is greater than the first threshold, migrate the data corresponding to the memory access from the first memory to the second memory. In this embodiment, when the first real bandwidth is greater than the first threshold, the manner in which the memory management apparatus migrates the data corresponding to the memory access from the first memory to the second memory is similar to step S206, and details are not described herein again.

S407. Obtain a next service request of the target service, where the next service request is used to indicate a next memory access in the target service. In this embodiment, the memory management apparatus first obtains the next service request of the target service, and the next service request is used to indicate the next memory access in the target service. The target service includes an application-layer service. The next service request is similar to the service request, and the manner of obtaining the next service request of the target service is also similar to step S101, and details are not described herein again.

S408. Determine a next memory identifier corresponding to the next memory access. In this embodiment, after determining the next memory access through step S407, the memory management apparatus further determines the next memory identifier corresponding to the next memory access, and the next memory identifier may be the high-bandwidth memory identifier or the low-bandwidth memory identifier. The next memory identifier is similar to the target memory identifier, and the manner of determining the next memory identifier corresponding to the next memory access is similar to step S102, and details are not described herein again.

S409. When the next memory identifier is the low-bandwidth memory identifier or a scenario corresponding to the next memory access is a low-bandwidth scenario, allocate the second memory for the next memory access. In this embodiment, the memory management apparatus may also determine the scenario corresponding to the next memory access in a manner similar to step S103. In this embodiment, the memory allocation that resulted from the data migration of the memory access prevails. Therefore, regardless of whether the next memory identifier is the high-bandwidth memory identifier or the low-bandwidth memory identifier, and regardless of whether the scenario corresponding to the next memory access is a low-bandwidth scenario or a high-bandwidth scenario, because the data corresponding to the memory access has been migrated to the second memory, the memory management apparatus allocates the second memory for the next memory access. It may be considered that this embodiment skips the judgment based on the memory identifier and the scenario: for the same memory identifier and scenario as the previous access, the migrated second memory is used directly to avoid another misjudgment.

Specifically, when the target memory identifier is the low-bandwidth memory identifier and the scenario corresponding to the memory access is a low-bandwidth scenario, because the data of the memory access has been migrated to the second memory, the memory management apparatus allocates the second memory for the next memory access. Secondly, when the target memory identifier is the low-bandwidth memory identifier and the scenario corresponding to the memory access is a high-bandwidth scenario, because the data of the memory access has been migrated to the second memory, the memory management apparatus allocates the second memory for the next memory access. Thirdly, when the target memory identifier is the high-bandwidth memory identifier and the scenario corresponding to the memory access is a low-bandwidth scenario, because the data of the memory access has been migrated to the second memory, the memory management apparatus allocates the second memory for the next memory access.

Secondly, when the next memory identifier is the high-bandwidth memory identifier and the scenario corresponding to the next memory access is a high-bandwidth scenario, the memory management apparatus allocates the second memory for the next memory access in a manner similar to step S105. Details are not described herein again.

Further, if the second real bandwidth detected for the next memory access is less than the second threshold, the data corresponding to the next memory access may be migrated from the second memory to the first memory in a manner similar to step S306. Details are not described herein again.
2. Migrating the data corresponding to the memory access from the second memory to the first memory

For ease of understanding, refer to FIG. 8. FIG. 8 is a schematic diagram of another embodiment of the memory management method according to the embodiments of this application. The memory management method includes the following steps. S501. Obtain a service request of a target service. In this embodiment, the manner in which the memory management apparatus first obtains the service request of the target service is similar to step S101, and details are not described herein again.

S502. Determine a target memory identifier corresponding to the memory access. In this embodiment, the manner in which the memory management apparatus determines the target memory identifier corresponding to the memory access is similar to step S102, and details are not described herein again.

S503. Determine, based on a function invocation relationship corresponding to the memory access, a scenario corresponding to the memory access. In this embodiment, the manner in which the memory management apparatus determines, based on the function invocation relationship corresponding to the memory access, the scenario corresponding to the memory access is similar to step S103, and details are not described herein again.

It may be understood that, in this embodiment, there is no necessary order between step S502 and step S503: step S502 may be performed first, step S503 may be performed first, or step S502 and step S503 may be performed simultaneously. This is not specifically limited herein.

S504. When the target memory identifier is the high-bandwidth memory identifier and the scenario corresponding to the memory access is a high-bandwidth scenario, allocate the second memory for the memory access. In this embodiment, when the target memory identifier is the high-bandwidth memory identifier and the scenario corresponding to the memory access is a high-bandwidth scenario, the manner in which the memory management apparatus allocates the second memory for the memory access is similar to step S105, and details are not described herein again.

S505. Detect a second real bandwidth of the memory access. In this embodiment, the manner in which the memory management apparatus detects the second real bandwidth of the memory access is similar to step S305, and details are not described herein again.

S506. When the second real bandwidth is less than the second threshold, migrate the data corresponding to the memory access from the second memory to the first memory. In this embodiment, when the second real bandwidth is less than the second threshold, the manner in which the memory management apparatus migrates the data corresponding to the memory access from the second memory to the first memory is similar to step S306, and details are not described herein again.

S507. Obtain a next service request of the target service, where the next service request is used to indicate a next memory access in the target service. In this embodiment, the memory management apparatus first obtains the next service request of the target service, and the next service request is used to indicate the next memory access in the target service. The target service includes an application-layer service. The next service request is similar to the service request, and the manner of obtaining the next service request of the target service is also similar to step S101, and details are not described herein again.

S508. Determine a next memory identifier corresponding to the next memory access. In this embodiment, after determining the next memory access through step S507, the memory management apparatus further determines the next memory identifier corresponding to the next memory access, and the next memory identifier may be the high-bandwidth memory identifier or the low-bandwidth memory identifier. The next memory identifier is similar to the target memory identifier, and the manner of determining the next memory identifier corresponding to the next memory access is similar to step S102, and details are not described herein again.

S509. When the next memory identifier is the high-bandwidth memory identifier and a scenario corresponding to the next memory access is a high-bandwidth scenario, allocate the first memory for the next memory access. In this embodiment, the memory management apparatus may also determine the scenario corresponding to the next memory access in a manner similar to step S103. In this embodiment, the memory allocation that resulted from the data migration of the memory access prevails. Therefore, regardless of whether the next memory identifier is the high-bandwidth memory identifier or the low-bandwidth memory identifier, and regardless of whether the scenario corresponding to the next memory access is a low-bandwidth scenario or a high-bandwidth scenario, because the data corresponding to the memory access has been migrated to the first memory, the memory management apparatus allocates the first memory for the next memory access. It may be considered that this embodiment skips the judgment based on the memory identifier and the scenario: for the same memory identifier and scenario as the previous access, the migrated first memory is used directly to avoid another misjudgment.

Specifically, when the next memory identifier is the high-bandwidth memory identifier and the scenario corresponding to the next memory access is a high-bandwidth scenario, the memory management apparatus allocates the first memory for the next memory access.

Secondly, when the target memory identifier is the low-bandwidth memory identifier and the scenario corresponding to the memory access is a low-bandwidth scenario, the memory management apparatus allocates the first memory for the next memory access. Likewise, when the target memory identifier is the low-bandwidth memory identifier and the scenario corresponding to the memory access is a high-bandwidth scenario, the memory management apparatus allocates the first memory for the next memory access. Thirdly, when the target memory identifier is the high-bandwidth memory identifier and the scenario corresponding to the memory access is a low-bandwidth scenario, the memory management apparatus allocates the first memory for the next memory access.

Further, if the first real bandwidth detected for the next memory access is greater than the first threshold, the data corresponding to the next memory access may be migrated from the first memory to the second memory in a manner similar to step S206. Details are not described herein again.
The foregoing mainly describes the solutions provided in the embodiments of this application from a method perspective. It may be understood that, to implement the foregoing functions, the memory management apparatus includes corresponding hardware structures and/or software modules for performing the functions. A person skilled in the art should be readily aware that, with reference to the modules and algorithm steps of the examples described in the embodiments disclosed herein, this application can be implemented in a form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on particular applications and design constraints of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but such an implementation should not be considered beyond the scope of this application.

In the embodiments of this application, the memory management apparatus may be divided into functional modules based on the foregoing method examples. For example, each functional module may be obtained through division corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in a form of hardware, or may be implemented in a form of a software functional module. It should be noted that the division into modules in the embodiments of this application is an example and is merely logical function division; there may be other division manners in actual implementation.

The following describes the memory management apparatus in this application in detail. Refer to FIG. 9. FIG. 9 is a schematic diagram of an embodiment of the memory management apparatus according to the embodiments of this application. As shown in FIG. 9, the memory management apparatus 900 includes an obtaining module 901, a determining module 902, and an allocation module 903.

The obtaining module 901 is configured to obtain a service request of a target service, where the service request is used to indicate a memory access in the target service. The determining module 902 is configured to determine a target memory identifier corresponding to the memory access, where the target memory identifier is a high-bandwidth memory identifier or a low-bandwidth memory identifier. The allocation module 903 is configured to: when the target memory identifier is the low-bandwidth memory identifier or a scenario corresponding to the memory access is a low-bandwidth scenario, allocate a first memory for the memory access. The allocation module 903 is further configured to: when the target memory identifier is the high-bandwidth memory identifier and the scenario corresponding to the memory access is a high-bandwidth scenario, allocate a second memory for the memory access, where a second maximum bandwidth of the second memory is greater than a first maximum bandwidth of the first memory.

Optionally, the determining module 902 is further configured to determine, based on a function invocation relationship corresponding to the memory access, the scenario corresponding to the memory access.

Optionally, the memory management apparatus 900 further includes a detection module 904 and a migration module 905. The detection module 904 is configured to detect a first real bandwidth of the memory access after the allocation module 903 allocates the first memory for the memory access. The migration module 905 is configured to: when the first real bandwidth is greater than a first threshold, migrate the data corresponding to the memory access from the first memory to the second memory, where the first threshold is less than or equal to the first maximum bandwidth.

Optionally, the obtaining module 901 is further configured to obtain a next service request of the target service, where the next service request is used to indicate a next memory access in the target service; the determining module 902 is further configured to determine a next memory identifier corresponding to the next memory access; and the allocation module 903 is further configured to: when the next memory identifier is the low-bandwidth memory identifier or a scenario corresponding to the next memory access is a low-bandwidth scenario, allocate the second memory for the next memory access.

Optionally, the detection module 904 is further configured to detect a second real bandwidth of the memory access after the allocation module allocates the second memory for the memory access; and the migration module 905 is further configured to: when the second real bandwidth is less than a second threshold, migrate the data corresponding to the memory access from the second memory to the first memory, where the second threshold is less than or equal to the first maximum bandwidth.

Optionally, the obtaining module 901 is further configured to obtain a next service request of the target service, where the next service request is used to indicate a next memory access in the target service; the determining module 902 is further configured to determine a next memory identifier corresponding to the next memory access; and the allocation module 903 is further configured to: when the next memory identifier is the high-bandwidth memory identifier and a scenario corresponding to the next memory access is a high-bandwidth scenario, allocate the first memory for the next memory access.

Optionally, the target service includes an application-layer service.

It should also be understood that when the memory management apparatus 900 is a terminal device, the obtaining module 901, the determining module 902, the allocation module 903, the detection module 904, and the migration module 905 in the memory management apparatus 900 may be implemented by at least one processor, for example, may correspond to the processor 3010 in the terminal device 3000 shown in FIG. 10.

It should also be understood that when the memory management apparatus 900 is a chip or a chip system configured in a terminal device, the obtaining module 901, the determining module 902, the allocation module 903, the detection module 904, and the migration module 905 in the memory management apparatus 900 may be implemented by a processor, a microprocessor, an integrated circuit, or the like integrated on the chip or the chip system.

It may be understood from the method described in this embodiment that the method corresponding to the obtaining module 901 shown in FIG. 9 may be performed by the scenario and behavior management module shown in FIG. 1 and FIG. 3; the method corresponding to the determining module 902 shown in FIG. 9 may be jointly performed by the scenario slice management module and the application function invocation recording module shown in FIG. 1 and FIG. 3; further, the methods corresponding to the allocation module 903 and the migration module 905 shown in FIG. 9 may be performed by the first memory allocation region and the second memory allocation region in the memory management subsystem shown in FIG. 1 and FIG. 3; and the method corresponding to the detection module 904 shown in FIG. 9 may be performed by the software bandwidth detection module shown in FIG. 1 and FIG. 3.
FIG. 10 is a schematic structural diagram of a terminal device 3000 according to an embodiment of this application. The terminal device 3000 may be applied to the system shown in FIG. 1. As shown in FIG. 10, the terminal device 3000 includes a processor 3010 and a transceiver 3020. Optionally, the terminal device 3000 further includes a memory 3030. The processor 3010, the transceiver 3020, and the memory 3030 may communicate with each other through an internal connection path to transfer control and/or data signals. The memory 3030 is configured to store a computer program, and the processor 3010 is configured to invoke and run the computer program from the memory 3030 to control the transceiver 3020 to send and receive signals. Optionally, the terminal device 3000 may further include an antenna 3040, configured to send, by using a radio signal, uplink data or uplink control signaling output by the transceiver 3020.

Optionally, the memory 3030 may include a read-only memory and a random access memory, and provide instructions and data to the processor 3010. A part of the memory may further include a nonvolatile random access memory. In specific implementation, the memory 3030 may be an independent component, or may be integrated into the processor 3010. The processor 3010 may be configured to execute the instructions stored in the memory 3030, and when the processor 3010 executes the instructions stored in the memory, the processor 3010 is configured to perform the steps and/or procedures of the foregoing method embodiments corresponding to the memory management apparatus. The processor 3010 may correspond to the obtaining module 901, the determining module 902, the allocation module 903, the detection module 904, and the migration module 905 in FIG. 9.

Optionally, the terminal device 3000 is the memory management apparatus in the foregoing method embodiments, that is, may correspond to the memory management apparatus in the foregoing method embodiments, and may be configured to perform the steps and/or procedures performed by the memory management apparatus in the foregoing method embodiments.

It should be understood that the terminal device 3000 shown in FIG. 10 can implement the processes related to the memory management apparatus in the method embodiments shown in FIG. 4, FIG. 5, FIG. 6, FIG. 7, and FIG. 8. The operations and/or functions of the modules in the terminal device 3000 are respectively intended to implement the corresponding procedures in the foregoing method embodiments. For details, refer to the descriptions in the foregoing method embodiments. To avoid repetition, detailed descriptions are appropriately omitted herein.

The transceiver 3020 may include a transmitter and a receiver. The transceiver 3020 may further include one or more antennas. The processor 3010, the memory 3030, and the transceiver 3020 may be components integrated on different chips. For example, the processor 3010 and the memory 3030 may be integrated in a baseband chip, and the transceiver 3020 may be integrated in a radio frequency chip. Alternatively, the processor 3010, the memory 3030, and the transceiver 3020 may be components integrated on the same chip. This is not limited in this application.

The processor 3010 may be configured to perform the actions implemented internally by the memory management apparatus described in the foregoing method embodiments. For details, refer to the descriptions in the foregoing method embodiments; details are not described herein again.

Optionally, the terminal device 3000 may further include a power supply 3050, configured to supply power to various components or circuits in the terminal device.

In addition, to make the functions of the terminal device more complete, the terminal device 3000 may further include one or more of an input unit 3060, a display unit 3070, an audio circuit 3080, a camera 3090, a sensor 3100, and the like, and the audio circuit may further include a speaker 3082, a microphone 3084, and the like.
This application further provides a memory management apparatus, including at least one processor, where the at least one processor is configured to execute a computer program stored in a memory, so that the memory management apparatus performs the method performed by the terminal device or the network device in any one of the foregoing method embodiments.

It should be understood that the memory management apparatus may be one or more chips. For example, the memory management apparatus may be a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processing circuit (DSP), a micro controller unit (MCU), a programmable logic device (PLD), or another integrated chip.

An embodiment of this application further provides a memory management apparatus, including a processor and a communication interface. The communication interface is coupled to the processor. The communication interface is configured to input and/or output information. The information includes at least one of instructions and data. The processor is configured to execute a computer program, so that the memory management apparatus performs the method performed by the memory management apparatus in any one of the foregoing method embodiments.

An embodiment of this application further provides a memory management apparatus, including a processor and a memory. The memory is configured to store a computer program, and the processor is configured to invoke and run the computer program from the memory, so that the memory management apparatus performs the method performed by the memory management apparatus in any one of the foregoing method embodiments.

In an implementation process, the steps of the foregoing methods may be completed by an integrated logic circuit of hardware in the processor or by instructions in a form of software. The steps of the methods disclosed with reference to the embodiments of this application may be directly performed by a hardware processor, or performed by a combination of hardware and software modules in the processor. The software module may be located in a mature storage medium in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads information from the memory and completes the steps of the foregoing methods in combination with its hardware. To avoid repetition, details are not described herein again.

It should be noted that the processor in the embodiments of this application may be an integrated circuit chip with a signal processing capability. In an implementation process, the steps of the foregoing method embodiments may be completed by an integrated logic circuit of hardware in the processor or by instructions in a form of software. The processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps, and logical block diagrams disclosed in the embodiments of this application may be implemented or performed. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed with reference to the embodiments of this application may be directly performed by a hardware decoding processor, or performed by a combination of hardware and software modules in the decoding processor. The software module may be located in a mature storage medium in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads information from the memory and completes the steps of the foregoing methods in combination with its hardware.

It may be understood that the memory in the embodiments of this application may be a volatile memory or a nonvolatile memory, or may include both a volatile memory and a nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically EPROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example but not limitation, many forms of RAM are available, for example, a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), and a direct rambus RAM (DR RAM). It should be noted that the memories of the systems and methods described herein are intended to include, but are not limited to, these and any other suitable types of memories.

According to the methods provided in the embodiments of this application, this application further provides a computer program product, including computer program code. When the computer program code runs on a computer, the computer is enabled to perform the method performed by the memory management apparatus in the embodiments shown in FIG. 4 to FIG. 8.

According to the methods provided in the embodiments of this application, this application further provides a computer-readable storage medium, storing program code. When the program code runs on a computer, the computer is enabled to perform the method performed by the memory management apparatus in the embodiments shown in FIG. 4 to FIG. 8.

The memory management apparatuses in the foregoing apparatus embodiments fully correspond to the memory management apparatuses in the method embodiments, and corresponding modules or units perform corresponding steps. For example, the communication unit (transceiver) performs the receiving or sending steps in the method embodiments, and steps other than sending and receiving may be performed by the processing unit (processor). For the functions of specific units, refer to the corresponding method embodiments. There may be one or more processors.
The terms "component", "module", "system", and the like used in this specification indicate computer-related entities, hardware, firmware, combinations of hardware and software, software, or software in execution. For example, a component may be, but is not limited to, a process running on a processor, a processor, an object, an executable file, an execution thread, a program, and/or a computer. As illustrated by the figures, both an application running on a computing device and the computing device itself may be components. One or more components may reside within a process and/or an execution thread, and a component may be located on one computer and/or distributed between two or more computers. In addition, these components may execute from various computer-readable media that store various data structures. The components may communicate by using local and/or remote processes based on, for example, a signal having one or more data packets (for example, data from two components interacting with another component in a local system or a distributed system, and/or interacting with other systems over a network such as the Internet by using signals).

A person of ordinary skill in the art may be aware that, with reference to the units and algorithm steps of the examples described in the embodiments disclosed in this specification, this application can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on particular applications and design constraints of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but such an implementation should not be considered beyond the scope of this application.

It may be clearly understood by a person skilled in the art that, for convenience and brevity of description, for detailed working processes of the foregoing systems, apparatuses, and units, refer to the corresponding processes in the foregoing method embodiments. Details are not described herein again.

In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the described apparatus embodiments are merely examples. For example, the division into units is merely logical function division and may be other division in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces, and indirect couplings or communication connections between apparatuses or units may be implemented in electrical, mechanical, or other forms.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected based on actual requirements to achieve the objectives of the solutions of the embodiments.

In addition, functional units in the embodiments of this application may be integrated into one processing unit, each of the units may exist alone physically, or two or more units may be integrated into one unit.

When the functions are implemented in a form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or a part of the technical solutions may be embodied in a form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The foregoing descriptions are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (16)

  1. A memory management method, comprising:
    obtaining a service request of a target service, wherein the service request is used to indicate a memory access in the target service;
    determining a target memory identifier corresponding to the memory access, wherein the target memory identifier is a high-bandwidth memory identifier or a low-bandwidth memory identifier;
    when the target memory identifier is the low-bandwidth memory identifier or a scenario corresponding to the memory access is a low-bandwidth scenario, allocating a first memory for the memory access; and
    when the target memory identifier is the high-bandwidth memory identifier and the scenario corresponding to the memory access is a high-bandwidth scenario, allocating a second memory for the memory access, wherein a second maximum bandwidth of the second memory is greater than a first maximum bandwidth of the first memory.
  2. The method according to claim 1, wherein the method further comprises:
    determining, based on a function invocation relationship corresponding to the memory access, the scenario corresponding to the memory access.
  3. The method according to claim 1 or 2, wherein after the allocating a first memory for the memory access, the method further comprises:
    detecting a first real bandwidth of the memory access; and
    when the first real bandwidth is greater than a first threshold, migrating data corresponding to the memory access from the first memory to the second memory, wherein the first threshold is less than or equal to the first maximum bandwidth.
  4. The method according to claim 3, wherein the method further comprises:
    obtaining a next service request of the target service, wherein the next service request is used to indicate a next memory access in the target service;
    determining a next memory identifier corresponding to the next memory access; and
    when the next memory identifier is the low-bandwidth memory identifier or a scenario corresponding to the next memory access is a low-bandwidth scenario, allocating the second memory for the next memory access.
  5. The method according to any one of claims 1 to 4, wherein after the allocating a second memory for the memory access, the method further comprises:
    detecting a second real bandwidth of the memory access; and
    when the second real bandwidth is less than a second threshold, migrating data corresponding to the memory access from the second memory to the first memory, wherein the second threshold is less than or equal to the first maximum bandwidth.
  6. The method according to claim 5, wherein the method further comprises:
    obtaining a next service request of the target service, wherein the next service request is used to indicate a next memory access in the target service;
    determining a next memory identifier corresponding to the next memory access; and
    when the next memory identifier is the high-bandwidth memory identifier and a scenario corresponding to the next memory access is a high-bandwidth scenario, allocating the first memory for the next memory access.
  7. The method according to any one of claims 1 to 6, wherein the target service comprises an application-layer service.
  8. A memory management apparatus, comprising:
    an obtaining module, configured to obtain a service request of a target service, wherein the service request is used to indicate a memory access in the target service;
    a determining module, configured to determine a target memory identifier corresponding to the memory access, wherein the target memory identifier is a high-bandwidth memory identifier or a low-bandwidth memory identifier; and
    an allocation module, configured to: when the target memory identifier is the low-bandwidth memory identifier or a scenario corresponding to the memory access is a low-bandwidth scenario, allocate a first memory for the memory access,
    wherein the allocation module is further configured to: when the target memory identifier is the high-bandwidth memory identifier and the scenario corresponding to the memory access is a high-bandwidth scenario, allocate a second memory for the memory access, wherein a second maximum bandwidth of the second memory is greater than a first maximum bandwidth of the first memory.
  9. The memory management apparatus according to claim 8, wherein the determining module is further configured to determine, based on a function invocation relationship corresponding to the memory access, the scenario corresponding to the memory access.
  10. The memory management apparatus according to claim 8 or 9, wherein the memory management apparatus further comprises:
    a detection module, configured to detect a first real bandwidth of the memory access after the allocation module allocates the first memory for the memory access; and
    a migration module, configured to: when the first real bandwidth is greater than a first threshold, migrate data corresponding to the memory access from the first memory to the second memory, wherein the first threshold is less than or equal to the first maximum bandwidth.
  11. The memory management apparatus according to claim 10, wherein the obtaining module is further configured to obtain a next service request of the target service, wherein the next service request is used to indicate a next memory access in the target service;
    the determining module is further configured to determine a next memory identifier corresponding to the next memory access; and
    the allocation module is further configured to: when the next memory identifier is the low-bandwidth memory identifier or a scenario corresponding to the next memory access is a low-bandwidth scenario, allocate the second memory for the next memory access.
  12. The memory management apparatus according to any one of claims 8 to 11, wherein:
    the detection module is further configured to detect a second real bandwidth of the memory access after the allocation module allocates the second memory for the memory access; and
    the migration module is further configured to: when the second real bandwidth is less than a second threshold, migrate data corresponding to the memory access from the second memory to the first memory, wherein the second threshold is less than or equal to the first maximum bandwidth.
  13. The memory management apparatus according to claim 12, wherein the obtaining module is further configured to obtain a next service request of the target service, wherein the next service request is used to indicate a next memory access in the target service;
    the determining module is further configured to determine a next memory identifier corresponding to the next memory access; and
    the allocation module is further configured to: when the next memory identifier is the high-bandwidth memory identifier and a scenario corresponding to the next memory access is a high-bandwidth scenario, allocate the first memory for the next memory access.
  14. The memory management apparatus according to any one of claims 8 to 13, wherein the target service comprises an application-layer service.
  15. A chip, wherein the chip comprises at least one processor, the at least one processor is communicatively connected to at least one memory, and the at least one memory stores instructions, wherein the instructions are executed by the at least one processor to perform the method according to any one of claims 1 to 7.
  16. A computer-readable storage medium, storing instructions, wherein when the instructions are run on a computer, the computer is enabled to perform the method according to any one of claims 1 to 7.
PCT/CN2020/127761 2020-11-10 2020-11-10 Memory management method and related apparatus WO2022099446A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080006418.7A CN115053211A (zh) 2020-11-10 2020-11-10 Memory management method and related apparatus
PCT/CN2020/127761 WO2022099446A1 (zh) 2020-11-10 2020-11-10 Memory management method and related apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/127761 WO2022099446A1 (zh) 2020-11-10 2020-11-10 Memory management method and related apparatus

Publications (1)

Publication Number Publication Date
WO2022099446A1 true WO2022099446A1 (zh) 2022-05-19

Family

ID=81600714

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/127761 WO2022099446A1 (zh) Memory management method and related apparatus

Country Status (2)

Country Link
CN (1) CN115053211A (zh)
WO (1) WO2022099446A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103713954A (zh) * 2013-12-25 2014-04-09 华为技术有限公司 一种处理器模块及电子设备
US20140189189A1 (en) * 2012-12-28 2014-07-03 Asmedia Technology Inc. Computer arbitration system, bandwidth, allocation apparatus, and method thereof
CN104750557A (zh) * 2013-12-27 2015-07-01 华为技术有限公司 一种内存管理方法和内存管理装置
CN108780428A (zh) * 2016-03-14 2018-11-09 英特尔公司 不对称存储器管理
CN109388486A (zh) * 2018-10-09 2019-02-26 北京航空航天大学 一种针对异构内存与多类型应用混合部署场景的数据放置与迁移方法


Also Published As

Publication number Publication date
CN115053211A (zh) 2022-09-13


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20961005

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20961005

Country of ref document: EP

Kind code of ref document: A1