WO2023103506A1 - Memory management method for device, memory management device and computing system - Google Patents

Memory management method for device, memory management device and computing system

Info

Publication number
WO2023103506A1
Authority
WO
WIPO (PCT)
Prior art keywords
memory
system memory
memory block
hardware unit
physical memory
Prior art date
Application number
PCT/CN2022/119004
Other languages
English (en)
French (fr)
Inventor
艾国
杨作兴
房汝明
向志宏
Original Assignee
深圳比特微电子科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳比特微电子科技有限公司
Publication of WO2023103506A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory

Definitions

  • the present disclosure relates to the technical field of storage, and in particular to a memory management method for a device, a memory management device and a computing system.
  • One of the objectives of the present disclosure is to provide a memory management method for a device, a memory management device and a computing system.
  • a memory management method for a device where the device includes a plurality of hardware units, the memory management method includes:
  • the system memory includes physical memory and extended memory corresponding to the physical memory
  • each system memory block assigned to all hardware units running in the same application scenario is mapped to a different physical memory block;
  • system memory block of at least one application scenario in the different application scenarios is mapped to the same physical memory block.
  • the memory management device includes a memory, a processor, and instructions stored on the memory; when the instructions are executed by the processor, the steps of the above memory management method are implemented.
  • a computing system includes a computing device and the above-mentioned memory management device, wherein the computing device includes a plurality of hardware units; or the computing system includes a computing device on which the above-mentioned memory management device is provided.
  • a non-transitory computer-readable storage medium stores instructions which, when executed by a processor, implement the steps of the above-mentioned memory management method.
  • a computer program product includes instructions, and when the instructions are executed by a processor, the steps of the above-mentioned memory management method are implemented.
  • FIG. 1 shows a schematic flowchart of a memory management method for a device according to an exemplary embodiment of the present disclosure
  • FIG. 2 shows a schematic diagram of memory allocation according to a first specific example of the present disclosure
  • FIG. 3 shows a schematic diagram of memory allocation according to a second specific example of the present disclosure
  • FIG. 4 shows a schematic diagram of memory allocation according to a third specific example of the present disclosure
  • FIG. 5 shows a schematic diagram of memory allocation according to a fourth specific example of the present disclosure
  • FIG. 6 shows a schematic diagram of memory allocation according to a fifth specific example of the present disclosure.
  • FIG. 7 shows a schematic diagram of memory allocation according to a sixth specific example of the present disclosure.
  • FIG. 8 shows a schematic diagram of a memory management device according to an exemplary embodiment of the present disclosure
  • FIG. 9 shows a schematic diagram of a computing device according to an exemplary embodiment of the present disclosure.
  • FIG. 10 shows a schematic diagram of a computing device according to another exemplary embodiment of the present disclosure.
  • the present disclosure proposes a memory management method for a device and a corresponding computing system, where the computing system may include a computing device.
  • in the memory management method of the present disclosure, at least some hardware units in the device share physical memory across different application scenarios, so that the memory is fully utilized.
  • the computing device 800 may include multiple hardware units, such as hardware unit H1 and hardware unit H2, etc., and these hardware units may be connected to the bus of the computing device 800 for data exchange.
  • the memory management method can include:
  • Step S100: determining multiple application scenarios of the device.
  • the multiple application scenarios include at least two application scenarios that do not run simultaneously.
  • hardware units operating in different application scenarios may share the same part of physical memory.
  • only the running time of the application scenario may be considered, and the specific operations executed in the same application scenario may be related or unrelated.
  • for example, if the device is a smart network camera, all of its possible application scenarios may include a recording application scenario and a playback application scenario that do not run simultaneously; it can be understood that when the device is another type of device, the corresponding multiple application scenarios may be determined according to the operating characteristics of the device, which is not limited here.
  • the memory management method may also include:
  • Step S200: according to the number of physical memory blocks required by each hardware unit in each application scenario, and according to the established mapping relationship between the system memory blocks in the system memory and the physical memory blocks in the physical memory, determining the system memory blocks allocated to the corresponding hardware unit.
  • System memory is the main area where the system temporarily stores program instructions and data, and it can include multiple blocks of system memory of the same size.
  • System memory can include physical memory and extended memory corresponding to the physical memory.
  • the first type of system memory block in the system memory corresponds to a real physical memory block; when such a block is called, the actual physical memory block is called
  • the second type of system memory block is extended memory; its mapping relationship to physical memory blocks can be determined according to requirements
  • when a second-type system memory block is called, the corresponding physical memory block is actually called based on the mapping relationship between that system memory block and the physical memory block.
  • Extended memory is obtained through expansion, which allows the hardware unit to "see" system memory that is larger than the actual physical memory.
  • a system memory block or a group formed by several system memory blocks can be the smallest allocatable unit when allocating system memory, wherein the size of each system memory block can be 32KB, 64KB, or 128KB.
  • to identify the individual system memory blocks in the system memory, each may be assigned a unique memory address.
  • the memory addresses in the system memory can be expressed as 00, 01, 02, ..., 31, etc.
  • system memory blocks s00, s01, s02, and s31 represent the system memory blocks with memory addresses 00, 01, 02, and 31, respectively.
  • a group of contiguous memory blocks is denoted by its first and last memory blocks
  • for example, the system memory block group from system memory block s01 to system memory block s15 can be denoted as (s01, s15), which includes system memory blocks s01, s02, s03, s04, s05, s06, s07, s08, s09, s10, s11, s12, s13, s14, and s15.
  • the memory addresses in the physical memory can be expressed as 00, 01, 02, ..., 15, etc., and the letter "p" is added before the memory address to indicate the corresponding physical memory block,
  • physical memory blocks p00, p01, p02, and p15 represent physical memory blocks with memory addresses 00, 01, 02, and 15, respectively.
  • the size of each physical memory block may be the same as that of the corresponding system memory block, such as 32KB, 64KB, or 128KB.
  • the hardware units can also be grouped and/or sorted according to, e.g., the number of required system memory blocks, so that memory can be allocated more systematically in the subsequent steps and memory utilization improved.
  • each system memory block allocated to all hardware units operating in the same application scenario is mapped to different physical memory blocks respectively.
  • operations in the same application scenario may be performed simultaneously and need to call the corresponding system memory (i.e., the corresponding physical memory) at the same time, so the system memory blocks of all hardware units running in the same application scenario should be mapped to different physical memory blocks in the physical memory and must not be shared; this avoids conflicts caused by system memory blocks that may be called simultaneously being mapped to the same physical memory block, thereby ensuring the normal operation of the application scenario.
  • the size of the physical memory should be at least the size of the system memory required by the application scenario that requires the most system memory blocks among the multiple application scenarios, so as to ensure that each application scenario can run normally.
  • for example, suppose the determined application scenarios of a device are A1, A2 and A3, where the hardware units need to call 82MB of system memory in total in scenario A1, 240MB in total in scenario A2, and 25MB in total in scenario A3
  • the size of the physical memory should then be greater than or equal to the largest required system memory, 240MB; for example, the physical memory can be 256MB.
  • the system memory blocks of at least one application scenario and the system memory blocks of another application scenario among the different application scenarios are mapped to the same physical memory blocks.
  • a physical memory block called by a hardware unit in one application scenario can also be called by another hardware unit in another application scenario without causing a conflict.
  • at least a part of the system memory blocks in one system memory block group and at least a part of the system memory blocks in another system memory block group can be mapped to the same physical memory blocks in the physical memory, so that the same physical memory block can be reused across multiple application scenarios, fully utilizing the physical memory and avoiding an increase in hardware cost.
  • in the process of allocating memory, the hardware units can also be sorted according to factors such as the amount of memory each hardware unit may share with other hardware units, so as to plan the memory allocation of each hardware unit in a unified way and fully utilize the physical memory.
  • after memory allocation is completed, each hardware unit will have a corresponding system memory block group, and the system memory block group is mapped to corresponding physical memory blocks in the physical memory. In this way, based on the correspondence or mapping relationships among the hardware units, the system memory block groups and the physical memory blocks, the corresponding physical memory blocks can be called when the hardware unit runs in the corresponding application scenario.
  • each system memory block in the system memory may be limited to being allocated to at most one hardware unit among the plurality of hardware units; that is to say, the same system memory block cannot be shared by two or more hardware units, so as not to cause confusion in memory allocation.
  • the size of the system memory can be a multiple of the size of the physical memory, the multiple being a number greater than 1. Generally speaking, the size of the system memory can be 1.5 to 5 times the size of the physical memory. For example, in the embodiments shown in FIG. 2, FIG. 6 and FIG. 7, the system memory is 2 times the size of the physical memory; in the embodiments shown in FIG. 3 and FIG. 4, the system memory is 3 times the size of the physical memory; and in the embodiment shown in FIG. 5, the system memory is 1.5 times the size of the physical memory. The mapping relationship between system memory blocks and physical memory blocks will be described in detail later.
  • the size of the system memory may be an integer multiple of the size of the physical memory (such as shown in the specific examples in FIGS. 2 to 4 and FIGS. 6 to 7 ).
  • the multiple of the size of the system memory and the size of the physical memory may be determined according to the number of scenarios of multiple application scenarios. For example, when there are a total of two application scenarios that do not occur simultaneously in the device, the size of the system memory can be twice the size of the physical memory. Alternatively, when there are a total of three or four application scenarios that do not occur simultaneously in the device, the size of the system memory may be three times or four times the size of the physical memory, respectively.
  • the system memory can be divided into multiple groups of system sub-memory.
  • the size of each group of system sub-memory is equal to the size of the physical memory, and a fixed one-to-one mapping relationship between the system memory blocks in each group of system sub-memory and the physical memory blocks in the physical memory is established, so that the physical memory block to which each system memory block may be mapped is fixed.
  • the i-th system memory block always corresponds to the i-th physical memory block in the physical memory; that is, in each group of system sub-memory, the first system memory block corresponds to the first physical memory block in the physical memory, the second system memory block corresponds to the second physical memory block, and so on.
  • the (i+n*Np)-th memory block among the system memory blocks can be mapped to the i-th physical memory block in the physical memory, where i is an integer greater than zero, n is an integer greater than or equal to zero, and Np is the total number of physical memory blocks in the physical memory.
  • when a system memory block is allocated to a hardware unit, the occupancy status of that system memory block can be marked as "1"; otherwise it is set to "0".
  • for each hardware unit, a corresponding memory allocation mapping table can be formed from the memory addresses of the corresponding system memory blocks and their occupancy statuses; when the hardware unit runs, the corresponding system memory blocks, and in turn the corresponding physical memory blocks, can be called according to the memory allocation mapping table.
  • for example, in the specific example of FIG. 7, the memory allocation mapping table of the hardware unit Ha can be expressed as:
    s01 s02 s03 s04
    1 1 1 1
  • the memory allocation mapping table of the hardware unit Hd can be expressed as:
    s16 s17 s18 s19 s20 s21 s22 s23 s24 s25 s26
    1 1 1 1 0 0 0 0 0 1 1
  • where "1" indicates a block in use and "0" an unused one; the physical memory blocks actually used by Hd are p00, p01, p02, p03, p09 and p10.
  • system memory blocks allocated to the same hardware unit can also be limited to the same group of system sub-memory, so as to facilitate the management of the correspondence or mapping relationships among hardware units, system memory blocks and physical memory blocks.
  • the mapping relationship between the system memory blocks in the corresponding system memory block group and the physical memory blocks in the physical memory can be determined so as to determine the system memory block group allocated to the corresponding hardware unit in the system memory. That is to say, before the device is put into use, the mapping relationship between system memory blocks and physical memory blocks can be determined in advance, so as to pre-allocate corresponding system memory blocks for all the hardware units. Alternatively, before each of the multiple application scenarios is started, the mapping relationship between system memory blocks and physical memory blocks may be determined, so as to allocate corresponding system memory blocks to the corresponding hardware units.
  • the correspondence between each hardware unit among the multiple hardware units and its system memory block group can remain unchanged, and the mapping relationship between the system memory blocks in each system memory block group and the physical memory blocks in the physical memory can remain unchanged; that is, the system memory blocks allocated to the hardware units and the overall mapping relationship between system memory blocks and physical memory blocks need not change. In this way, during the operation of the device, dynamic requesting and releasing of memory is no longer needed, which greatly reduces software overhead, helps improve the operating efficiency of the device, and avoids producing a large number of memory fragments.
  • the memory management method may also include:
  • the usage scenarios of the device may change fundamentally. For example, a device may be recycled from another computing system, and different computing systems may handle completely different tasks.
  • the multiple application scenarios determined for the original computing system and the corresponding memory allocation relationships may then no longer be applicable. Therefore, when the set of application scenarios of the device changes, memory can be re-allocated for the multiple hardware units in the device.
  • the multiple application scenarios of the device can be re-determined, and then, based on the number of physical memory blocks required by each hardware unit in each application scenario, and according to the established mapping relationship between the system memory blocks in the system memory and the physical memory blocks in the physical memory, the system memory blocks allocated to the corresponding hardware units are determined, where the system memory includes the physical memory and the extended memory corresponding to the physical memory.
  • the groups of system memory blocks allocated to all hardware units operating in the same application scenario are respectively mapped to different physical memory blocks in the physical memory.
  • among the system memory blocks allocated to at least two hardware units running in different application scenarios, the system memory blocks of at least one application scenario and the system memory blocks of another application scenario are mapped to the same physical memory blocks, enabling efficient and flexible utilization of the device.
  • the system memory can be allocated such that, for at least one hardware unit among the multiple hardware units, all the system memory blocks in the system memory block group corresponding to that hardware unit are mapped to contiguously distributed physical memory blocks in the physical memory. This can be achieved by planning the allocation of the system memory as a whole after determining the number of system memory blocks required by each hardware unit in each application scenario.
  • the system memory may also be allocated such that at least one hardware unit corresponding to a contiguously distributed physical memory block in the physical memory is a hardware unit requiring the largest number of system memory blocks.
  • the maximum number of system memory blocks that a hardware unit may require in any application scenario among the multiple application scenarios may be determined as the number of system memory blocks to be allocated to that hardware unit, and the hardware unit allocated the largest number of system memory blocks may be singled out among the multiple hardware units.
  • a system memory block group may be preferentially allocated to this hardware unit, and the corresponding system memory block group may be mapped to contiguously distributed physical memory blocks in the physical memory.
  • the system memory may also be allocated such that, among the multiple hardware units, the number of hardware units corresponding to contiguously distributed physical memory blocks in the physical memory is maximized. For example, the maximum number of system memory blocks that a hardware unit may need in each of the multiple application scenarios may be determined as the number of system memory blocks to be allocated to that hardware unit, and as many hardware units as possible are allocated contiguously distributed physical memory blocks.
  • a plurality of hardware units may include a first hardware unit H1 and a second hardware unit H2, and the first hardware unit H1 and the second hardware unit H2 may run in different application scenarios; in this way, the first hardware unit H1 and the second hardware unit H2 can share at least part of the physical memory without causing conflicts.
  • the system memory 610 may include a first system memory block group (s00, s05) allocated to the first hardware unit H1 and a second system memory block group (s08, s11) allocated to the second hardware unit H2, and at least a part of the first system memory blocks in the first system memory block group (s00, s05) and at least a part of the second system memory blocks in the second system memory block group (s08, s11) can be mapped to the same physical memory blocks in the physical memory 620.
  • the first system memory block s00 and the second system memory block s08 are mapped to the same physical memory block p00, the first system memory block s01 and the second system memory block s09 are mapped to the same physical memory block p01, the first system memory block s02 and the second system memory block s10 are mapped to the same physical memory block p02, and the first system memory block s03 and the second system memory block s11 are mapped to the same physical memory block p03.
  • the plurality of hardware units may further include a third hardware unit H3, and the first hardware unit H1, the second hardware unit H2 and the third hardware unit H3 run in different application scenarios; in this way, the first hardware unit H1, the second hardware unit H2 and the third hardware unit H3 can share at least part of the physical memory without causing conflicts.
  • the system memory 610 may further include a third system memory block group (s16, s23) allocated to the third hardware unit H3, and at least a part of the first system memory blocks in the first system memory block group (s00, s05), at least a part of the second system memory blocks in the second system memory block group (s08, s11) and at least a part of the third system memory blocks in the third system memory block group (s16, s23) are mapped to the same physical memory blocks in the physical memory.
  • the first system memory block s00, the second system memory block s08 and the third system memory block s16 are mapped to the same physical memory block p00
  • the first system memory block s01, the second system memory block s09 and the third system memory block s17 are mapped to the same physical memory block p01
  • the first system memory block s02, the second system memory block s10 and the third system memory block s18 are mapped to the same physical memory block p02
  • the first system memory block s03, the second system memory block s11 and the third system memory block s19 are mapped to the same physical memory block p03
  • the first system memory block s04 and the third system memory block s20 are mapped to the same physical memory block p04
  • the first system memory block s05 and the third system memory block s21 are mapped to the same physical memory block p05
  • the third system memory blocks s22 and s23 are respectively mapped to the physical memory blocks p06 and p07.
  • the plurality of hardware units may further include a fourth hardware unit H4; the fourth hardware unit H4 and the first hardware unit H1 run in different application scenarios, while the fourth hardware unit H4 and the second hardware unit H2 work together in at least one application scenario among the multiple application scenarios, so that the first hardware unit H1 and the fourth hardware unit H4 may share at least part of the physical memory, but the second hardware unit H2 and the fourth hardware unit H4 cannot share physical memory.
  • the system memory 610 may further include a fourth system memory block group (s20, s23) allocated to the fourth hardware unit H4; at least another part of the first system memory blocks in the first system memory block group (s00, s05) and at least a part of the fourth system memory blocks in the fourth system memory block group (s20, s23) are mapped to the same physical memory blocks in the physical memory, while the second system memory block group (s08, s11) and the fourth system memory block group (s20, s23) are mapped to different physical memory blocks in the physical memory.
  • since the first system memory block s00 and the second system memory block s08 are mapped to the same physical memory block p00, the first system memory block s01 and the second system memory block s09 are mapped to the same physical memory block p01, the first system memory block s02 and the second system memory block s10 are mapped to the same physical memory block p02, and the first system memory block s03 and the second system memory block s11 are mapped to the same physical memory block p03, the fourth system memory block group (s20, s23) cannot be mapped to the physical memory block group (p00, p03) to which the second system memory block group (s08, s11) has already been mapped.
  • with the one-to-one mapping between the system memory blocks in each group of system sub-memory and the physical memory blocks fixed, the fourth hardware unit H4 cannot occupy the first four system memory blocks (s16, s19) of the system sub-memory (s16, s23), so as not to conflict with the second hardware unit H2, and can only occupy the last four system memory blocks (s20, s23).
  • the fourth system memory block s20 and the first system memory block s04 can be mapped to the same physical memory block p04
  • the fourth system memory block s21 can be mapped to the same physical memory block p05 as the first system memory block s05
  • the other fourth system memory blocks s22 and s23 can be mapped to physical memory blocks p06 and p07 respectively.
  • the multiple hardware units may further include a fifth hardware unit H5; the fifth hardware unit H5 and the first hardware unit H1 run together in at least one application scenario among the multiple application scenarios, and the fifth hardware unit H5 and the second hardware unit H2 run together in at least another application scenario among the multiple application scenarios, so that the fifth hardware unit H5 cannot share physical memory with either the first hardware unit H1 or the second hardware unit H2.
  • the system memory 610 may include a fifth system memory block group (s16, s23) allocated to the fifth hardware unit H5, where the first system memory block group (s00, s05) and the fifth system memory block group (s16, s23) are mapped to different physical memory blocks in the physical memory, and the second system memory block group (s08, s11) and the fifth system memory block group (s16, s23) are mapped to different physical memory blocks in the physical memory.
  • the fifth system memory block group (s16, s23) may be mapped to the physical memory block group (p06, p13), which is not occupied by the first hardware unit H1 or the second hardware unit H2.
  • the fifth system memory block group (s16, s23) may be mapped to the physical memory block group (p08, p15), which is not occupied by the first hardware unit H1 or the second hardware unit H2, so that it can be ensured that the i-th system memory block in each system sub-memory is mapped to the i-th physical memory block in the corresponding physical sub-memory, which facilitates memory management.
  • the size of the system memory may be 1.5 times the size of the physical memory.
  • when the fifth system memory block group (s06, s07) required by the fifth hardware unit H5 is sufficiently small, the fifth system memory block group (s06, s07) can reside in the same system sub-memory as the first system memory block group (s00, s05) and be mapped one-to-one to the corresponding physical memory blocks in the physical memory, while the second system memory block group (s08, s11) can share a portion of the physical memory blocks with the first system memory block group (s00, s05).
  • the device is the above-mentioned smart network camera, which has two application scenarios of recording and playback.
  • Hardware units Ha and Hc are only used in recording application scenarios
  • hardware units Hd and He are only used in playback application scenarios
  • hardware unit Hb will be used in both the recording application scenario and the playback application scenario; memory can then be allocated for the smart network camera as follows.
  • in the recording application scenario, the hardware units Ha, Hb and Hc are used, so the system memory block group (s00, s03) can be allocated to the hardware unit Ha and mapped to the physical memory block group (p00, p03); the system memory block group (s04, s08) can be assigned to hardware unit Hb and mapped to the physical memory block group (p04, p08); and the system memory block group (s09, s13) can be assigned to hardware unit Hc and mapped to the physical memory block group (p09, p13).
  • in the playback application scenario, hardware units Hb, Hd and He are used, where hardware unit Hd can share physical memory with hardware unit Ha and/or Hc, and hardware unit He can share physical memory with hardware unit Ha and/or Hc, but neither hardware unit Hd nor He can share physical memory with hardware unit Hb.
  • the system memory block group (s16, s26) can be allocated to the hardware unit Hd, where (s16, s19) are mapped to the physical memory block group (p00, p03); since the physical memory block group (p04, p08) to which the system memory block group (s20, s24) maps is occupied by the hardware unit Hb, the system memory block group (s20, s24) cannot be allocated to the hardware unit Hd, i.e., the system memory block group (s20, s24) is left vacant and not allocated to any hardware unit to avoid conflicts, but the following system memory block group (s25, s26) can be allocated to hardware unit Hd and mapped to the physical memory block group (p09, p10), achieving sharing with the hardware unit Hc.
  • the actually allocated system memory blocks are (s16, s19) and (s25, s26).
  • the system memory block group (s27, s31) can be allocated to the hardware unit He and mapped to the physical memory blocks (p11, p15), where the physical memory blocks (p11, p13) are shared by the hardware unit Hc and the hardware unit He.
  • the disclosed memory management method proposes a mapping relationship and algorithm between system memory and physical memory.
  • by expanding the system memory, the hardware units in the device can see more system memory without increasing the actual physical memory, which improves the utilization of the physical memory.
  • the system memory blocks assigned to the corresponding hardware units are determined according to the established mapping relationship between the system memory blocks in the system memory and the physical memory blocks in the physical memory; such advance planning is conducive to improving the stability and reliability of the system, and makes it very convenient to debug the equipment when a problem occurs.
  • such devices or computing devices may include, for example, Artificial Intelligence of Things (AIoT) devices that can be used with AIoT technology.
  • AIoT technology integrates artificial intelligence (AI) technology and Internet of Things (IoT) technology.
  • in AIoT technology, massive data from different dimensions can be generated and collected through the Internet of Things, and through big data analysis, artificial intelligence and other technologies, everything is digitized and intelligently connected, thus forming an intelligent ecosystem and realizing the integration and intercommunication between different smart terminal devices, different system platforms, and different application scenarios.
  • the device can be used as an AIoT device at the edge or on the mobile side, and can perform tasks that used to be performed by cloud devices so as to provide services nearby, helping to achieve faster service response and better privacy protection.
  • the device can also be used in other applications and implement corresponding functions, which is not limited here.
  • a device or computing device of the present disclosure may be included in a system on a chip (SoC).
  • SoC can have small size, high speed, low power consumption and rich system functions, and has relatively low cost.
  • physical memory 620 may be included in computing device 800 as shown in FIG. 9 .
  • the physical memory 620 can also be set separately from the computing device 800 , for example, the physical memory 620 and the computing device 800 can be different components on the SoC, and exchange data through a bus or the like.
  • the computing device includes a plurality of hardware units.
  • the memory management device 900 may include a memory 910, a processor 920, and instructions stored in the memory 910; when the instructions are executed by the processor 920, the steps of the memory management method described above are implemented.
  • a memory management device 900 may be included in computing device 800 .
  • the computing device 800 may be provided separately from the memory management device 900 .
  • the processor 920 may execute various actions and processes according to instructions stored in the memory 910 .
  • the processor 920 may be an integrated circuit chip, which has a signal processing capability.
  • the above-mentioned processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
  • Various methods, steps and logic block diagrams disclosed in the embodiments of the present disclosure may be implemented or executed.
  • the general-purpose processor may be a microprocessor, or the processor may be any conventional processor, etc., and may be of the x86 architecture or the ARM architecture, or the like.
  • the memory 910 stores executable instructions which, when executed by the processor 920, perform the memory management method described above.
  • Memory 910 may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory.
  • the nonvolatile memory can be read only memory (ROM), programmable read only memory (PROM), erasable programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), or flash memory.
  • Volatile memory can be random access memory (RAM), which acts as external cache memory.
  • by way of example and not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchlink dynamic random access memory (SLDRAM) and direct Rambus random access memory (DR RAM).
  • the present disclosure also proposes a non-transitory computer-readable storage medium, where instructions are stored on the non-transitory computer-readable storage medium, and when the instructions are executed by a processor, the steps of the above-mentioned memory management method are realized.
  • a non-transitory computer readable storage medium in embodiments of the present disclosure can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. It should be noted that the computer-readable storage media described herein are intended to include, but are not limited to, these and any other suitable types of memories.
  • the present disclosure also provides a computer program product, the computer program product may include instructions, and when the instructions are executed by a processor, the steps of the above-mentioned memory management method are realized.
  • the instructions may be any set of instructions to be executed directly by one or more processors, such as machine code, or indirectly, such as a script.
  • the terms "instruction”, “application”, “process”, “step” and “program” are used interchangeably herein. Instructions may be stored in object code format for direct processing by one or more processors, or in any other computer language, including scripts or collections of stand-alone source code modules interpreted on demand or compiled ahead of time. Instructions may include instructions that cause, for example, one or more processors to function as various neural networks herein. The rest of this document explains the functions, methods, and routines of the directives in more detail.
  • implementations of the present disclosure may also include the following examples:
  • a memory management method for a device comprising:
  • each system memory block assigned to all hardware units running in the same application scenario is mapped to a different physical memory block;
  • system memory block of at least one application scenario in the different application scenarios is mapped to the same physical memory block.
  • the system memory block is allocated such that all system memory blocks allocated for at least one hardware unit in the plurality of hardware units are mapped to contiguously distributed physical memory blocks in the physical memory.
  • the system memory is allocated such that at least one hardware unit corresponding to the continuously distributed physical memory blocks in the physical memory is the hardware unit requiring the largest number of system memory blocks.
  • the system memory is allocated such that among the plurality of hardware units, the number of hardware units corresponding to consecutively distributed physical memory blocks in the physical memory is the largest.
  • the size of the physical memory is greater than or equal to the size of the system memory required by the application scenario requiring the largest number of system memory blocks among the multiple application scenarios.
  • the (i+n*Np)-th extended memory block among the system memory blocks is mapped to the i-th physical memory block in the physical memory, where i is an integer greater than zero, n is an integer greater than or equal to zero, and Np is the total number of physical memory blocks in the physical memory.
  • the size of the system memory is a multiple of the size of the physical memory, and the multiple is a number greater than 1.
  • the multiple of the size of the system memory and the size of the physical memory is determined according to the number of scenarios of the multiple application scenarios.
  • the memory management method wherein the plurality of hardware units include a first hardware unit and a second hardware unit, and the first hardware unit and the second hardware unit run in different application scenarios respectively,
  • the system memory includes a first system memory block group assigned to the first hardware unit and a second system memory block group assigned to the second hardware unit, and the first system memory block group At least a part of the first system memory block and at least a part of the second system memory block in the second system memory block group are mapped to the same physical memory block in the physical memory.
  • the multiple hardware units further include a fourth hardware unit, the fourth hardware unit and the first hardware unit run in different application scenarios, and the fourth hardware unit and the second hardware unit operate together in at least one application scenario among the plurality of application scenarios
  • the system memory further includes a fourth system memory block group allocated to the fourth hardware unit, and at least another part of the first system memory blocks in the first system memory block group and at least a part of the fourth system memory blocks in the fourth system memory block group are mapped to the same physical memory blocks in the physical memory, while the second system memory block group and the fourth system memory block group are mapped to different physical memory blocks in the physical memory.
  • the multiple hardware units further include a fifth hardware unit; the fifth hardware unit and the first hardware unit run together in at least one application scenario among the multiple application scenarios, and the fifth hardware unit and the second hardware unit run together in at least another application scenario among the plurality of application scenarios; the system memory also includes a fifth system memory block group allocated to the fifth hardware unit, and the first system memory block group and the fifth system memory block group are mapped to different physical memory blocks in the physical memory, and the second system memory block group and the fifth system memory block group are mapped to different physical memory blocks in the physical memory.
  • a memory management device comprising a memory, a processor, and instructions stored on the memory, where the instructions, when executed by the processor, implement the steps of the memory management method according to any one of 1 to 11.
  • a computing system comprising:
  • a computing device and a memory management device, wherein said computing device comprises a plurality of hardware units; or
  • a computing device on which the memory management device according to 12 is provided.
  • the computing system according to 13, the computing device comprising an artificial intelligence Internet of Things device; and/or
  • the computing device is included in a system on a chip.
  • a non-transitory computer-readable storage medium, where instructions are stored on the non-transitory computer-readable storage medium, and when the instructions are executed by a processor, the steps of the memory management method according to any one of 1 to 11 are implemented.
  • a computer program product comprising instructions, when the instructions are executed by a processor, the steps of the memory management method according to any one of 1 to 11 are implemented.
  • the word "exemplary” means “serving as an example, instance, or illustration” rather than as a “model” to be exactly reproduced. Any implementation described illustratively herein is not necessarily to be construed as preferred or advantageous over other implementations. Furthermore, the disclosure is not to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary or detailed description.
  • the word “substantially” is meant to include any minor variations due to defects in design or manufacturing, device or component tolerances, environmental influences, and/or other factors.
  • the word “substantially” also allows for differences from a perfect or ideal situation due to parasitic effects, noise, and other practical considerations that may exist in an actual implementation.
  • "connected" means that one element/node/feature is directly connected to (or directly communicates with) another element/node/feature, electrically, mechanically, logically or otherwise.
  • coupled means that one element/node/feature can be connected, directly or indirectly, mechanically, electrically, logically, or otherwise with another element/node/feature to Interactions are allowed even though the two features may not be directly connected. That is, “coupled” is intended to encompass both direct and indirect couplings of elements or other features, including connections utilizing one or more intervening elements.

Abstract

A memory management method for a device, a memory management device, and a computing system. The method includes: determining multiple application scenarios of the device; according to the number of physical memory blocks required by each hardware unit in each application scenario, and according to the established mapping relationship between system memory blocks in the system memory and physical memory blocks in the physical memory, determining the system memory blocks allocated to the corresponding hardware unit, the system memory including the physical memory and extended memory corresponding to the physical memory; wherein the system memory blocks allocated to all hardware units running in the same application scenario are mapped to different physical memory blocks; or, among the system memory blocks allocated to at least two hardware units running in different application scenarios, the system memory blocks of at least one of the different application scenarios and the system memory blocks of another application scenario are mapped to the same physical memory blocks.

Description

Memory management method for device, memory management device and computing system
Cross-Reference to Related Applications
This application claims priority to Chinese Patent Application No. 202111502847.6, filed on December 10, 2021 and entitled "Memory management method for device, memory management device and computing system", the disclosure of which is incorporated herein by reference in its entirety.
Technical Field
The present disclosure relates to the technical field of storage, and in particular to a memory management method for a device, a memory management device, and a computing system.
Background
With the development of technology, devices have become increasingly powerful, and the memory required to implement their various functions has grown accordingly. However, current memory management approaches still suffer from many problems, such as low memory utilization. There is therefore a need for new memory management approaches.
Summary
One of the objectives of the present disclosure is to provide a memory management method for a device, a memory management device, and a computing system.
According to a first aspect of the present disclosure, a memory management method for a device is provided, where the device includes multiple hardware units, and the memory management method includes:
determining multiple application scenarios of the device;
according to the number of physical memory blocks required by each hardware unit in each application scenario, and according to the established mapping relationship between system memory blocks in the system memory and physical memory blocks in the physical memory, determining the system memory blocks allocated to the corresponding hardware unit, the system memory including the physical memory and extended memory corresponding to the physical memory;
wherein the system memory blocks allocated to all hardware units running in the same application scenario are mapped to different physical memory blocks; or
among the system memory blocks allocated to at least two hardware units running in different application scenarios, the system memory blocks of at least one of the different application scenarios and the system memory blocks of another application scenario are mapped to the same physical memory blocks.
According to a second aspect of the present disclosure, a memory management device is provided, the memory management device including a memory, a processor and instructions stored on the memory; when the instructions are executed by the processor, the steps of the memory management method described above are implemented.
According to a third aspect of the present disclosure, a computing system is provided, the computing system including a computing device and the memory management device described above, where the computing device includes multiple hardware units; or the computing system includes a computing device on which the memory management device described above is provided.
According to a fourth aspect of the present disclosure, a non-transitory computer-readable storage medium is provided, on which instructions are stored; when the instructions are executed by a processor, the steps of the memory management method described above are implemented.
According to a fifth aspect of the present disclosure, a computer program product is provided, the computer program product including instructions; when the instructions are executed by a processor, the steps of the memory management method described above are implemented.
Other features and advantages of the present disclosure will become apparent from the following detailed description of exemplary embodiments of the present disclosure with reference to the accompanying drawings.
Brief Description of the Drawings
The accompanying drawings, which constitute a part of the specification, illustrate embodiments of the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.
The present disclosure can be understood more clearly from the following detailed description with reference to the accompanying drawings, in which:
FIG. 1 shows a schematic flowchart of a memory management method for a device according to an exemplary embodiment of the present disclosure;
FIG. 2 shows a schematic diagram of memory allocation according to a first specific example of the present disclosure;
FIG. 3 shows a schematic diagram of memory allocation according to a second specific example of the present disclosure;
FIG. 4 shows a schematic diagram of memory allocation according to a third specific example of the present disclosure;
FIG. 5 shows a schematic diagram of memory allocation according to a fourth specific example of the present disclosure;
FIG. 6 shows a schematic diagram of memory allocation according to a fifth specific example of the present disclosure;
FIG. 7 shows a schematic diagram of memory allocation according to a sixth specific example of the present disclosure;
FIG. 8 shows a schematic diagram of a memory management device according to an exemplary embodiment of the present disclosure;
FIG. 9 shows a schematic diagram of a computing device according to an exemplary embodiment of the present disclosure;
FIG. 10 shows a schematic diagram of a computing device according to another exemplary embodiment of the present disclosure.
Note that in the embodiments described below, the same reference numeral is sometimes used across different drawings to denote the same part or parts having the same function, and repeated description thereof is omitted. In this specification, similar reference numerals and letters denote similar items; therefore, once an item is defined in one drawing, it need not be discussed further in subsequent drawings.
For ease of understanding, the positions, sizes, ranges, etc. of the structures shown in the drawings may not represent their actual positions, sizes, ranges, etc. Therefore, the disclosed invention is not limited to the positions, sizes, ranges, etc. disclosed in the drawings. Furthermore, the drawings are not necessarily drawn to scale, and some features may be exaggerated to show details of particular components.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that, unless specifically stated otherwise, the relative arrangement of components and steps, numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure.
The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the present disclosure or its application or use. That is, the memory management method for a device, the memory management device, and the computing system herein are shown by way of example to illustrate different embodiments of the circuits or methods of the present disclosure, and are not intended to be limiting. Those skilled in the art will understand that they merely illustrate exemplary, rather than exhaustive, ways in which the invention may be practiced.
Techniques, methods, and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods, and devices should be regarded as part of the specification as granted.
With the development of chip processes, the integration density and computing power of devices have continuously improved. However, as the functions of devices become more and more complex, their demand for memory also grows. To meet the memory demand, a device can be provided with larger physical memory, but this usually means an increase in hardware cost. Alternatively, software can dynamically request memory to improve memory utilization: during the operation of the device, when a hardware unit needs memory, software requests and allocates memory for that hardware unit, and when the hardware unit no longer needs the memory, the memory can be released and reclaimed for subsequent use. However, frequent allocation and release of memory usually leads to a substantial increase in software overhead and tends to produce a large amount of memory fragmentation, reducing memory utilization.
To solve at least one of the above problems, the present disclosure proposes a memory management method for a device and a corresponding computing system, where the computing system may include a computing device. In the memory management method of the present disclosure, at least some hardware units in the device share physical memory across different application scenarios, so that the memory is fully utilized. As shown in FIG. 9 and FIG. 10, in an exemplary embodiment of the present disclosure, a computing device 800 may include multiple hardware units, such as a hardware unit H1 and a hardware unit H2, and these hardware units may be connected to a bus of the computing device 800 for data exchange. As shown in FIG. 1, the memory management method may include:
Step S100: determining multiple application scenarios of the device.
To facilitate allocating memory to the hardware units in subsequent steps, the multiple application scenarios include at least two application scenarios that do not run simultaneously. In this way, hardware units running in different application scenarios can potentially share the same portion of physical memory. Furthermore, for ease of processing, in some embodiments only the running time of an application scenario may be considered, and the specific operations executed within the same application scenario may be related or unrelated. For example, if the device is a smart network camera, all of its possible application scenarios may include a recording application scenario and a playback application scenario that do not run simultaneously. It can be understood that when the device is another type of device, the corresponding multiple application scenarios may be determined according to the operating characteristics of the device, which is not limited here.
Returning to FIG. 1, the memory management method may further include:
Step S200: according to the number of physical memory blocks required by each hardware unit in each application scenario, and according to the established mapping relationship between system memory blocks in the system memory and physical memory blocks in the physical memory, determining the system memory blocks allocated to the corresponding hardware unit.
System memory is the main area where the system temporarily stores program instructions and data, and it may include multiple system memory blocks of the same size. The system memory may include physical memory and extended memory corresponding to the physical memory. In other words, the first type of system memory block in the system memory corresponds directly to a real physical memory block: when such a block is called, the actual physical memory block is called. The second type of system memory block is extended memory; its mapping to physical memory blocks can be determined according to requirements, and when such a block is called, the corresponding physical memory block is actually called based on the established mapping relationship between that system memory block and the physical memory block. Through this expansion, the extended memory allows the hardware units to "see" system memory that is larger than the actual physical memory. A single system memory block, or a group formed by several system memory blocks, can be the smallest allocatable unit when allocating system memory, where the size of each system memory block may be, for example, 32KB, 64KB or 128KB. To identify the individual system memory blocks in the system memory, each may be assigned a unique memory address. As shown in FIG. 2 to FIG. 7, the memory addresses in the system memory can be expressed as 00, 01, 02, ..., 31, etc. Hereinafter, to distinguish system memory blocks from physical memory blocks, the letter "s" is prefixed to a memory address to denote the corresponding system memory block; for example, system memory blocks s00, s01, s02 and s31 denote the system memory blocks with memory addresses 00, 01, 02 and 31, respectively. In addition, herein a group of contiguous memory blocks is denoted by its first and last blocks; for example, the system memory block group from system memory block s01 to system memory block s15 can be denoted as (s01, s15), which includes system memory blocks s01, s02, s03, s04, s05, s06, s07, s08, s09, s10, s11, s12, s13, s14 and s15.
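As one possible representation of this notation (a sketch only; the struct and helper names are assumptions for illustration), block addresses and contiguous block groups can be modeled as plain integers and ranges, using "s" for system blocks and "p" for physical blocks (introduced below):

    #include <stdio.h>

    /* A contiguous group of memory blocks, written (first, last) in the text. */
    typedef struct { int first, last; } block_group_t;

    /* Print a block label such as "s07" (system) or "p07" (physical). */
    static void print_block(char kind, int addr) { printf("%c%02d", kind, addr); }

    int main(void)
    {
        block_group_t g = { 1, 15 };          /* the group (s01, s15) */
        printf("group (s%02d, s%02d) contains %d blocks\n",
               g.first, g.last, g.last - g.first + 1);
        print_block('s', 7);                  /* prints "s07" */
        printf("\n");
        print_block('p', 7);                  /* prints "p07" */
        printf("\n");
        return 0;
    }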
During the operation of the device, when a certain group of system memory blocks is called, what is actually called may be a group of physical memory blocks in the physical memory corresponding to that group of system memory blocks. That is, there can be a certain correspondence between system memory blocks and physical memory blocks, namely the mapping relationship between system memory blocks and physical memory blocks that will be described in detail below. Similarly, to identify the individual physical memory blocks in the physical memory, each may also be assigned a unique memory address. As shown in FIG. 2 to FIG. 7, the memory addresses in the physical memory can be expressed as 00, 01, 02, ..., 15, etc., and the letter "p" is prefixed to a memory address to denote the corresponding physical memory block; for example, physical memory blocks p00, p01, p02 and p15 denote the physical memory blocks with memory addresses 00, 01, 02 and 15, respectively. The size of each physical memory block may be the same as that of the corresponding system memory block, for example 32KB, 64KB or 128KB.
Determining the number of system memory blocks required by each hardware unit in each application scenario prepares for the subsequent memory allocation. In the present disclosure, for at least some hardware units to be able to share at least a portion of the physical memory, there should be at least two application scenarios among the multiple application scenarios in which only some of the multiple hardware units need to run while the others do not.
In addition, after the number of system memory blocks required by each hardware unit in each application scenario has been determined, the hardware units may also be grouped and/or sorted according to, e.g., the number of required system memory blocks, so that memory can be allocated more systematically in the subsequent steps and memory utilization improved.
The system memory blocks allocated to all hardware units running in the same application scenario are mapped to different physical memory blocks. Specifically, when allocating system memory, considering that operations in the same application scenario may be performed simultaneously and need to call the corresponding system memory (that is, the corresponding physical memory) at the same time, the system memory blocks of all hardware units running in the same application scenario should be mapped to different physical memory blocks in the physical memory and must not be shared. This avoids conflicts caused by system memory blocks that may be called simultaneously being mapped to the same physical memory block, thereby ensuring the normal operation of the application scenario.
It can be understood that the size of the physical memory should be at least the size of the system memory required by the application scenario that requires the most system memory blocks among the multiple application scenarios, so as to ensure that every application scenario can run normally. For example, in a specific example, suppose the determined application scenarios of a device are A1, A2 and A3; in application scenario A1 the corresponding hardware units need to call 82MB of system memory in total, in application scenario A2 they need 240MB in total, and in application scenario A3 they need 25MB in total. Then, to ensure that every application scenario can run normally, the size of the physical memory should be greater than or equal to the largest required system memory, 240MB; for example, the physical memory can be 256MB.
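As a small worked check of this sizing rule, using the example figures above (a sketch; the array layout is an assumption):

    #include <stdio.h>

    int main(void)
    {
        /* Total system memory (in MB) the hardware units must call in each
         * scenario: A1 = 82, A2 = 240, A3 = 25, from the example above. */
        int need[] = { 82, 240, 25 };
        int max_need = 0;
        for (int i = 0; i < 3; i++)
            if (need[i] > max_need)
                max_need = need[i];
        printf("physical memory must be >= %d MB (e.g. 256 MB)\n", max_need);
        return 0;
    }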
In addition, among the system memory blocks allocated to at least two hardware units running in different application scenarios, the system memory blocks of at least one application scenario and those of another application scenario are mapped to the same physical memory blocks. Specifically, since different application scenarios do not run at the same time, a physical memory block called by one hardware unit in one application scenario can also be called by another hardware unit in another application scenario without causing a conflict. In this way, among the system memory block groups allocated to hardware units running in different application scenarios, at least a part of the system memory blocks in one system memory block group and at least a part of the system memory blocks in another system memory block group can be mapped to the same physical memory blocks in the physical memory, so that the same physical memory block can be reused across multiple application scenarios, fully utilizing the physical memory and avoiding an increase in hardware cost.
In the process of allocating memory, the hardware units can also be sorted according to factors such as the amount of memory each hardware unit may share with other hardware units, so as to plan the memory allocation of each hardware unit in a unified way and fully utilize the physical memory. After memory allocation is completed, each hardware unit will have a corresponding system memory block group, and the system memory block group is mapped to corresponding physical memory blocks in the physical memory. In this way, based on the correspondence or mapping relationships among the hardware units, the system memory block groups and the physical memory blocks, the corresponding physical memory blocks can be called when a hardware unit runs in the corresponding application scenario.
To clearly establish the correspondence between hardware units and system memory blocks, each system memory block in the system memory may be restricted to being allocated to at most one of the multiple hardware units. That is, the same system memory block must not be shared by two or more hardware units, so as to avoid confusion in memory allocation.
In addition, since two or more system memory blocks in the system memory can be mapped to the same physical memory block, the size of the system memory is usually larger than that of the physical memory. The portion of the system memory beyond the physical memory can be called extended memory (for example, the system memory block group (s08, s15) in FIG. 2 and FIG. 6, the system memory block group (s08, s23) in FIG. 3 and FIG. 4, the system memory block group (s16, s23) in FIG. 5, and the system memory block group (s16, s31) in FIG. 7 are the extended memory portions of the system memory in the respective figures). During the operation of the device, when extended memory is called, the corresponding physical memory blocks are actually called according to the mapping relationship between these system memory blocks and the physical memory blocks in the physical memory. The size of the system memory can be a multiple of the size of the physical memory, where the multiple is a number greater than 1. In general, the size of the system memory can be 1.5 to 5 times the size of the physical memory. For example, in the embodiments shown in FIG. 2, FIG. 6 and FIG. 7, the system memory is twice the size of the physical memory; in the embodiments shown in FIG. 3 and FIG. 4, the system memory is three times the size of the physical memory; and in the embodiment shown in FIG. 5, the system memory is 1.5 times the size of the physical memory. The mapping relationship between system memory blocks and physical memory blocks will be described in detail below.
Further, in some embodiments, the size of the system memory may be an integer multiple of the size of the physical memory (for example, as shown in the specific examples in FIG. 2 to FIG. 4 and FIG. 6 to FIG. 7). Moreover, the multiple between the size of the system memory and the size of the physical memory may be determined according to the number of the multiple application scenarios. For example, when the device has two application scenarios that do not occur simultaneously, the size of the system memory can be twice the size of the physical memory. Alternatively, when the device has three or four application scenarios that do not occur simultaneously, the size of the system memory can be three or four times the size of the physical memory, respectively.
In addition, when the size of the system memory is an integer multiple of the size of the physical memory, the system memory can be divided into multiple groups of system sub-memory, where the size of each group of system sub-memory is equal to the size of the physical memory, and a fixed one-to-one mapping relationship is established between the system memory blocks in each group of system sub-memory and the physical memory blocks in the physical memory, so that the physical memory block to which each system memory block may be mapped is fixed. For example, in each group of system sub-memory, the i-th system memory block always corresponds to the i-th physical memory block in the physical memory; that is, in each group of system sub-memory, the first system memory block corresponds to the first physical memory block in the physical memory, the second system memory block corresponds to the second physical memory block, and so on. Similarly, the (i+n*Np)-th memory block among the system memory blocks can be mapped to the i-th physical memory block in the physical memory, where i is an integer greater than zero, n is an integer greater than or equal to zero, and Np is the total number of physical memory blocks in the physical memory. When allocating system memory to a hardware unit, if a certain system memory block is allocated to that hardware unit, the occupancy status of that system memory block can be marked as "1"; otherwise it is set to "0". Further, for each hardware unit, a corresponding memory allocation mapping table can be formed from the memory addresses of the corresponding system memory blocks and their occupancy statuses; when that hardware unit runs, the corresponding system memory blocks, and in turn the corresponding physical memory blocks, can be called according to the memory allocation mapping table. For example, in the specific example of FIG. 7, the memory allocation mapping table of hardware unit Ha can be expressed as:
s01 s02 s03 s04
1 1 1 1
The memory allocation mapping table of hardware unit Hd can be expressed as:
s16 s17 s18 s19 s20 s21 s22 s23 s24 s25 s26
1 1 1 1 0 0 0 0 0 1 1
其中,“1”表示使用的,“0”表示没有使用的。实际物理内存是p0,p1,p2,p3,p9和p10。
In addition, in some embodiments, the system memory blocks allocated to the same hardware unit can also be restricted to the same group of system sub-memory, to facilitate managing the correspondence or mapping relationships among hardware units, system memory blocks and physical memory blocks.
In some embodiments, to avoid the software overhead caused by memory being repeatedly requested and released, and to avoid memory fragmentation as much as possible, the mapping relationship between the system memory blocks in the corresponding system memory block group and the physical memory blocks in the physical memory can be determined before any one of the multiple application scenarios runs, i.e., before the device is started, so as to determine the system memory block group in the system memory that is allocated to the corresponding hardware unit. In other words, before the device is put into use, the mapping relationship between system memory blocks and physical memory blocks can be determined in advance, so that corresponding system memory blocks are pre-allocated for all the hardware units. Alternatively, the mapping relationship between system memory blocks and physical memory blocks can be determined before each of the multiple application scenarios is started, so as to allocate corresponding system memory blocks to the corresponding hardware units.
Further, during the running of any one of the multiple application scenarios, i.e., after the device is started or while the device is running, the correspondence between each of the multiple hardware units and its system memory block group can remain unchanged, and the mapping relationship between the system memory blocks in each system memory block group and the physical memory blocks in the physical memory can remain unchanged; that is, the system memory blocks allocated to the hardware units and the overall mapping relationship between system memory blocks and physical memory blocks need not change. In this way, during the operation of the device, dynamic requesting and releasing of memory is no longer needed, which greatly reduces software overhead, helps improve the operating efficiency of the device, and avoids producing a large amount of memory fragmentation.
In addition, in some embodiments, the memory management method may further include:
when the set of application scenarios including the multiple application scenarios changes, returning to the step of determining the multiple application scenarios of the device.
Specifically, in some embodiments, the usage scenarios of the device may change fundamentally. For example, a device may be recycled from another computing system, and different computing systems may handle completely different tasks; accordingly, the set of application scenarios of that device will be completely different from the original one, and the multiple application scenarios determined for the original computing system and the corresponding memory allocation relationships may no longer apply. Therefore, when the set of application scenarios of the device changes, memory can be re-allocated for the multiple hardware units in the device. Specifically, the multiple application scenarios of the device can be re-determined, and then, according to the number of physical memory blocks required by each hardware unit in each application scenario, and according to the established mapping relationship between system memory blocks in the system memory and physical memory blocks in the physical memory, the system memory blocks allocated to the corresponding hardware unit are determined, where the system memory includes the physical memory and the extended memory corresponding to the physical memory. In some cases, the system memory block groups allocated to all hardware units running in the same application scenario are mapped to different physical memory blocks in the physical memory. Alternatively or additionally, among the system memory blocks allocated to at least two hardware units running in different application scenarios, the system memory blocks of at least one application scenario and those of another application scenario are mapped to the same physical memory blocks, enabling efficient and flexible use of the device.
Further, to avoid memory fragmentation, in some embodiments the system memory can be allocated such that, for at least one of the multiple hardware units, all the system memory blocks in the system memory block group corresponding to that hardware unit are mapped to contiguously distributed physical memory blocks in the physical memory. This can be achieved by planning the allocation of the system memory as a whole after the number of system memory blocks required by each hardware unit in each application scenario has been determined.
In some embodiments, the system memory can also be allocated such that at least one hardware unit corresponding to contiguously distributed physical memory blocks in the physical memory is the hardware unit requiring the largest number of system memory blocks. For example, the largest number of system memory blocks that a hardware unit may need in any of the multiple application scenarios can be determined as the number of system memory blocks to be allocated to that hardware unit, and the hardware unit allocated the largest number of system memory blocks can be singled out among the multiple hardware units. During allocation, a system memory block group can be allocated to this hardware unit first, mapping its system memory block group to contiguously distributed physical memory blocks in the physical memory.
In other embodiments, the system memory can also be allocated such that, among the multiple hardware units, the number of hardware units corresponding to contiguously distributed physical memory blocks in the physical memory is maximized. For example, the largest number of system memory blocks that a hardware unit may need in any of the multiple application scenarios can be determined as the number of system memory blocks to be allocated to that hardware unit, and as many hardware units as possible are allocated contiguously distributed physical memory blocks.
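The two strategies above could be combined in a simple allocator sketch: give each unit its worst-case block count and hand out contiguous physical runs largest-first. The following C fragment is only an illustration under those assumptions (the unit names and per-scenario demands are made up, and cross-scenario sharing via the modular mapping is omitted for brevity):

    #include <stdio.h>

    #define NUM_UNITS 3

    typedef struct {
        const char *name;
        int need[2];   /* blocks needed in each of two scenarios        */
        int demand;    /* worst-case blocks = max over scenarios        */
        int first;     /* first allocated physical block, -1 if none    */
    } unit_t;

    int main(void)
    {
        unit_t u[NUM_UNITS] = {
            { "Ha", {4, 0}, 0, -1 },
            { "Hb", {5, 5}, 0, -1 },
            { "Hc", {5, 0}, 0, -1 },
        };

        /* Demand = the largest number of blocks the unit may need in any
         * scenario. */
        for (int i = 0; i < NUM_UNITS; i++)
            u[i].demand = u[i].need[0] > u[i].need[1] ? u[i].need[0]
                                                      : u[i].need[1];

        /* Sort by descending demand (insertion sort), so the largest
         * consumer gets its contiguous run first. */
        for (int i = 1; i < NUM_UNITS; i++)
            for (int j = i; j > 0 && u[j].demand > u[j-1].demand; j--) {
                unit_t t = u[j]; u[j] = u[j-1]; u[j-1] = t;
            }

        /* Hand out contiguous physical runs in order. */
        int next = 0;
        for (int i = 0; i < NUM_UNITS; i++) {
            u[i].first = next;
            next += u[i].demand;
            printf("%s -> (p%02d, p%02d)\n", u[i].name,
                   u[i].first, u[i].first + u[i].demand - 1);
        }
        return 0;
    }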
In a first specific example of the present disclosure, as shown in FIG. 2, the multiple hardware units may include a first hardware unit H1 and a second hardware unit H2, and the first hardware unit H1 and the second hardware unit H2 may run in different application scenarios, so that they can share at least part of the physical memory without causing conflicts. The system memory 610 may include a first system memory block group (s00, s05) allocated to the first hardware unit H1 and a second system memory block group (s08, s11) allocated to the second hardware unit H2, and at least a part of the first system memory blocks in the first system memory block group (s00, s05) and at least a part of the second system memory blocks in the second system memory block group (s08, s11) can be mapped to the same physical memory blocks in the physical memory 620. In the specific example shown in FIG. 2, the first system memory block s00 and the second system memory block s08 are mapped to the same physical memory block p00, the first system memory block s01 and the second system memory block s09 are mapped to the same physical memory block p01, the first system memory block s02 and the second system memory block s10 are mapped to the same physical memory block p02, and the first system memory block s03 and the second system memory block s11 are mapped to the same physical memory block p03.
In a second specific example of the present disclosure, as shown in FIG. 3, the multiple hardware units may further include a third hardware unit H3, and the first hardware unit H1, the second hardware unit H2 and the third hardware unit H3 run in different application scenarios, so that they can share at least part of the physical memory without causing conflicts. The system memory 610 may further include a third system memory block group (s16, s23) allocated to the third hardware unit H3, and at least a part of the first system memory blocks in the first system memory block group (s00, s05), at least a part of the second system memory blocks in the second system memory block group (s08, s11) and at least a part of the third system memory blocks in the third system memory block group (s16, s23) are mapped to the same physical memory blocks in the physical memory. In the specific example shown in FIG. 3, the first system memory block s00, the second system memory block s08 and the third system memory block s16 are mapped to the same physical memory block p00; the first system memory block s01, the second system memory block s09 and the third system memory block s17 are mapped to the same physical memory block p01; the first system memory block s02, the second system memory block s10 and the third system memory block s18 are mapped to the same physical memory block p02; the first system memory block s03, the second system memory block s11 and the third system memory block s19 are mapped to the same physical memory block p03; the first system memory block s04 and the third system memory block s20 are mapped to the same physical memory block p04; the first system memory block s05 and the third system memory block s21 are mapped to the same physical memory block p05; and the third system memory blocks s22 and s23 are mapped to physical memory blocks p06 and p07, respectively.
In a third specific example of the present disclosure, as shown in FIG. 4, the multiple hardware units may further include a fourth hardware unit H4; the fourth hardware unit H4 and the first hardware unit H1 run in different application scenarios, and the fourth hardware unit H4 and the second hardware unit H2 run together in at least one of the multiple application scenarios. Thus, the first hardware unit H1 and the fourth hardware unit H4 may share at least part of the physical memory, but the second hardware unit H2 and the fourth hardware unit H4 cannot share physical memory. The system memory 610 may further include a fourth system memory block group (s20, s23) allocated to the fourth hardware unit H4, and at least another part of the first system memory blocks in the first system memory block group (s00, s05) and at least a part of the fourth system memory blocks in the fourth system memory block group (s20, s23) are mapped to the same physical memory blocks in the physical memory, while the second system memory block group (s08, s11) and the fourth system memory block group (s20, s23) are mapped to different physical memory blocks in the physical memory. In the specific example shown in FIG. 4, since the first system memory block s00 and the second system memory block s08 are mapped to the same physical memory block p00, the first system memory block s01 and the second system memory block s09 to the same physical memory block p01, the first system memory block s02 and the second system memory block s10 to the same physical memory block p02, and the first system memory block s03 and the second system memory block s11 to the same physical memory block p03, the fourth system memory block group (s20, s23) cannot be mapped to the physical memory block group (p00, p03) to which the second system memory block group (s08, s11) has already been mapped. With the one-to-one mapping between the system memory blocks in each group of system sub-memory and the physical memory blocks in the physical memory being fixed, the fourth hardware unit H4 cannot occupy the first four system memory blocks (s16, s19) of the system sub-memory (s16, s23), so as not to conflict with the second hardware unit H2, and can only occupy the last four system memory blocks (s20, s23). That is, the fourth system memory block s20 can be mapped to the same physical memory block p04 as the first system memory block s04, the fourth system memory block s21 can be mapped to the same physical memory block p05 as the first system memory block s05, and the other fourth system memory blocks s22 and s23 can be mapped to physical memory blocks p06 and p07, respectively.
In a fourth specific example of the present disclosure, as shown in FIG. 5, the multiple hardware units may further include a fifth hardware unit H5; the fifth hardware unit H5 and the first hardware unit H1 run together in at least one of the multiple application scenarios, and the fifth hardware unit H5 and the second hardware unit H2 run together in at least another of the multiple application scenarios, so that the fifth hardware unit H5 cannot share physical memory with either the first hardware unit H1 or the second hardware unit H2. Accordingly, the system memory 610 may include a fifth system memory block group (s16, s23) allocated to the fifth hardware unit H5, where the first system memory block group (s00, s05) and the fifth system memory block group (s16, s23) are mapped to different physical memory blocks in the physical memory, and the second system memory block group (s08, s11) and the fifth system memory block group (s16, s23) are mapped to different physical memory blocks in the physical memory. In the specific example shown in FIG. 5, the fifth system memory block group (s16, s23) can be mapped to the physical memory block group (p06, p13), which is not occupied by the first hardware unit H1 or the second hardware unit H2. Alternatively, in other specific examples, the fifth system memory block group (s16, s23) can be mapped to the physical memory block group (p08, p15), which is not occupied by the first hardware unit H1 or the second hardware unit H2; this ensures that the i-th system memory block in each group of system sub-memory can be mapped to the i-th physical memory block in the corresponding physical sub-memory, making memory management more convenient. In the embodiment shown in FIG. 5, the size of the system memory can be 1.5 times the size of the physical memory.
In a fifth specific example shown in FIG. 6, when the fifth system memory block group (s06, s07) required by the fifth hardware unit H5 is sufficiently small, the fifth system memory block group (s06, s07) can reside in the same system sub-memory as the first system memory block group (s00, s05) and be mapped one-to-one to the corresponding physical memory blocks in the physical memory, while the second system memory block group (s08, s11) can share some physical memory blocks with the first system memory block group (s00, s05).
In a sixth specific example of the present disclosure, as shown in FIG. 7, assume that the device is the smart network camera described above, with two application scenarios in total: a recording scenario and a playback scenario. Hardware units Ha and Hc are used only in the recording scenario, hardware units Hd and He are used only in the playback scenario, and hardware unit Hb is used in both the recording and playback scenarios. Memory can then be allocated for the smart network camera as follows. In the recording scenario, hardware units Ha, Hb, and Hc are used, so the system memory block group (s00, s03) can be allocated to hardware unit Ha and mapped to the physical memory block group (p00, p03); the system memory block group (s04, s08) can be allocated to hardware unit Hb and mapped to the physical memory block group (p04, p08); and the system memory block group (s09, s13) can be allocated to hardware unit Hc and mapped to the physical memory block group (p09, p13). In the playback scenario, hardware units Hb, Hd, and He are used, where hardware unit Hd can share physical memory with hardware units Ha and/or Hc, and hardware unit He can share physical memory with hardware units Ha and/or Hc, but neither Hd nor He can share physical memory with hardware unit Hb. During allocation, the system memory block group (s16, s26) can be allocated to hardware unit Hd, with (s16, s19) mapped to the physical memory block group (p00, p03). However, since the physical memory block group (p04, p08) to which the system memory block group (s20, s24) maps is occupied by hardware unit Hb, the system memory block group (s20, s24) cannot be allocated to hardware unit Hd; that is, the system memory block group (s20, s24) is left vacant and not allocated to any hardware unit, so as to avoid conflicts. The subsequent system memory block group (s25, s26) can nevertheless be allocated to hardware unit Hd and mapped to the physical memory block group (p09, p10), achieving sharing with hardware unit Hc. In other words, while hardware unit Hd runs, the system memory blocks actually allocated to it are (s16, s19) and (s25, s26). Further, the system memory block group (s27, s31) can be allocated to hardware unit He and mapped to the physical memory blocks (p11, p15), where the physical memory blocks (p11, p13) are shared by hardware unit Hc and hardware unit He.
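Written out as data (block indices from the text, Np = 16 assumed), this camera allocation can be checked for conflicts mechanically:

```python
# The camera example as data: two scenarios, five units, Np = 16 assumed.
NP = 16
alloc = {
    "Ha": [range(0, 4)],                   # s00..s03 -> p00..p03 (recording)
    "Hb": [range(4, 9)],                   # s04..s08 -> p04..p08 (both)
    "Hc": [range(9, 14)],                  # s09..s13 -> p09..p13 (recording)
    "Hd": [range(16, 20), range(25, 27)],  # s20..s24 left vacant to avoid Hb
    "He": [range(27, 32)],                 # s27..s31 -> p11..p15 (playback)
}

def phys_blocks(unit: str) -> set[int]:
    return {s % NP for r in alloc[unit] for s in r}

# Units that can run in the same scenario must not alias common physical blocks.
for a, b in [("Ha", "Hb"), ("Ha", "Hc"), ("Hb", "Hc"),    # recording
             ("Hb", "Hd"), ("Hb", "He"), ("Hd", "He")]:   # playback
    assert not (phys_blocks(a) & phys_blocks(b)), (a, b)
```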
The memory management method of the present disclosure proposes a mapping relationship and algorithm between system memory and physical memory. By enlarging the system memory, the hardware units in the device can see more system memory without the actual physical memory being increased, which improves the utilization of the physical memory. When the system starts, or when the corresponding application scenario starts, the system memory blocks allocated to the corresponding hardware units are determined according to the established mapping between system memory blocks in the system memory and physical memory blocks in the physical memory. Such advance planning helps improve the stability and reliability of the system, and when a problem occurs, debugging the device also becomes very convenient.
The memory management method of the present disclosure is especially suitable for cases where the application scenarios of the device are relatively simple, memory usage is strongly tied to the application scenario, and at least part of the memory will not be used simultaneously in different application scenarios. Such devices or computing devices may include, for example, artificial intelligence of things (AIoT) devices usable with AIoT technology. AIoT technology combines artificial intelligence (AI) technology and Internet of things (IoT) technology: massive data from different dimensions can be generated and collected through the IoT, and technologies such as big data analysis and artificial intelligence can digitize everything and connect everything intelligently, thereby forming an intelligent ecosystem and achieving interconnection between different intelligent terminal devices, different system platforms, and different application scenarios. As device performance improves, the device can serve as an AIoT device at the edge or on the mobile side and perform tasks that in the past typically had to be performed by cloud devices, providing services nearby and helping achieve faster service response and better privacy protection. It can be understood that the device can also be used in other applications and implement corresponding functions, which is not limited here.
In some embodiments, the device or computing device of the present disclosure may be included in a system on chip (SoC). An SoC can have a small size, high speed, low power consumption, and rich system functions at a relatively low cost. In some embodiments, as shown in FIG. 9, the physical memory 620 may be included in the computing device 800. Alternatively, as shown in FIG. 10, the physical memory 620 may be provided separately from the computing device 800; for example, the physical memory 620 and the computing device 800 may be different components on the SoC that exchange data via a bus or the like. The computing device includes multiple hardware units.
The present disclosure further proposes a memory management device 900. As shown in FIG. 8, the memory management device 900 may include a memory 910, a processor 920, and instructions stored on the memory 910 which, when executed by the processor 920, implement the steps of the memory management method described above. As shown in FIG. 9, the memory management device 900 may be included in the computing device 800. Alternatively, as shown in FIG. 10, the computing device 800 may be provided separately from the memory management device 900.
The processor 920 may perform various actions and processing according to the instructions stored in the memory 910. Specifically, the processor 920 may be an integrated circuit chip with signal processing capability. The processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the various methods, steps, and logical block diagrams disclosed in the embodiments of the present disclosure. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like, and may be of the x86 architecture, the ARM architecture, or the like.
The memory 910 stores executable instructions which, when executed by the processor 920, implement the memory management method described above. The memory 910 may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which serves as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchlink dynamic random access memory (SLDRAM), and direct Rambus random access memory (DR RAM). It should be noted that the memory of the methods described herein is intended to include, but not be limited to, these and any other suitable types of memory.
The present disclosure further proposes a non-transitory computer-readable storage medium having instructions stored thereon which, when executed by a processor, implement the steps of the memory management method described above.
Similarly, the non-transitory computer-readable storage medium in the embodiments of the present disclosure may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory. It should be noted that the computer-readable storage media described herein are intended to include, but not be limited to, these and any other suitable types of memory.
The present disclosure further provides a computer program product, which may include instructions that, when executed by a processor, implement the steps of the memory management method described above.
The instructions may be any instruction set to be executed directly by one or more processors, such as machine code, or executed indirectly, such as a script. The terms "instruction", "application", "process", "step", and "program" are used interchangeably herein. The instructions may be stored in object code format for direct processing by one or more processors, or in any other computer language, including scripts or collections of independent source code modules that are interpreted on demand or compiled in advance. The instructions may include instructions that cause, for example, one or more processors to act as the various neural networks herein. The functions, methods, and routines of the instructions are explained in more detail elsewhere herein.
In addition, implementations of the present disclosure may further include the following examples:
1. A memory management method for a device, the device including multiple hardware units, the memory management method comprising:
determining multiple application scenarios of the device;
determining, according to the number of physical memory blocks required by each hardware unit in each application scenario and according to an established mapping between system memory blocks in a system memory and physical memory blocks in a physical memory, the system memory blocks allocated to the corresponding hardware units, the system memory including the physical memory and an extended memory corresponding to the physical memory;
wherein the system memory blocks allocated to all hardware units running in the same application scenario are respectively mapped to different physical memory blocks; or
among the system memory blocks allocated to at least two hardware units respectively running in different application scenarios, the system memory blocks of at least one of the different application scenarios and the system memory blocks of another application scenario are mapped to the same physical memory blocks.
2. The memory management method according to example 1, wherein determining the system memory blocks allocated to the corresponding hardware units includes:
allocating the system memory blocks such that all system memory blocks allocated to at least one of the multiple hardware units are mapped to contiguously distributed physical memory blocks in the physical memory.
3. The memory management method according to example 2, wherein the system memory is allocated such that at least one hardware unit corresponding to the contiguously distributed physical memory blocks in the physical memory is the hardware unit requiring the largest number of system memory blocks.
4. The memory management method according to example 2, wherein the system memory is allocated such that, among the multiple hardware units, the number of hardware units corresponding to contiguously distributed physical memory blocks in the physical memory is maximized.
5. The memory management method according to example 1, wherein the size of the physical memory is greater than or equal to the size of the system memory required by the application scenario, among the multiple application scenarios, that requires the largest number of system memory blocks.
6. The memory management method according to example 1, wherein the (i+n*Np)-th extended memory block among the system memory blocks is mapped to the i-th physical memory block in the physical memory, where i is an integer greater than zero, n is an integer greater than or equal to zero, and Np is the total number of physical memory blocks in the physical memory.
7. The memory management method according to example 1, wherein the size of the system memory is a multiple of the size of the physical memory, the multiple being a number greater than 1.
8. The memory management method according to example 7, wherein the multiple of the size of the system memory relative to the size of the physical memory is determined according to the number of scenarios of the multiple application scenarios.
9. The memory management method according to example 1, wherein the multiple hardware units include a first hardware unit and a second hardware unit, the first hardware unit and the second hardware unit respectively run in different application scenarios, the system memory includes a first system memory block group allocated to the first hardware unit and a second system memory block group allocated to the second hardware unit, and at least some first system memory blocks in the first system memory block group and at least some second system memory blocks in the second system memory block group are mapped to the same physical memory blocks in the physical memory.
10. The memory management method according to example 9, wherein the multiple hardware units further include a fourth hardware unit, the fourth hardware unit and the first hardware unit respectively run in different application scenarios, the fourth hardware unit and the second hardware unit run together in at least one of the multiple application scenarios, the system memory further includes a fourth system memory block group allocated to the fourth hardware unit, at least another part of the first system memory blocks in the first system memory block group and at least some fourth system memory blocks in the fourth system memory block group are mapped to the same physical memory blocks in the physical memory, and the second system memory block group and the fourth system memory block group are respectively mapped to different physical memory blocks in the physical memory.
11. The memory management method according to example 9, wherein the multiple hardware units further include a fifth hardware unit, the fifth hardware unit and the first hardware unit run together in at least one of the multiple application scenarios, the fifth hardware unit and the second hardware unit run together in at least another one of the multiple application scenarios, the system memory further includes a fifth system memory block group allocated to the fifth hardware unit, the first system memory block group and the fifth system memory block group are respectively mapped to different physical memory blocks in the physical memory, and the second system memory block group and the fifth system memory block group are respectively mapped to different physical memory blocks in the physical memory.
12. A memory management device, including a memory, a processor, and instructions stored on the memory which, when executed by the processor, implement the steps of the memory management method according to any one of examples 1 to 11.
13. A computing system, including:
a computing device and the memory management device according to example 12, wherein the computing device includes multiple hardware units; or
a computing device on which the memory management device according to example 12 is provided.
14. The computing system according to example 13, wherein the computing device further includes a physical memory.
15. The computing system according to example 13, wherein the computing device includes an artificial intelligence of things device; and/or
the computing device is included in a system on chip.
16. A non-transitory computer-readable storage medium having instructions stored thereon which, when executed by a processor, implement the steps of the memory management method according to any one of examples 1 to 11.
17. A computer program product, including instructions which, when executed by a processor, implement the steps of the memory management method according to any one of examples 1 to 11.
In all the examples shown and discussed herein, any specific value should be interpreted as merely exemplary and not as a limitation. Therefore, other examples of the exemplary embodiments may have different values.
The words "front", "rear", "top", "bottom", "above", "below", and the like in the specification and claims, if present, are used for descriptive purposes and not necessarily to describe fixed relative positions. It should be understood that the words so used are interchangeable under appropriate circumstances, so that the embodiments of the present disclosure described herein can, for example, operate in orientations other than those shown or otherwise described herein.
As used herein, the word "exemplary" means "serving as an example, instance, or illustration" and not as a "model" to be exactly reproduced. Any implementation exemplarily described herein is not necessarily to be construed as preferred or advantageous over other implementations. Moreover, the present disclosure is not limited by any expressed or implied theory presented in the technical field, background, summary, or detailed description above.
As used herein, the word "substantially" means encompassing any minor variation caused by design or manufacturing imperfections, tolerances of devices or elements, environmental influences, and/or other factors. The word "substantially" also allows for differences from a perfect or ideal situation caused by parasitic effects, noise, and other practical considerations that may exist in an actual implementation.
The above description may indicate elements or nodes or features being "connected" or "coupled" together. As used herein, unless expressly stated otherwise, "connected" means that one element/node/feature is directly connected to (or directly communicates with) another element/node/feature electrically, mechanically, logically, or otherwise. Similarly, unless expressly stated otherwise, "coupled" means that one element/node/feature may be joined to another element/node/feature directly or indirectly, mechanically, electrically, logically, or otherwise, so as to allow interaction, even if the two features may not be directly connected. That is, "coupled" is intended to encompass both direct and indirect joining of elements or other features, including connections using one or more intermediate elements.
It should also be understood that the word "comprise/include", when used herein, indicates the presence of the stated features, integers, steps, operations, units, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, units, and/or components, and/or combinations thereof.
Those skilled in the art should realize that the boundaries between the above operations are merely illustrative. Multiple operations may be combined into a single operation, a single operation may be distributed among additional operations, and operations may be performed at least partially overlapping in time. Moreover, alternative embodiments may include multiple instances of a particular operation, and the order of operations may be changed in various other embodiments. However, other modifications, variations, and substitutions are equally possible. Therefore, the specification and drawings should be regarded as illustrative rather than restrictive.
Although some specific embodiments of the present disclosure have been described in detail by way of example, those skilled in the art should understand that the above examples are for illustration only and are not intended to limit the scope of the present disclosure. The embodiments disclosed herein may be combined arbitrarily without departing from the spirit and scope of the present disclosure. Those skilled in the art should also understand that various modifications may be made to the embodiments without departing from the scope and spirit of the present disclosure. The scope of the present disclosure is defined by the appended claims.

Claims (16)

  1. A memory management method for a device, wherein the device includes multiple hardware units, the memory management method comprising:
    determining multiple application scenarios of the device;
    determining, according to the number of physical memory blocks required by each hardware unit in each application scenario and according to an established mapping between system memory blocks in a system memory and physical memory blocks in a physical memory, the system memory blocks allocated to the corresponding hardware units, the system memory including the physical memory and an extended memory corresponding to the physical memory;
    wherein the system memory blocks allocated to all hardware units running in the same application scenario are respectively mapped to different physical memory blocks; or
    among the system memory blocks allocated to at least two hardware units respectively running in different application scenarios, the system memory blocks of at least one of the different application scenarios and the system memory blocks of another application scenario are mapped to the same physical memory blocks.
  2. The memory management method according to claim 1, wherein determining the system memory blocks allocated to the corresponding hardware units includes:
    allocating the system memory blocks such that all system memory blocks allocated to at least one of the multiple hardware units are mapped to contiguously distributed physical memory blocks in the physical memory.
  3. The memory management method according to claim 2, wherein the system memory is allocated such that at least one hardware unit corresponding to the contiguously distributed physical memory blocks in the physical memory is the hardware unit requiring the largest number of system memory blocks.
  4. The memory management method according to claim 2, wherein the system memory is allocated such that, among the multiple hardware units, the number of hardware units corresponding to contiguously distributed physical memory blocks in the physical memory is maximized.
  5. The memory management method according to claim 1, wherein the size of the physical memory is greater than or equal to the size of the system memory required by the application scenario, among the multiple application scenarios, that requires the largest number of system memory blocks.
  6. The memory management method according to claim 1, wherein the (i+n*Np)-th extended memory block among the system memory blocks is mapped to the i-th physical memory block in the physical memory, where i is an integer greater than zero, n is an integer greater than or equal to zero, and Np is the total number of physical memory blocks in the physical memory.
  7. The memory management method according to claim 1, wherein the size of the system memory is a multiple of the size of the physical memory, the multiple being a number greater than 1.
  8. The memory management method according to claim 7, wherein the multiple of the size of the system memory relative to the size of the physical memory is determined according to the number of scenarios of the multiple application scenarios.
  9. The memory management method according to claim 1, wherein the multiple hardware units include a first hardware unit and a second hardware unit, the first hardware unit and the second hardware unit respectively run in different application scenarios, the system memory includes a first system memory block group allocated to the first hardware unit and a second system memory block group allocated to the second hardware unit, and at least some first system memory blocks in the first system memory block group and at least some second system memory blocks in the second system memory block group are mapped to the same physical memory blocks in the physical memory.
  10. The memory management method according to claim 9, wherein the multiple hardware units further include a fourth hardware unit, the fourth hardware unit and the first hardware unit respectively run in different application scenarios, the fourth hardware unit and the second hardware unit run together in at least one of the multiple application scenarios, the system memory further includes a fourth system memory block group allocated to the fourth hardware unit, at least another part of the first system memory blocks in the first system memory block group and at least some fourth system memory blocks in the fourth system memory block group are mapped to the same physical memory blocks in the physical memory, and the second system memory block group and the fourth system memory block group are respectively mapped to different physical memory blocks in the physical memory.
  11. The memory management method according to claim 9, wherein the multiple hardware units further include a fifth hardware unit, the fifth hardware unit and the first hardware unit run together in at least one of the multiple application scenarios, the fifth hardware unit and the second hardware unit run together in at least another one of the multiple application scenarios, the system memory further includes a fifth system memory block group allocated to the fifth hardware unit, the first system memory block group and the fifth system memory block group are respectively mapped to different physical memory blocks in the physical memory, and the second system memory block group and the fifth system memory block group are respectively mapped to different physical memory blocks in the physical memory.
  12. A memory management device, including a memory, a processor, and instructions stored on the memory which, when executed by the processor, implement the steps of the memory management method according to any one of claims 1 to 11.
  13. A computing system, including:
    a computing device and the memory management device according to claim 12, wherein the computing device includes multiple hardware units; or
    a computing device on which the memory management device according to claim 12 is provided.
  14. The computing system according to claim 13, wherein the computing device further includes a physical memory.
  15. The computing system according to claim 13, wherein the computing device includes an artificial intelligence of things device; and/or
    the computing device is included in a system on chip.
  16. A non-transitory computer-readable storage medium, wherein instructions are stored on the non-transitory computer-readable storage medium which, when executed by a processor, implement the steps of the memory management method according to any one of claims 1 to 11.