WO2019028682A1 - Method and device for managing multi-system shared memory (一种多系统共享内存的管理方法及装置) - Google Patents

Method and device for managing multi-system shared memory

Info

Publication number
WO2019028682A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
data block
written
shared memory
size
Prior art date
Application number
PCT/CN2017/096480
Other languages
English (en)
French (fr)
Inventor
温燕飞
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司 filed Critical 深圳前海达闼云端智能科技有限公司
Priority to CN201780002588.6A priority Critical patent/CN108064377B/zh
Priority to PCT/CN2017/096480 priority patent/WO2019028682A1/zh
Publication of WO2019028682A1 publication Critical patent/WO2019028682A1/zh
Priority to US16/784,613 priority patent/US11281388B2/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0604 Improving or facilitating administration, e.g. storage management
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0638 Organizing or formatting or addressing of data
    • G06F 3/064 Management of blocks
    • G06F 3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0673 Single storage device
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F 9/5016 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing

Definitions

  • the present application relates to the field of virtualization technologies, and in particular, to a method and apparatus for managing multi-system shared memory.
  • the main idea of a multi-operating system architecture is to distribute different loads to different operating system kernels to improve the processing power of the system and to be compatible with applications on different operating systems.
  • all cores are computationally independent, that is, independently processing the load, but in terms of resource usage, each operating system shares resources of the entire multi-operating system structure, such as input/output, memory, and the like.
  • In a typical prior-art design, if the shared memory is abstracted as a "memory band", a shared-memory management service program allocates the memory of the band, in units of blocks, from front to back in a fixed order to the kernel area of each sub operating system; that is, the order in which the kernel areas occupy memory is fixed from front to back, and the memory allocated to a kernel area does not change while the entire multi-operating-system structure is running.
  • The embodiments of the present application mainly address how to make full use of shared memory when memory is shared among multiple systems.
  • a technical solution adopted by the embodiment of the present application is to provide a multi-system shared memory management method, including:
  • when a data write instruction for writing data into the shared memory is received, acquiring the data size of the to-be-written data to be written into the shared memory;
  • determining whether the shared memory contains a free data block that matches the data size;
  • if no free data block matching the data size is contained, acquiring a free first data block whose memory size is larger than the data size, so that the to-be-written data is written into the first data block;
  • after the to-be-written data is written into the first data block, acquiring the remaining free space of the first data block;
  • generating a new data block based on the remaining free space.
  • a multi-system shared memory management device including:
  • a first acquiring module configured to acquire a data size of the data to be written to be written into the shared memory when receiving a data write instruction for writing data to the shared memory
  • a first determining module configured to determine whether the shared memory includes a data block that matches the size of the data and is idle
  • a first processing module configured to, if no free data block matching the data size is contained, acquire a first data block whose memory size is larger than the data size and is free, so that the to-be-written data is written into the first data block;
  • a second acquiring module configured to acquire, after the data to be written is written into the first data block, a remaining free space of the first data block;
  • a generating module configured to generate a new data block based on the remaining free space.
  • Still another technical solution adopted by the embodiments of the present application is to provide an electronic device, including: at least one processor; and a memory communicably connected to the at least one processor; wherein the memory stores an instruction program executable by the at least one processor, and the instruction program is executed by the at least one processor to cause the at least one processor to perform the method described above.
  • Another technical solution adopted by the embodiments of the present application is to provide a non-transitory computer-readable storage medium storing computer-executable instructions for causing a computer to perform the method described above.
  • Yet another technical solution adopted by the embodiments of the present application is to provide a computer program product, including a non-transitory computer-readable storage medium and computer program instructions embedded in the non-transitory computer-readable storage medium; the computer program instructions include instructions to cause a processor to perform the method described above.
  • The multi-system shared-memory management method and apparatus provided by the embodiments of the present application allocate suitable shared memory according to the data size of the to-be-written data; when the memory size of the allocated data block is larger than the data size of the to-be-written data, the remaining free space of the data block is obtained and a new data block is generated from it.
  • This embodiment improves the usage rate and flexibility of the shared memory and, to a certain extent, improves data transmission efficiency, thereby improving overall system performance.
  • FIG. 1 is a schematic structural diagram of a virtualization solution based on QEMU-KVM technology provided by an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a method for managing multi-system shared memory according to an embodiment of the present application
  • FIG. 3 is a schematic diagram of a state in a shared memory allocation method in a multi-system shared memory management method according to an embodiment of the present application
  • FIG. 4 is a schematic diagram of another state in a shared memory allocation method in a multi-system shared memory management method according to an embodiment of the present application
  • FIG. 5 is a schematic structural diagram of a data block in a method for managing multi-system shared memory according to an embodiment of the present application
  • FIG. 6 is a schematic structural diagram of a control information header portion of a data block in a method for managing a multi-system shared memory according to an embodiment of the present application
  • FIG. 7 is a schematic flowchart of another multi-system shared memory management method provided by an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of a method for acquiring a storage address after writing the to-be-written data into the first data block in another multi-system shared memory management method according to an embodiment of the present disclosure
  • FIG. 9 is a schematic flowchart of a method for managing multi-system shared memory according to another embodiment of the present application.
  • FIG. 10 is a schematic diagram of a state before and after a process of releasing a data block in a method for managing a multi-system shared memory according to another embodiment of the present application;
  • FIG. 11 is a schematic structural diagram of a multi-system shared memory management apparatus according to an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of a multi-system shared memory management apparatus according to another embodiment of the present disclosure.
  • FIG. 13 is a schematic diagram of a hardware structure of an electronic device for performing a multi-system shared memory management method according to an embodiment of the present application.
  • Virtualization is the abstraction and transformation of physical computer resources such as servers, networks, memory and storage, so that users can apply these resources in ways better than the original configuration; the newly virtualized resources are not restricted by how the existing resources are set up, nor by their geographic or physical configuration.
  • In virtualization, the real physical environment is generally called the host, and the environment built through virtualization is called the guest. The system running on the host machine is called the host operating system (Host OS), the operating system running on the guest machine is called the guest operating system (Guest OS), and the layer responsible for virtualization is generally called the Virtual Machine Monitor (VMM).
  • Host OS: host operating system; Guest OS: guest operating system; VMM: Virtual Machine Monitor.
  • KVM (Kernel-based Virtual Machine) is an open-source VMM.
  • the idea of KVM is to add a virtual machine management module based on the Linux kernel, and reuse the already completed process scheduling, memory management and I/O management in the Linux kernel to make it a virtual machine management program that can support virtual machine running. Therefore, KVM is not a complete simulator. It can be considered as a kernel plug-in that provides virtualization functions.
  • the specific simulator work needs to be done with the virtual operating system simulator (QEMU).
  • In the industry, the combination of QEMU and KVM has been adopted as a common virtualization architecture, in which QEMU is mainly used as an emulator.
  • QEMU-KVM builds on hardware virtualization technology and combines it with the device virtualization functions provided by QEMU to virtualize the entire system.
  • the management method of the multi-system shared memory provided by the embodiment of the present application is implemented based on the QEMU-KVM virtualization architecture.
  • As shown in FIG. 1, a virtualization solution based on QEMU-KVM technology is provided, which consists of a Host OS and several virtualized Guest OSs; these operating systems run on the same set of hardware processor chips and share the processor and peripheral resources.
  • the ARM processor supporting this virtualization architecture contains at least three modes, running the hypervisor in the first mode, the Linux kernel in the second mode, and the userspace program in the third mode.
  • The virtual hardware platform is responsible for managing hardware resources such as the CPU, memory, timers and interrupts; through the virtualized CPU, memory, timer and interrupt resources, different operating systems can be loaded onto the physical processor and run in a time-shared manner, thereby implementing system virtualization.
  • The KVM virtual machine monitor/hypervisor layer spans the Linux host kernel and the virtual hardware platform. On one hand it provides a driver node for QEMU, allowing QEMU to create virtual CPUs through the KVM node and manage virtualized resources; on the other hand it switches the Host Linux system off the physical CPU, loads the Guest Linux system onto the physical processor, and finally handles the follow-up work when the Guest Linux system exits abnormally.
  • QEMU runs as an application of Host Linux and provides virtual hardware device resources for the operation of Guest Linux. Through the KVM node of the KVM virtual machine monitor/hypervisor, it creates virtual CPUs and allocates physical hardware resources, so that an unmodified Guest Linux can be loaded onto the physical hardware and run.
  • To implement the above virtualization solution on smart terminal devices such as robots, mobile phones or tablets, the virtualization of all hardware devices must be solved, that is, the virtualized operating systems must also be able to use real hardware devices, such as memory, interrupt resources, timers, networks, multimedia, cameras, displays and other peripheral resources. Because an efficient data transmission method is needed to achieve the desired device virtualization effect, memory sharing among multiple systems is usually adopted to solve the virtualization of all hardware devices.
  • Therefore, the following embodiments provide a multi-system shared-memory management method and apparatus. The method is applied to the QEMU-KVM virtualization architecture described above; it makes full use of the shared memory and improves data transmission efficiency, so that various intelligent terminal devices can achieve the desired virtualization effect.
  • FIG. 2 is a schematic flowchart diagram of a method for managing multi-system shared memory according to an embodiment of the present application. As shown in Figure 2, the method includes:
  • Step 11 When receiving a data write instruction for writing data to the shared memory, acquiring a data size of data to be written to be written into the shared memory;
  • When a virtual operating system that has data to transmit needs to transmit data to a target virtual system hosted on the same physical machine, it sends the host operating system a data write instruction for writing data into the shared memory; at this point, the data size of the to-be-written data to be written into the shared memory can be obtained from the data write instruction.
  • For example, when virtual operating system A transfers data to virtual operating system B (i.e., the target virtual system), virtual operating system A sends a data write instruction to the Host OS, and the shared-memory management application included in the Host OS obtains the data size of the to-be-written data from the received data write instruction. The data size of the to-be-written data is calculated by virtual operating system A.
  • Step 12 Determine whether the shared memory includes a data block that matches the size of the data and is idle;
  • Step 13 If the data block that matches the size of the data and is free is not included, obtain a first data block whose memory size is larger than the data size and is idle, so that the to-be-written data is written into the first data block;
  • Step 14 After the data to be written is written into the first data block, acquire remaining free space of the first data block.
  • Step 15 Generate a new data block based on the remaining free space.
  • In this embodiment, the above process is the shared-memory allocation process. As shown in FIG. 3, a schematic diagram of the shared memory during allocation is provided based on a doubly-linked-list data structure.
  • In the initial state, as shown in FIG. 4, the shared memory has only one data block. Also as shown in FIG. 4, during the running phase shared memory is allocated, and the shared memory then contains at least two data blocks.
  • Here, a suitable data block is matched according to the data size of the to-be-written data. If the shared memory contains a free data block whose memory size is exactly the same as the data size of the to-be-written data, the shared memory contains a data block that matches the data size and is free; in that case, this data block is acquired directly and the to-be-written data is written into it.
  • If the shared memory does not contain a free data block whose memory size equals the data size of the to-be-written data, a free data block whose memory size is larger than that data size is acquired, and the to-be-written data is written into it.
  • In that case the data block necessarily has remaining free space, and that free space is turned into a new data block. The new data block can be used by other processes, so that shared memory is not wasted.
  • the size of the generated new data block is determined by the manner in which the data to be written is written to the first data block that is larger than the data size and is free.
  • In this embodiment, the process of writing the to-be-written data into the first data block that is larger than the data size and is free specifically includes: controlling, according to the data size of the to-be-written data, the to-be-written data to be written into a contiguous address space of that first data block. Here, the data written into the first data block is guaranteed to be stored contiguously in the first data block's address space, so that the new data block generated is as large as possible and memory fragmentation is reduced.
  • For example, if the data block size is 10 KB and the to-be-written data is 8 KB, writing the 8 KB of data into a contiguous address space of the first data block leaves 2 KB free; when the to-be-written data is written into the first data block's address space starting from either its first address or its last address, the new data block generated is 2 KB.
  • Therefore, preferably, the to-be-written data is written into the address space of the first data block starting from the first address or the last address of that address space, thereby fully improving the utilization of the shared memory (a short illustrative C sketch of this allocation appears after the node format description below).
  • When a doubly linked list is used to manage the shared-memory data block nodes, each data block node is formatted as shown in FIG. 5 and contains three segments:
  • a control information header portion, which records information such as the status and size of the shared-memory data block;
  • an application data area, which is the area actually allocated to the Guest OS for data reading and writing;
  • a control information tail portion, which marks the end of a data block node; the Guest OS must not write into this area, and it is mainly used to detect out-of-bounds writes by the Guest OS.
  • As shown in FIG. 6, the main information fields of the control information header portion include: a node offset value, which records the offset from the start position of the shared memory to the start position of this data block node; a previous-node offset value, which records the offset value of the previous data block node in the doubly linked list; a next-node offset value, which records the offset value of the next data block node in the doubly linked list; a node status, which identifies whether the data block node is currently free or in use; and other information, which records the length information, synchronization information and the like of the data block node.
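  • For illustration, one possible concrete layout of such a node is sketched below in C. The struct and field names, the field widths and the guard value are assumptions (the application names the fields but not their sizes); offsets rather than pointers are used for the links, which keeps them valid even if the host and the guests map the shared memory at different virtual addresses.

      #include <stdint.h>

      /* Control information header of one data block node (cf. FIG. 5 and FIG. 6).
       * All links are byte offsets from the start of the shared memory. */
      struct shm_node_hdr {
          uint64_t node_off;   /* offset from shared-memory start to this node          */
          uint64_t prev_off;   /* offset of the previous node in the doubly linked list */
          uint64_t next_off;   /* offset of the next node in the doubly linked list     */
          uint32_t state;      /* node status: 0 = free, 1 = in use                     */
          uint32_t data_len;   /* other information: length of the application data     */
          uint64_t sync;       /* other information: synchronization word               */
      };

      /* The application data area (data_len bytes, read and written by the Guest OS)
       * follows the header; the node ends with a control information tail. */
      struct shm_node_tail {
          uint64_t guard;      /* fixed pattern; a changed value reveals an
                                  out-of-bounds write by the Guest OS */
      };

      #define SHM_NODE_GUARD  0x5A5AA5A5DEADBEEFULL   /* arbitrary illustrative value */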
  • It should be noted that the shared memory may also manage its data block nodes with a circular linked list or another data structure when allocating memory; it is not limited to the forms shown in FIG. 3 and FIG. 4. When another data structure is used, the format of each data block node matches that data structure.
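  • The allocation described in steps 12 to 15 can be illustrated with a short C sketch. This is a minimal, hedged fragment rather than the application's implementation: the function name shm_alloc, the first-fit search among larger blocks, the head-of-block write placement and the absence of locking are assumptions, and it reuses the illustrative shm_node_hdr and shm_node_tail layout sketched above.

      #include <stddef.h>
      #include <stdint.h>

      #define NIL_OFF         UINT64_MAX          /* marks either end of the list */
      #define NODE(base, off) ((struct shm_node_hdr *)((char *)(base) + (off)))
      #define NODE_OVERHEAD   (sizeof(struct shm_node_hdr) + sizeof(struct shm_node_tail))

      static struct shm_node_hdr *shm_alloc(void *base, uint64_t first_off, uint32_t data_size)
      {
          struct shm_node_hdr *exact = NULL, *larger = NULL;

          /* Step 12: look for a free node that matches the data size exactly;
           * remember the first free node that is larger, in case no exact match exists. */
          for (uint64_t off = first_off; off != NIL_OFF; off = NODE(base, off)->next_off) {
              struct shm_node_hdr *n = NODE(base, off);
              if (n->state != 0)
                  continue;                           /* node is in use */
              if (n->data_len == data_size) { exact = n; break; }
              if (n->data_len > data_size && larger == NULL)
                  larger = n;
          }

          struct shm_node_hdr *blk = exact ? exact : larger;   /* steps 13 / 23 */
          if (blk == NULL)
              return NULL;                            /* no free node is big enough */

          /* Steps 14-15: the data occupies the head of the block, so the unused tail
           * becomes a new free node, provided it can hold its own header and tail. */
          if (exact == NULL && blk->data_len > data_size + NODE_OVERHEAD) {
              uint64_t rest_off = blk->node_off + sizeof(struct shm_node_hdr)
                                  + data_size + sizeof(struct shm_node_tail);
              struct shm_node_hdr *rest = NODE(base, rest_off);

              rest->node_off = rest_off;
              rest->prev_off = blk->node_off;
              rest->next_off = blk->next_off;
              rest->state    = 0;
              rest->data_len = (uint32_t)(blk->data_len - data_size - NODE_OVERHEAD);
              /* the new node's tail guard would also be initialised here */
              if (blk->next_off != NIL_OFF)
                  NODE(base, blk->next_off)->prev_off = rest_off;

              blk->next_off = rest_off;
              blk->data_len = data_size;
          }
          blk->state = 1;                             /* mark the chosen node as in use */
          return blk;
      }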
  • In some embodiments, as shown in FIG. 7, after the to-be-written data has been written into a data block, the method further includes:
  • Step 16 Acquire a storage address after the data to be written is written into the first data block, and send the storage address.
  • The storage address is the location at which the to-be-written data is stored in the first data block. After the shared-memory management application obtains the storage address at which the to-be-written data was written into the first data block, it sends the storage address to the target virtual system; the target virtual system then reads the to-be-written data from the first data block according to the storage address, completing the data transfer.
  • As shown in FIG. 8, obtaining the storage address after the to-be-written data is written into the first data block specifically includes:
  • Step 161 Obtain an offset value of the first data block, where the offset value is the offset from the start position of the shared memory to the start position of the first data block;
  • Step 162 Calculate a first address of the first data block according to the offset value.
  • Step 163 Acquire the storage address according to the first address of the first data block.
  • According to the data block node format in FIG. 5 above, the offset value can be obtained from the node offset value recorded in the control information header portion; once the offset value is obtained, the first address of the first data block is obtained by adding it to the start address of the shared memory.
  • The storage address is the address of the application data area in a data block node; it may be its first address, or the range from its first address to its last address. The storage address may be obtained from information such as the first address of the first data block and the data block size recorded in the control information header portion.
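  • Steps 161 to 163 reduce to simple offset arithmetic. A hedged sketch, reusing the illustrative shm_node_hdr above (the function name is an assumption, not taken from the application), could be:

      /* Turn the node offset value read from the control information header into the
       * storage address of the application data area. */
      static void *shm_storage_address(void *shm_base, uint64_t node_off)
      {
          char *first_addr = (char *)shm_base + node_off;       /* steps 161 and 162 */
          return first_addr + sizeof(struct shm_node_hdr);      /* step 163 */
      }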
  • The embodiments of the present application provide a multi-system shared-memory management method that allocates suitable shared memory according to the data size of the to-be-written data; when the memory size of the allocated data block is larger than the data size of the to-be-written data, the remaining free space of the data block is obtained and turned into a new data block.
  • This embodiment improves the usage rate and flexibility of the shared memory and, by fully raising the usage rate of the shared memory, improves data transmission efficiency, thereby improving overall system performance.
  • FIG. 9 is a schematic flowchart diagram of a method for managing multi-system shared memory according to another embodiment of the present application. As shown in FIG. 9, the method includes:
  • Step 21 When receiving a data write instruction for writing data to the shared memory, acquiring a data size of data to be written to be written into the shared memory;
  • Step 22 Determine whether the shared memory includes a data block that matches the size of the data and is idle;
  • Step 23 If a data block that matches the data size and is free is included, acquire a second data block whose memory size matches the data size, so that the to-be-written data is written into the second data block;
  • Step 24 If a data block that matches the size of the data and is free is not included, obtain a first data block whose memory size is larger than the data size and is idle, so that the to-be-written data is written into the first data block;
  • Step 25 After the data to be written is written into the first data block that is larger than the data size and is idle, acquiring remaining free space of the first data block;
  • Step 26 Generate a new data block based on the remaining free space
  • Step 27 When receiving a data release instruction for releasing the data to be written from the shared memory, releasing a data block storing the data to be written;
  • the shared memory management application may control the shared memory to release the data to be written after receiving the data read completion instruction sent by the target virtual system, or the shared memory management application may actively control the shared memory to release the data to be written.
  • the data block is the first data block or the second data block.
  • Step 28 Determine whether a previous data block of the data block and a subsequent data block of the data block are idle;
  • Step 29 Combine the data block determined to be idle with the data block releasing the data to be written.
  • While the storage space of the data block is released, it is also determined whether the previous data block and the next data block are free. If the previous data block is free and the next is not, the previous data block is merged with this data block into one data block; if the next data block is free and the previous is not, the next data block is merged with this data block into one data block; if both are free, the previous data block, the next data block and this data block are merged simultaneously into one new data block. FIG. 10 shows the states before and after the data block is released.
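  • The release-and-merge behaviour of steps 27 to 29 can be sketched as follows. This again reuses the illustrative shm_node_hdr, NODE, NIL_OFF and NODE_OVERHEAD definitions from the earlier fragments and is not the application's actual implementation.

      /* Release the node that held the transferred data and merge it with free neighbours. */
      static void shm_free(void *base, struct shm_node_hdr *node)
      {
          node->state = 0;                                      /* step 27: mark free */

          /* step 28/29: absorb a free successor into this node ... */
          if (node->next_off != NIL_OFF) {
              struct shm_node_hdr *next = NODE(base, node->next_off);
              if (next->state == 0) {
                  node->data_len += (uint32_t)(next->data_len + NODE_OVERHEAD);
                  node->next_off  = next->next_off;
                  if (next->next_off != NIL_OFF)
                      NODE(base, next->next_off)->prev_off = node->node_off;
              }
          }
          /* ... then let a free predecessor absorb this node */
          if (node->prev_off != NIL_OFF) {
              struct shm_node_hdr *prev = NODE(base, node->prev_off);
              if (prev->state == 0) {
                  prev->data_len += (uint32_t)(node->data_len + NODE_OVERHEAD);
                  prev->next_off  = node->next_off;
                  if (node->next_off != NIL_OFF)
                      NODE(base, node->next_off)->prev_off = prev->node_off;
              }
          }
      }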
  • It should be noted that the multi-system shared-memory management method provided by the embodiments of the present application can be applied to a shared-memory data transmission process, which is illustrated by the following example (an illustrative host-side sketch of this flow appears after the example).
  • For example, when virtual operating system A transfers data to virtual operating system B: first, virtual operating system A sends a data transfer request instruction to the host operating system; the host operating system contains a shared-memory management application, which obtains the data size of the to-be-written data from the received data transfer request instruction.
  • Then, shared memory is allocated according to the data size of the to-be-written data, a suitable data block is obtained, and the to-be-written data is written into that data block; in this process, if the allocated data block has remaining free space, that free space is turned into a new data block.
  • After the to-be-written data has been written into the data block, the shared-memory management application calls the synchronization interface to notify virtual operating system B that the data has been written.
  • Virtual operating system B receives the notification, reads the data from the data block and, after the read completes, sends an acknowledgement message to the host operating system; upon this acknowledgement the shared-memory management application calls the memory release function to release the data block that stored the data. In this process, if the previous or the next data block of the released data block is free, that free data block is merged with the data block whose data was released.
  • the data structure of the shared memory may be a doubly linked list, or a circular linked list, or other data structure.
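  • Putting the pieces together, a hedged sketch of the host-side flow in the example above might look like this. It is composed from the illustrative helpers sketched earlier (shm_alloc, shm_storage_address, shm_free); notify_guest_b stands in for whatever synchronization interface the shared-memory service actually exposes, and, like the other names, is purely hypothetical.

      #include <stdint.h>
      #include <string.h>

      /* Hypothetical synchronization call used to notify the target system. */
      extern void notify_guest_b(uint64_t node_off, uint32_t data_size);

      /* Host-side handling of one transfer from virtual OS A to virtual OS B. */
      void handle_write_request(void *shm_base, uint64_t first_off,
                                const void *payload, uint32_t data_size)
      {
          struct shm_node_hdr *blk = shm_alloc(shm_base, first_off, data_size);
          if (blk == NULL)
              return;                                   /* no free block is large enough */

          memcpy(shm_storage_address(shm_base, blk->node_off), payload, data_size);
          notify_guest_b(blk->node_off, data_size);     /* tell B where the data is */
      }

      /* Called once virtual OS B acknowledges that it has read the data. */
      void handle_read_ack(void *shm_base, uint64_t node_off)
      {
          shm_free(shm_base, NODE(shm_base, node_off)); /* release and coalesce */
      }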
  • In this embodiment, suitable shared memory is allocated according to the data size of the to-be-written data; when the memory size of the allocated data block is larger than the data size of the to-be-written data, the remaining free space of the data block is obtained and turned into a new data block. When the shared memory releases data, the data block whose data is released is also merged with its adjacent free data blocks.
  • On one hand, this implementation improves the usage rate and flexibility of the shared memory and improves data transmission efficiency; on the other hand, merging released data blocks reduces the time of the next memory allocation and improves allocation efficiency.
  • FIG. 11 is a schematic structural diagram of a multi-system shared memory management apparatus according to an embodiment of the present application.
  • As shown in FIG. 11, the apparatus 30 includes: a first acquiring module 31, a first determining module 32, a first processing module 33, a second acquiring module 34, and a generating module 35.
  • The first acquiring module 31 is configured to acquire the data size of the to-be-written data to be written into the shared memory when a data write instruction for writing data into the shared memory is received; the first determining module 32 is configured to determine whether the shared memory contains a free data block that matches the data size; the first processing module 33 is configured to, if no free data block matching the data size is contained, acquire a first data block whose memory size is larger than the data size and is free, so that the to-be-written data is written into the first data block; the second acquiring module 34 is configured to acquire the remaining free space of the first data block after the to-be-written data has been written into it; and the generating module 35 is configured to generate a new data block based on the remaining free space.
  • the data to be written is written into an address space of the first data block, and the address space is a continuous address space.
  • The first address of the consecutive address space is the same as the first address of the address space of the first data block, or the tail address of the consecutive address space is the same as the tail address of the address space of the first data block.
  • the apparatus 30 further includes: a second processing module 36, a release module 37, a second judging module 38, and a merging module 39.
  • The second processing module 36 is configured to, if a free data block matching the data size is contained, acquire a second data block whose memory size matches the data size, so that the to-be-written data is written into the second data block.
  • the release module 37 is configured to release the data block storing the data to be written when receiving the data release instruction for releasing the data to be written from the shared memory.
  • the second determining module 38 is configured to determine whether the previous data block of the data block and the next data block of the data block are free.
  • the merging module 39 is configured to merge the data block that is determined to be idle with the data block that releases the data to be written.
  • This embodiment provides a multi-system shared-memory management apparatus that allocates suitable shared memory according to the data size of the to-be-written data; when the memory size of the allocated data block is larger than the data size of the to-be-written data, the remaining free space of the data block is obtained and turned into a new data block. When the shared memory releases data, the data block whose data is released is also merged with its adjacent free data blocks.
  • On one hand, this implementation improves the usage rate and flexibility of the shared memory and improves data transmission efficiency; on the other hand, merging released data blocks reduces the time of the next memory allocation and improves allocation efficiency.
  • FIG. 13 is a schematic diagram of a hardware structure of an electronic device for performing a multi-system shared memory management method according to an embodiment of the present disclosure.
  • The electronic device 40 is capable of executing the foregoing multi-system shared-memory management method and may be any suitable smart terminal device, such as an intelligent robot, a robot assistant, a PDA, a personal computer, a tablet, a smart phone, or a wearable smart device.
  • As shown in FIG. 13, the electronic device 40 includes one or more processors 41 and a memory 42; one processor 41 is taken as an example in FIG. 13.
  • The processor 41 and the memory 42 may be connected by a bus or in another manner; connection by a bus is taken as an example in FIG. 13.
  • The memory 42, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the multi-system shared-memory management method in the embodiments of the present application (for example, the first acquiring module 31, the first determining module 32, the first processing module 33, the second acquiring module 34 and the generating module 35 shown in FIG. 11).
  • The processor 41 executes various functional applications and data processing of the server by running the non-volatile software programs, instructions and modules stored in the memory 42, that is, it implements the multi-system shared-memory management method of the above method embodiments.
  • The memory 42 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application required for at least one function, and the data storage area may store data created according to the use of the multi-system shared-memory management apparatus, and the like.
  • memory 42 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
  • memory 42 may optionally include memory remotely located relative to processor 41, which may be connected to a multi-system shared memory management device over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • The one or more modules are stored in the memory 42 and, when executed by the one or more processors 41, perform the multi-system shared-memory management method of any of the above method embodiments, for example, performing the above-described method steps 11 to 15 in FIG. 2, method steps 11 to 16 in FIG. 7, method steps 11 to 13 in FIG. 8, and method steps 21 to 29 in FIG. 9.
  • The embodiments of the present application further provide a non-transitory computer-readable storage medium storing computer-executable instructions which, when executed by an electronic device, cause it to perform the multi-system shared-memory management method of any of the above method embodiments, for example, performing method steps 11 to 15 in FIG. 2, method steps 11 to 16 in FIG. 7, method steps 11 to 13 in FIG. 8, and method steps 21 to 29 in FIG. 9 described above, and implementing the functions of modules 31-35 in FIG. 11 and modules 31-39 in FIG. 12.
  • The embodiments of the present application further provide a computer program product, including a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the multi-system shared-memory management method of any of the foregoing method embodiments, for example, performing method steps 11 to 15 in FIG. 2, method steps 11 to 16 in FIG. 7, method steps 11 to 13 in FIG. 8, and method steps 21 to 29 in FIG. 9 described above, and implementing the functions of modules 31-35 in FIG. 11 and modules 31-39 in FIG. 12.
  • The device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A method and apparatus for managing multi-system shared memory. The method includes: when a data write instruction for writing data into the shared memory is received, acquiring the data size of to-be-written data to be written into the shared memory (11); determining whether the shared memory contains a free data block matching the data size (12); if no free data block matching the data size is contained, acquiring a free first data block whose memory size is larger than the data size, so that the to-be-written data is written into the first data block (13); after the to-be-written data is written into the first data block, acquiring the remaining free space of the first data block (14); and generating a new data block on the basis of the remaining free space (15). The method improves the usage rate and flexibility of the shared memory and improves data transmission efficiency to a certain extent, thereby improving overall system performance.

Description

一种多系统共享内存的管理方法及装置 技术领域
本申请涉及虚拟化技术领域,特别是涉及一种多系统共享内存的管理方法及装置。
背景技术
随着数据量和数据处理需求的进一步提升,负载对操作系统的性能要求越来越高,因而,在面临着大数据处理的今天,各种多操作系统也逐渐步入人们的视线,从传统的计算机集群系统到如今比较热门的异构操作系统都是人们在这方面的尝试。
多操作系统结构的主要思想是将不同的负载分配到不同的操作系统内核上执行,以提高系统的处理能力,并且兼容不同操作系统上的应用。采用这种设计,所有的内核在计算上是独立的即独立处理负载,但在资源使用上各个操作系统共享整个多操作系统结构的资源,例如输入/输出、内存等。
就内存这一多操作系统结构的资源而言,现存的多操作系统结构一般都采用共享内存的方式,即,允许两个不相关的进程访问同一个逻辑内存。
对于共享内存而言,现有技术的一种典型设计为:如果将共享内存抽象成一条“内存带”,则由共享内存管理服务程序按照固定顺序从前往后将“内存带”的内存以块数为基本单位分配给各个子操作系统的内核区,即各个子操作系统的内核区占用内存的顺序是从前往后固定排列,内核区分配的内存在整个多操作系统的运行过程中不会更改。
上述现有技术存在的缺陷为:各个子操作系统缺乏主动内存感知的能力,所有的共享内存管理工作都交由共享内存管理服务程序处理,从而造成共享内存分配不够灵活,并且容易造成部分共享内存的浪费,不能充分有效的利用共享内存。
发明内容
本申请实施例主要解决在多系统共享内存时,如何充分利用共享内存。
为解决上述技术问题,本申请实施例采用的一个技术方案是:提供一种多系统共享内存的管理方法,包括:
当接收到向所述共享内存写入数据的数据写入指令时,获取待写入所述共享内存的待写数据的数据大小;
判断所述共享内存中是否包含与所述数据大小相匹配的并且空闲的数据块;
如果不包含与所述数据大小相匹配的并且空闲的数据块,获取内存大小大于所述数据大小并且空闲的第一数据块,以使所述待写数据被写入所述第一数据块;
在所述待写数据被写入所述第一数据块之后,获取所述第一数据块的剩余空闲空间;
基于所述剩余空闲空间,生成新数据块。
为解决上述技术问题,本申请实施例采用的另一个技术方案是:提供一种多系统共享内存的管理装置,包括:
第一获取模块,用于当接收到向所述共享内存写入数据的数据写入指令时,获取待写入所述共享内存的待写数据的数据大小;
第一判断模块,用于判断所述共享内存中是否包含与所述数据大小相匹配的并且空闲的数据块;
第一处理模块,用于如果不包含与所述数据大小相匹配的并且空闲的数据块,获取内存大小大于所述数据大小并且空闲的第一数据块,以使所述待写数据被写入所述第一数据块;
第二获取模块,用于在所述待写数据被写入所述第一数据块之后,获取所述第一数据块的剩余空闲空间;
生成模块,用于基于所述剩余空闲空间,生成新数据块。
为解决上述技术问题,本申请实施例采用的又一个技术方案是:提供一种电子设备,包括:至少一个处理器;以及,与所述至少一个处理器通信连接的存储器;其中,所述存储器存储有可被所述至少一个处理器执行的指令程序,所述指令程序被所述至少一个处理器执行,以使所述至少一个处理器执行如上所述的方法。
为解决上述技术问题,本申请实施例采用的再一个技术方案是:提供一种 非易失性计算机可读存储介质,所述计算机可读存储介质存储有计算机可执行指令,所述计算机可执行指令用于使计算机执行如上所述的方法。
为解决上述技术问题,本申请实施例采用的还一个技术方案是:提供一种计算机程序产品,所述计算机程序产品包括:非易失性计算机可读存储介质以及内嵌于所述非易失性计算机可读存储介质的计算机程序指令;所述计算机程序指令包括用以使处理器执行如上所述的方法的指令。
本申请实施例的有益效果在于:本申请实施例提供的多系统共享内存的管理方法及装置,根据待写数据的数据大小为其分配适合的共享内存,在分配的数据块的内存大小大于待写数据的数据大小时,获取该数据块剩余的空闲空间,并且将该剩余的空闲空间生成新的数据块。该实施方式提高了共享内存的使用率和灵活性,一定程度上提升了数据的传输效率,从而在整体上提升了系统的性能。
附图说明
一个或多个实施例通过与之对应的附图中的图片进行示例性说明,这些示例性说明并不构成对实施例的限定,附图中具有相同参考数字标号的元件表示为类似的元件,除非有特别申明,附图中的图不构成比例限制。
图1是本申请实施例提供的基于QEMU-KVM技术的虚拟化方案的结构示意图;
图2是本申请实施例提供的一种多系统共享内存的管理方法的流程示意图;
图3是本申请实施例提供的一种多系统共享内存的管理方法中共享内存分配时的一种状态示意图;
图4是本申请实施例提供的一种多系统共享内存的管理方法中共享内存分配时的另一种状态示意图;
图5是本申请实施例提供的一种多系统共享内存的管理方法中一个数据块的结构示意图;
图6是本申请实施例提供的一种多系统共享内存的管理方法中一个数据块的控制信息头部分的结构示意图;
图7是本申请实施例提供的另一种多系统共享内存的管理方法的流程示意 图;
图8是本申请实施例提供的另一种多系统共享内存的管理方法中获取所述待写数据写入第一数据块后的存储地址的方法的流程示意图;
图9是本申请另一实施例提供的一种多系统共享内存的管理方法的流程示意图;
图10是本申请另一实施例提供的一种多系统共享内存的管理方法中释放数据块的前后过程的状态示意图;
图11是本申请实施例提供的一种多系统共享内存的管理装置的结构示意图;
图12是本申请另一实施例提供的一种多系统共享内存的管理装置的结构示意图;
图13是本申请实施例提供的执行多系统共享内存的管理方法的电子设备的硬件结构示意图。
具体实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处所描述的具体实施例仅用以解释本申请,并不用于限定本申请。
需要说明的是,如果不冲突,本申请实施例中的各个特征可以相互结合,均在本申请的保护范围之内。另外,虽然在装置示意图中进行了功能模块划分,在流程图中示出了逻辑顺序,但是在某些情况下,可以以不同于装置示意图中的模块划分,或流程图中的顺序执行所示出或描述的步骤。
为使本领域技术人员更好地理解本申请,以下对本申请所涉及的相关技术进行简单说明。
虚拟化是将计算机物理资源如服务器、网络、内存及存储等予以抽象、转换后呈现出来,使用户可以比原本的组态更好的方式来应用这些资源,这些资源的新虚拟部分是不受现有资源的架设方式,地域或物理组态所限制。在虚拟化中,真实的物理环境一般称为宿主机器(Host),而经过虚拟化被建立起来的环境被称为客户机器(Guest)。在宿主机器上运行的系统被称为宿主机操作系统 (Host OS),而运行在客户机器上的操作系统被称为客户机操作系统(Guest OS),负责虚拟化的一层一般被称为虚拟机监视器(Virtual Machine Monitor,VMM)。
KVM(Kernel-based Virtual Machine)就是一种开源的VMM。KVM的思想是在Linux内核的基础上添加虚拟机管理模块,重用Linux内核中已经完善的进程调度、内存管理和I/O管理等部分,使之成为一个可以支持虚拟机运行的虚拟机管理程序,所以KVM并不是一个完整的模拟器,可以将其认为是一个提供了虚拟化功能的内核插件,其具体的模拟器工作需要借助虚拟操作系统模拟器(QEMU)来完成,在业界已经将QEMU与KVM的结合作为一种常用的虚拟化实施架构。其中,QEMU主要作为模拟器来使用,QEMU-KVM是基于硬件虚拟化技术,并结合了QEMU提供设备虚拟化功能,来实现整个系统虚拟化。
在本申请实施例中,基于该QEMU-KVM虚拟化架构实施本申请实施例提供的多系统共享内存的管理方法。
具体地,如图1所示,提供了一种基于QEMU-KVM技术的虚拟化方案,该方案由一个Host OS,若干个虚拟出来的Guest OS组成,这些操作系统运行在同一套硬件处理器芯片上,共享处理器及外设资源。支持该虚拟化架构的ARM处理器至少包含三种模式,第一种模式下运行虚拟机管理程序,第二种模式下运行Linux内核程序,第三种模式下运行用户空间程序。
其中,虚拟硬件平台负责管理CPU、内存、定时器、中断等硬件资源,通过CPU、内存、定时器、中断的虚拟化资源,可以把不同的操作系统分时加载到物理处理器上运行,从而实现系统虚拟化的功能。
KVM虚拟机监视器/虚拟机管理程序层跨越Linux主机内核和虚拟硬件平台两层,一方面为QEMU提供驱动节点,允许QEMU通过KVM节点创建虚拟CPU,管理虚拟化资源;另一方面KVM虚拟机监视器/虚拟机管理程序实现了把Host Linux系统从物理CPU上切换出去,然后把Guest Linux系统加载到物理处理器上运行,最后处理Guest Linux系统异常退出的后续事务。
QEMU作为Host Linux的一个应用运行,为Guest Linux的运行提供虚拟的硬件设备资源,通过KVM虚拟机监视器/虚拟机管理程序的设备KVM节点,创建虚拟CPU,分配物理硬件资源,实现把一个未经修改的Guest Linux加载到物理硬件处理上去运行。
为了在机器人、手机或者平板等智能终端设备上实现上述虚拟化方案,需要解决所有硬件设备的虚拟化问题,即允许虚拟出来的操作系统也能使用真实的硬件设备,比如使用内存、中断资源、定时器、网络、多媒体、摄像头、显示等外设资源。由于高效的数据传输方法能够达到理想的设备虚拟化效果,通常采用多个系统之间共享内存的方法来解决所有硬件设备的虚拟化问题。
因此,下述实施例提供了一种多系统共享内存的管理方法及装置,该方法应用在上述QEMU-KVM虚拟化架构上,该多系统共享内存的管理方法能够充分利用共享内存,提升数据的传输效率,以使各种智能终端设备达到理想的虚拟化效果。
请参考图2,图2是本申请实施例提供的一种多系统共享内存的管理方法的流程示意图。如图2所示,该方法包括:
步骤11、当接收到向所述共享内存写入数据的数据写入指令时,获取待写入所述共享内存的待写数据的数据大小;
其中,当待传输数据的虚拟操作系统需要向寄宿于同一物理机的目标虚拟系统传输数据时,该待传输数据的虚拟操作系统向宿主机操作系统发送向共享内存写入数据的数据写入指令,此时,可以根据该数据写入指令获取待写入共享内存的待写数据的数据大小。
例如,当虚拟操作系统A向虚拟操作系统B(即目标虚拟系统)传输数据时,虚拟操作系统A向Host OS发送数据写入指令,由Host OS中所包含的共享内存管理应用程序根据接收到的数据写入指令,获取待写数据的数据大小。该待写数据的数据大小由虚拟操作系统A计算得到。
步骤12、判断所述共享内存中是否包含与所述数据大小相匹配的并且空闲的数据块;
步骤13、如果不包含与所述数据大小相匹配的并且空闲的数据块,获取内存大小大于所述数据大小并且空闲的第一数据块,以使所述待写数据被写入所述第一数据块;
步骤14、在所述待写数据被写入所述第一数据块之后,获取所述第一数据块的剩余空闲空间;
步骤15、基于所述剩余空闲空间,生成新数据块。
在本实施例中,上述过程即共享内存的分配过程。如图3所示,基于双向 链表的数据结构形式,提供了一种共享内存分配时的状态示意图。
在初始化状态时,如图4所示,该共享内存只有一个数据块。同样如图4所示,在运行阶段时,进行共享内存分配,此时该共享内存包含至少两个数据块。在这里,根据待写数据的数据大小匹配合适的数据块,如果共享内存中存在一个数据块的内存大小恰好与待写数据的数据大小相同并且空闲的数据块,则表示共享内存中包含与该数据大小相匹配的并且空闲的数据块,此时,直接获取该数据块,并且控制待写数据写入该数据块。如果共享内存中不存在一个数据块的内存大小与待写数据的数据大小相同并且空闲的数据块,则获取内存大小大于该数据大小并且空闲的数据块,并且控制待写数据写入该数据块,此时,该数据块必定包含剩余的空闲空间,将该空闲空间生成新的数据块。该新的数据块可以被其他进程利用,以保证共享内存不被浪费。
其中,生成的新的数据块的大小由待写数据写入大于所述数据大小并且空闲的第一数据块时的方式来决定。在本实施例中,该待写数据写入大于所述数据大小并且空闲的第一数据块的过程具体包括:根据所述待写数据的数据大小,控制所述待写数据写入大于所述数据大小并且空闲的第一数据块的连续地址空间。在这里,保证写入第一数据块的数据是连续的存储在第一数据块的地址空间中的,从而使生成的新数据块的内存大小尽可能的大,以减少存储碎片的产生。例如,数据块大小为10KB,待写数据为8KB,将该8KB数据写入第一数据块的连续地址空间,则空闲的空间为2KB,当待写数据从所述第一数据块的地址空间的首地址或者尾地址开始,写入所述第一数据块的地址空间时,该生成的新数据块大小为2KB。
因此,优选地,从所述数据块的地址空间的首地址或者尾地址开始,控制所述待写数据写入所述第一数据块的地址空间。从而充分的提高共享内存的利用率。
其中,以双向链表管理共享内存数据块节点时,每一个数据块节点的格式如图5所示,该数据块节点包含三段区域,分别是:
控制信息头部分,该部分用于记录共享内存数据块的状态、大小等信息;
应用程序数据区,该部分是真正分配给Guest OS可以进行数据读写的区域;
控制信息尾部分,该部分用于标记一个数据节点结尾,也就是说Guest OS不可以写越这部分区,主要用于监控Guest OS写越界。
其中,如图6所示,该控制信息头部分的主要信息域包含:节点偏移值,用于记录数据块节点从共享内存起始位置到该数据块节点起始位置的偏移量。前节点偏移值,用于记录数据块节点双向链表中该数据块节点的前一个数据块节点的偏移值。后节点偏移值,用于记录数据块节点双向链表中该数据块节点的下一个数据块节点的偏移值。节点状态,用于标识该数据块节点当前状态处于空闲或者使用状态。其它信息,用于记录该数据块节点的长度信息、同步信息等。
需要说明的是,该共享内存在分配内存时还可以用循环链表或者其他数据结构来管理共享内存的数据块节点,并不仅限于图3、图4所示的形式。当采用其他数据结构时,每个数据块节点的格式与当前的数据结构相匹配。
在一些实施例中,如图7所示,当所述待写数据写入数据块后,该方法还包括:
步骤16、获取所述待写数据写入所述第一数据块后的存储地址,并发送所述存储地址。
该存储地址,即待写数据在第一数据块中存储的位置。共享内存管理应用程序获取待写数据写入第一数据块的存储地址后,将该存储地址发送给目标虚拟系统,目标虚拟系统根据该存储地址从第一数据块中读取所述待写数据,从而完成数据的传输。
其中,如图8所示,获取所述待写数据写入所述第一数据块后的存储地址,具体包括:
步骤161、获取所述第一数据块的偏移值,所述偏移值为所述第一数据块从所述共享内存的起始位置到所述第一数据块的起始位置的偏移量;
步骤162、根据所述偏移值计算所述第一数据块的首地址;
步骤163、根据所述第一数据块的首地址获取所述存储地址。
根据上述图5中一个数据块节点的格式可知,该偏移值可以根据控制信息头部分记录的节点偏移值而得到,获取偏移值后,将共享内存的起始节点地址与该偏移值相加即可获取所述第一数据块的首地址。该存储地址即一个数据块节点中应用程序数据区的地址,可以是其首地址,也可以是其首地址到尾地址这个存储区间。根据第一数据块的首地址,以及控制信息头部分记录的数据块的大小等信息,可以获取所述存储地址。
本申请实施例提供了一种多系统共享内存的管理方法,该方法根据待写数据的数据大小为其分配适合的共享内存,在分配的数据块的内存大小大于待写数据的数据大小时,获取该数据块剩余的空闲空间,并且将该剩余的空闲空间生成新的数据块。该实施方式提高了共享内存的使用率和灵活性,通过充分提升共享内存的使用率,从而提升了数据的传输效率,整体上提升了系统性能。
请参考图9,图9是本申请另一实施例提供的一种多系统共享内存的管理方法的流程示意图。如图9所示,该方法包括:
步骤21、当接收到向所述共享内存写入数据的数据写入指令时,获取待写入所述共享内存的待写数据的数据大小;
步骤22、判断所述共享内存中是否包含与所述数据大小相匹配的并且空闲的数据块;
步骤23、如果包含与所述数据大小相匹配的并且空闲的数据块,获取内存大小与所述数据大小相匹配的第二数据块,以使所述待写数据被写入所述第二数据块;
步骤24、如果不包含与所述数据大小相匹配的并且空闲的数据块,获取内存大小大于所述数据大小并且空闲的第一数据块,以使所述待写数据被写入所述第一数据块;
步骤25、在所述待写数据被写入大于所述数据大小并且空闲的所述第一数据块之后,获取所述第一数据块的剩余空闲空间;
步骤26、基于所述剩余空闲空间,生成新数据块;
上述步骤21-步骤26可参考上述方法实施例中的叙述,在此不再赘述。
步骤27、当接收到从所述共享内存释放所述待写数据的数据释放指令时,释放存储了所述待写数据的数据块;
其中,可以是在共享内存管理应用程序接收到目标虚拟系统发送的数据读取完成指令后控制所述共享内存释放待写数据,也可以是共享内存管理应用程序主动控制共享内存释放待写数据。该数据块为上述第一数据块或者第二数据块。
步骤28、判断所述数据块的前一数据块以及所述数据块的后一数据块是否空闲;
步骤29、将判断为空闲的数据块与释放所述待写数据的数据块进行合并。
在释放数据块的存储空间的同时,还判断该数据块的前一数据块的状态是否空闲,以及判断该数据块的后一数据块的状态是否空闲,如果前一数据块空闲而后一数据块不空闲,则将前一数据块与该数据块进行合并,合并为一个数据块;如果后一数据块空闲而前一数据块不空闲,则将后一数据块与该数据块进行合并,合并为一个数据块;如果前一数据块和后一数据块均空闲,则同时将前一数据块、后一数据块与该数据块进行合并,生成一个新的数据块。如图10所示,示出了该数据块释放的前后过程。
需要说明的是,本申请实施例提供的多系统共享内存的管理方法可以应用于共享内存的数据传输过程,下面通过举例来说明该过程。
例如,当虚拟操作系统A向虚拟操作系统B传输数据时。首先,虚拟操作系统A向宿主操作系统发送数据传输请求指令,该宿主操作系统中包含共享内存管理应用程序,由共享内存管理应用程序根据接收到的数据传输请求指令,获取待写数据的数据大小。然后,根据待写数据的数据大小分配共享内存,获取合适的数据块,并控制待写数据写入该数据块,在这个过程中,如果分配的数据块有剩余空闲空间,则将该剩余的空闲空间生成新的数据块。在待写数据写入数据块后,共享内存管理应用程序调用同步接口,通知虚拟操作系统B数据已经写入。虚拟操作系统B接收通知并读取数据块中的数据,并且在读取完成后,向宿主操作系统发送确认消息,根据该确认消息,共享内存管理应用程序调用内存释放函数释放存储了数据的数据块,在这个过程中,如果该释放的数据块的前一数据块或者后一数据块为空闲状态,则将空闲的数据块与释放了数据的数据块进行合并。其中,共享内存的数据结构可以是双向链表,或者循环链表,或者其他数据结构。
在本实施例中,根据待写数据的数据大小为其分配适合的共享内存,在分配的数据块的内存大小大于待写数据的数据大小时,获取该数据块剩余的空闲空间,并且将该剩余的空闲空间生成新的数据块;在共享内存释放数据时,还将释放数据的数据块与其临近的空闲数据块进行合并。一方面,该实施方式提高了共享内存的使用率和灵活性,提升了数据的传输效率,另一方面,通过合并释放的数据块可以减少下一次内存分配的时间,提高了内存分配的效率。
请参考图11,图11是本申请实施例提供的一种多系统共享内存的管理装置的结构示意图。如图11所示,该装置30包括:第一获取模块31、第一判断模 块32、第一处理模块33、第二获取模块34以及生成模块35。
其中,第一获取模块31,用于当接收到向所述共享内存写入数据的数据写入指令时,获取待写入所述共享内存的待写数据的数据大小;第一判断模块32,用于判断所述共享内存中是否包含与所述数据大小相匹配的并且空闲的数据块;第一处理模块33,用于如果不包含与所述数据大小相匹配的并且空闲的数据块,获取内存大小大于所述数据大小并且空闲的第一数据块,以使所述待写数据被写入所述第一数据块;第二获取模块34,用于在所述待写数据被写入所述第一数据块之后,获取所述第一数据块的剩余空闲空间;生成模块35,用于基于所述剩余空闲空间,生成新数据块。
其中,所述待写数据被写入所述第一数据块的地址空间,并且该地址空间是连续的地址空间。
其中,所述连续的地址空间的首地址与所述第一数据块的地址空间的首地址相同,或者所述连续的地址空间的尾地址与所述第一数据块的地址空间的尾地址相同。
在一些实施例中,如图12所示,该装置30还包括:第二处理模块36、释放模块37、第二判断模块38以及合并模块39。
其中,第二处理模块36,用于如果包含与所述数据大小相匹配的并且空闲的数据块,获取内存大小与所述数据大小相匹配的第二数据块,以使所述待写数据写入所述第二数据块。释放模块37,用于当接收到从所述共享内存释放所述待写数据的数据释放指令时,释放存储了所述待写数据的数据块。第二判断模块38,用于判断所述数据块的前一数据块以及所述数据块的后一数据块是否空闲。合并模块39,用于将判断为空闲的数据块与释放所述待写数据的数据块进行合并。
需要说明的是,本申请实施例中的多系统共享内存的管理装置中的各个模块之间的信息交互、执行过程等内容,由于与本申请方法实施例基于同一构思,具体内容同样适用于多系统共享内存的管理装置。
本实施例提供了一种多系统共享内存的管理装置,该装置根据待写数据的数据大小为其分配适合的共享内存,在分配的数据块的内存大小大于待写数据的数据大小时,获取该数据块剩余的空闲空间,并且将该剩余的空闲空间生成新的数据块;在共享内存释放数据时,还将释放数据的数据块与其临近的空闲 数据块进行合并。一方面,该实施方式提高了共享内存的使用率和灵活性,提升了数据的传输效率,另一方面,通过合并释放的数据块可以减少下一次内存分配的时间,提高了内存分配的效率。
请参考图13,图13是本申请实施例提供的执行多系统共享内存的管理方法的电子设备的硬件结构示意图,该电子设备40能够执行上述多系统共享内存的管理方法,其可以是任何合适的智能终端设备,比如:智能机器人、机器人助手、PDA、个人电脑、平板电脑、智能手机、可穿戴智能设备等。
具体地,如图13所示,该电子设备40包括:一个或多个处理器41以及存储器42,图12中以一个处理器41为例。
处理器41和存储器42可以通过总线或者其他方式连接,图13中以通过总线连接为例。
存储器42作为一种非易失性计算机可读存储介质,可用于存储非易失性软件程序、非易失性计算机可执行程序以及模块,如本申请实施例中的多系统共享内存的管理方法对应的程序指令/模块(例如,附图11所示的第一获取模块31、第一判断模块32、第一处理模块33、第二获取模块34以及生成模块35)。处理器41通过运行存储在存储器42中的非易失性软件程序、指令以及模块,从而执行服务器的各种功能应用以及数据处理,即实现上述方法实施例多系统共享内存的管理方法。
存储器42可以包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需要的应用程序;存储数据区可存储根据多系统共享内存的管理装置的使用所创建的数据等。此外,存储器42可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。在一些实施例中,存储器42可选包括相对于处理器41远程设置的存储器,这些远程存储器可以通过网络连接至多系统共享内存的管理装置。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
所述一个或者多个模块存储在所述存储器42中,当被所述一个或者多个处理器41执行时,执行上述任意方法实施例中多系统共享内存的管理方法,例如,执行以上描述的图2中的方法步骤11至步骤15,图7中的方法步骤11至步骤16,图8中的方法步骤11至步骤13,图9中的方法步骤21至步骤29,实现图 11中的模块31-35,图12中的模块31-39的功能。
上述产品可执行本申请实施例所提供的方法,具备执行方法相应的功能模块和有益效果。未在本实施例中详尽描述的技术细节,可参见本申请实施例所提供的方法。
本申请实施例还提供了一种非易失性计算机可读存储介质,所述非易失性计算机可读存储介质存储有计算机可执行指令,该计算机可执行指令被电子设备执行上述任意方法实施例中的多系统共享内存的管理方法,例如,执行以上描述的图2中的方法步骤11至步骤15,图7中的方法步骤11至步骤16,图8中的方法步骤11至步骤13,图9中的方法步骤21至步骤29,实现图11中的模块31-35,图12中的模块31-39的功能。
本申请实施例还提供了一种计算机程序产品,包括存储在非易失性计算机可读存储介质上的计算程序,所述计算机程序包括程序指令,当所述程序指令被计算机执行时时,使所述计算机执行上述任意方法实施例中的多系统共享内存的管理方法,例如,执行以上描述的图2中的方法步骤11至步骤15,图7中的方法步骤11至步骤16,图8中的方法步骤11至步骤13,图9中的方法步骤21至步骤29,实现图11中的模块31-35,图12中的模块31-39的功能。
以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。
通过以上的实施方式的描述,本领域普通技术人员可以清楚地了解到各实施方式可借助软件加通用硬件平台的方式来实现,当然也可以通过硬件。本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程是可以通过计算机程序来指令相关的硬件来完成,所述的程序可存储于一计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,所述的存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory,ROM)或随机存储记忆体(Random Access Memory,RAM)等。
最后应说明的是:以上实施例仅用以说明本申请的技术方案,而非对其限制;在本申请的思路下,以上实施例或者不同实施例中的技术特征之间也可以 进行组合,步骤可以以任意顺序实现,并存在如上所述的本申请的不同方面的许多其它变化,为了简明,它们没有在细节中提供;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (18)

  1. 一种多系统共享内存的管理方法,其特征在于,包括:
    当接收到向所述共享内存写入数据的数据写入指令时,获取待写入所述共享内存的待写数据的数据大小;
    判断所述共享内存中是否包含与所述数据大小相匹配的并且空闲的数据块;
    如果不包含与所述数据大小相匹配的并且空闲的数据块,获取内存大小大于所述数据大小并且空闲的第一数据块,以使所述待写数据被写入所述第一数据块;
    在所述待写数据被写入所述第一数据块之后,获取所述第一数据块的剩余空闲空间;
    基于所述剩余空闲空间,生成新数据块。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    如果包含与所述数据大小相匹配的并且空闲的数据块,获取内存大小与所述数据大小相匹配的第二数据块,以使所述待写数据被写入所述第二数据块。
  3. 根据权利要求1所述的方法,其特征在于,所述待写数据被写入所述第一数据块的地址空间,所述地址空间是连续的地址空间。
  4. 根据权利要求3所述的方法,其特征在于,所述连续的地址空间的首地址与所述第一数据块的地址空间的首地址相同,或者所述连续的地址空间的尾地址与所述第一数据块的地址空间的尾地址相同。
  5. 根据权利要求1所述的方法,其特征在于,当所述共享内存包含至少两个数据块时,通过循环链表或者双向链表管理所述共享内存的数据块。
  6. 根据权利要求5所述的方法,其特征在于,当通过所述双向链表管理所述共享内存的数据块时,所述数据块的控制信息头部分包括所述数据块的偏移值、所述数据块的前一数据块的偏移值、所述数据块的后一数据块的偏移值、所述数据块的状态信息以及所述数据块的长度信息和同步信息。
  7. 根据权利要求1至6任一项所述的方法,其特征在于,所述方法还包括:
    当接收到从所述共享内存释放所述待写数据的数据释放指令时,释放存储了所述待写数据的数据块。
  8. 根据权利要求7所述的方法,其特征在于,所述方法还包括:
    判断所述数据块的前一数据块以及所述数据块的后一数据块是否空闲;
    将判断为空闲的数据块与释放所述待写数据的数据块进行合并。
  9. 一种多系统共享内存的管理装置,其特征在于,包括:
    第一获取模块,用于当接收到向所述共享内存写入数据的数据写入指令时,获取待写入所述共享内存的待写数据的数据大小;
    第一判断模块,用于判断所述共享内存中是否包含与所述数据大小相匹配的并且空闲的数据块;
    第一处理模块,用于如果不包含与所述数据大小相匹配的并且空闲的数据块,获取内存大小大于所述数据大小并且空闲的第一数据块,以使所述待写数据被写入所述第一数据块;
    第二获取模块,用于在所述待写数据被写入所述第一数据块之后,获取所述第一数据块的剩余空闲空间;
    生成模块,用于基于所述剩余空闲空间,生成新数据块。
  10. 根据权利要求9所述的装置,其特征在于,所述装置还包括:
    第二处理模块,用于如果包含与所述数据大小相匹配的并且空闲的数据块,获取内存大小与所述数据大小相匹配的第二数据块,以使所述待写数据被写入所述第二数据块。
  11. 根据权利要求9所述的装置,其特征在于,所述待写数据被写入所述第一数据块的地址空间,所述地址空间是连续的地址空间。
  12. 根据权利要求11所述的装置,其特征在于,所述连续的地址空间的首地址与所述第一数据块的地址空间的首地址相同,或者所述连续的地址空间的尾地址与所述第一数据块的地址空间的尾地址相同。
  13. 根据权利要求9所述的装置,其特征在于,当所述共享内存包含至少两个数据块时,所述共享内存中的数据块通过循环链表或者双向链表的形式连接。
  14. 根据权利要求9至13任一项所述的装置,其特征在于,所述装置还包括:
    释放模块,用于当接收到从所述共享内存释放所述待写数据的数据释放指令时,释放存储了所述待写数据的数据块。
  15. 根据权利要求14所述的装置,其特征在于,所述装置还包括:
    第二判断模块,用于判断所述数据块的前一数据块以及所述数据块的后一数据块是否空闲;
    合并模块,用于将判断为空闲的数据块与释放所述待写数据的数据块进行合并。
  16. 一种电子设备,其特征在于,包括:
    至少一个处理器;以及,
    与所述至少一个处理器通信连接的存储器;其中,
    所述存储器存储有可被所述至少一个处理器执行的指令程序,所述指令程序被所述至少一个处理器执行,以使所述至少一个处理器执行如权利要求1至8任一项所述的方法。
  17. 一种非易失性计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有计算机可执行指令,所述计算机可执行指令用于使计算机执行权利要求1至8任一项所述的方法。
  18. 一种计算机程序产品,其特征在于,所述计算机程序产品包括:非易失性计算机可读存储介质以及内嵌于所述非易失性计算机可读存储介质的计算机程序指令;所述计算机程序指令包括用以使处理器执行权利要求1至8任一项所述的方法的指令。
PCT/CN2017/096480 2017-08-08 2017-08-08 一种多系统共享内存的管理方法及装置 WO2019028682A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201780002588.6A CN108064377B (zh) 2017-08-08 2017-08-08 一种多系统共享内存的管理方法及装置
PCT/CN2017/096480 WO2019028682A1 (zh) 2017-08-08 2017-08-08 一种多系统共享内存的管理方法及装置
US16/784,613 US11281388B2 (en) 2017-08-08 2020-02-07 Method for managing a multi-system shared memory, electronic device and non-volatile computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/096480 WO2019028682A1 (zh) 2017-08-08 2017-08-08 一种多系统共享内存的管理方法及装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/784,613 Continuation US11281388B2 (en) 2017-08-08 2020-02-07 Method for managing a multi-system shared memory, electronic device and non-volatile computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2019028682A1 true WO2019028682A1 (zh) 2019-02-14

Family

ID=62142063

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/096480 WO2019028682A1 (zh) 2017-08-08 2017-08-08 一种多系统共享内存的管理方法及装置

Country Status (3)

Country Link
US (1) US11281388B2 (zh)
CN (1) CN108064377B (zh)
WO (1) WO2019028682A1 (zh)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110618883B (zh) * 2019-09-26 2022-09-13 迈普通信技术股份有限公司 一种用于共享内存链表的方法、装置、设备及存储介质
CN111506426B (zh) * 2020-04-17 2021-05-04 翱捷科技(深圳)有限公司 内存管理方法、装置及电子设备
CN111745650B (zh) * 2020-06-15 2021-10-15 哈工大机器人(合肥)国际创新研究院 一种机器人操作系统的运行方法和机器人的控制方法
CN112015522B (zh) * 2020-11-02 2021-02-05 鹏城实验室 系统功能扩展方法、装置及计算机可读存储介质
CN113626214B (zh) * 2021-07-16 2024-02-09 浪潮电子信息产业股份有限公司 一种信息传输方法、系统、电子设备及存储介质
CN115086001B (zh) * 2022-06-10 2024-04-09 杭州安恒信息技术股份有限公司 采样数据缓存方法、装置及存储介质
CN115344226B (zh) * 2022-10-20 2023-03-24 亿咖通(北京)科技有限公司 一种虚拟化管理下的投屏方法、装置、设备及介质
CN116737409A (zh) * 2023-05-22 2023-09-12 晶诺微(上海)科技有限公司 超大数据流的实时处理方法以及数据处理系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102981964A (zh) * 2012-11-01 2013-03-20 华为技术有限公司 数据存储空间的管理方法及系统
US20160011972A1 (en) * 2009-09-09 2016-01-14 Intelligent Intellectual Property Holdings 2 Llc Apparatus, system, and method for a storage layer
CN106339258A (zh) * 2016-08-10 2017-01-18 西安诺瓦电子科技有限公司 可编程逻辑器件与微处理器共享内存的管理方法及装置
CN106547625A (zh) * 2016-11-04 2017-03-29 深圳市证通电子股份有限公司 金融终端的内存分配方法及装置
CN106980551A (zh) * 2017-03-24 2017-07-25 山东浪潮商用系统有限公司 一种进程通信方法及装置

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103425435B (zh) * 2012-05-15 2016-01-20 深圳市腾讯计算机系统有限公司 磁盘存储方法及磁盘存储系统
CN102929976B (zh) * 2012-10-17 2016-06-15 华为技术有限公司 备份数据访问方法及装置
US10157146B2 (en) * 2015-02-12 2018-12-18 Red Hat Israel, Ltd. Local access DMA with shared memory pool

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160011972A1 (en) * 2009-09-09 2016-01-14 Intelligent Intellectual Property Holdings 2 Llc Apparatus, system, and method for a storage layer
CN102981964A (zh) * 2012-11-01 2013-03-20 华为技术有限公司 数据存储空间的管理方法及系统
CN106339258A (zh) * 2016-08-10 2017-01-18 西安诺瓦电子科技有限公司 可编程逻辑器件与微处理器共享内存的管理方法及装置
CN106547625A (zh) * 2016-11-04 2017-03-29 深圳市证通电子股份有限公司 金融终端的内存分配方法及装置
CN106980551A (zh) * 2017-03-24 2017-07-25 山东浪潮商用系统有限公司 一种进程通信方法及装置

Also Published As

Publication number Publication date
US20200174669A1 (en) 2020-06-04
CN108064377B (zh) 2023-01-24
US11281388B2 (en) 2022-03-22
CN108064377A (zh) 2018-05-22

Similar Documents

Publication Publication Date Title
US10884799B2 (en) Multi-core processor in storage system executing dynamic thread for increased core availability
WO2019028682A1 (zh) 一种多系统共享内存的管理方法及装置
EP3069263B1 (en) Session idle optimization for streaming server
US10768960B2 (en) Method for affinity binding of interrupt of virtual network interface card, and computer device
KR101952795B1 (ko) 자원 프로세싱 방법, 운영체제, 및 장치
US10191759B2 (en) Apparatus and method for scheduling graphics processing unit workloads from virtual machines
CN107273199B (zh) 用于管理虚拟化环境中的中断的体系结构和方法
US11093297B2 (en) Workload optimization system
US20110113426A1 (en) Apparatuses for switching the running of a virtual machine between multiple computer devices belonging to the same computer platform and the associated switching methods
US20140095769A1 (en) Flash memory dual in-line memory module management
US10552080B2 (en) Multi-target post-copy guest migration
US20200053022A1 (en) Network-accessible data volume modification
US9201823B2 (en) Pessimistic interrupt affinity for devices
WO2019127191A1 (zh) 一种多操作系统共享文件系统的方法、装置和电子设备
US9003094B2 (en) Optimistic interrupt affinity for devices
US9715403B2 (en) Optimized extended context management for virtual machines
US20210342173A1 (en) Dynamic power management states for virtual machine migration
US10873630B2 (en) Server architecture having dedicated compute resources for processing infrastructure-related workloads
CN106815067B (zh) 带i/o虚拟化的虚拟机在线迁移方法、装置
US9569241B2 (en) Sharing devices assigned to virtual machines using runtime exclusion
WO2022083158A1 (zh) 数据处理的方法、实例以及系统
US11853798B2 (en) Disaggregated memory pool assignment
US20210157626A1 (en) Prioritizing booting of virtual execution environments
KR20130104958A (ko) 다중 운영체제들을 실행하는 장치 및 방법
US20220058062A1 (en) System resource allocation for code execution

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17920864

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.06.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17920864

Country of ref document: EP

Kind code of ref document: A1