WO2023010879A1 - Memory management method and apparatus, and computer device - Google Patents


Info

Publication number
WO2023010879A1
Authority
WO
WIPO (PCT)
Prior art keywords
buffer
data
memory
application
read
Prior art date
Application number
PCT/CN2022/085854
Other languages
French (fr)
Chinese (zh)
Inventor
王义彬
田雨露
林鑫翔
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2023010879A1 publication Critical patent/WO2023010879A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0871Allocation or management of cache space
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • an embodiment of the present application provides a memory management method, the method comprising: receiving a buffer allocation instruction from an application, where the buffer allocation instruction is used to request allocation of a buffer in the memory for the application, and after the data stored in the buffer reaches a certain amount, the application obtains the data in the buffer at one time; allocating a buffer for the application in the cache area of the memory; and returning the address of the buffer in the memory.
  • in the sixth possible implementation of the memory management method, the data to be written is stored from the buffer into the storage when any of the following holds: the amount of data stored in the buffer is greater than a second preset threshold; all the data to be written required by the application has been stored in the buffer; a write completion instruction is received; or a write pause instruction is received.
  • the device further includes: a second receiving module, configured to receive a memory release instruction from the application, where the memory release instruction is used to request release of the application's buffer and includes the address of the buffer in the memory; and a release module, configured to release the buffer.
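  The allocate/release flow described in these claims can be sketched in Python. All names here (`MemoryManager`, `alloc_buffer`, `free_buffer`) are illustrative assumptions, not the patent's actual interfaces; the point being illustrated is that the buffer is carved out of the existing cache area, so allocating it consumes no memory beyond the cache itself:

```python
class MemoryManager:
    """Sketch: the application's buffer is a region inside the memory's
    cache area, so allocation reuses the cache's storage space."""

    def __init__(self, cache_size):
        self.cache = bytearray(cache_size)  # the cache area in memory
        self.next_free = 0                  # bump-pointer allocator, for brevity
        self.buffers = {}                   # buffer address -> size

    def alloc_buffer(self, size):
        # Handle a buffer allocation instruction: reserve `size` bytes
        # inside the cache area and return the buffer's address.
        if self.next_free + size > len(self.cache):
            raise MemoryError("cache area exhausted")
        addr = self.next_free
        self.next_free += size
        self.buffers[addr] = size
        return addr                         # "address of the buffer in memory"

    def free_buffer(self, addr):
        # Handle a memory release instruction, which carries the
        # buffer's address in memory.
        del self.buffers[addr]
```

A real implementation would need a proper allocator rather than the bump pointer used here; the sketch only shows the request/allocate/return-address/release sequence of the claims.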
  • FIG. 1 shows a schematic structural diagram of a storage system in the related art
  • FIG. 2 shows a schematic structural diagram of a storage system in an embodiment of the present application
  • FIG. 8a shows a schematic diagram of interface redirection in an embodiment of the present application
  • FIG. 8b shows interface implementations in the related art and in the embodiment of the present application.
  • FIG. 9 shows a schematic diagram of data interaction in the embodiment of the present application in a big data scenario
  • FIG. 2 shows a schematic structural diagram of a storage system in an embodiment of the present application.
  • the storage system 20 includes a memory manager 21 , a memory 22 and a storage 23 .
  • the memory 22 may be used for temporarily storing data
  • the storage 23 may be used for persistently storing data.
  • memory 22 may be used to store pre-stored data, intermediate data, and result data during sample collection, modeling, and prediction.
  • Memory 22 may include read-only memory and random-access memory, and may be either volatile memory or non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory may be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM) or flash memory.
  • the volatile memory can be random access memory (RAM), which acts as external cache memory.
  • FIG. 3 shows a schematic flowchart of the memory management method provided by the present application.
  • the method may be executed by the memory manager 21 in the storage system 20 shown in FIG. 2 .
  • the method may include:
  • Step S401 the memory manager 21 receives a buffer allocation instruction from the application 10 .
  • the buffer allocation instruction may include the size (or capacity) of the storage space to be allocated for the application 10, that is, the size of the buffer; the buffer allocation instruction is used to request allocation of storage space of that size for the application 10.
  • the memory release instruction may be used to request to release the buffer of the application, and the memory release instruction may include the address of the buffer in the memory.
  • the application 10 may submit a memory release instruction.
  • the memory manager 21 may release the application buffer according to the address indicated in the memory release instruction, thereby saving memory resources.
  • the application's buffer exists in the cache area until the storage space occupied by the buffer in the memory is released.
  • the data enters the buffer sequentially according to the application requirements.
  • when the amount of data in the buffer reaches the size of the buffer, or approaches it (for example, reaching 90%, 95%, or 99% of the size of the buffer), new data entering the buffer will overwrite previous data.
  • the overwriting position can be determined by the elimination mechanism of the buffer.
  • the data in the buffer may be overwritten in descending order of how long it has been stored in the buffer, that is, the oldest data is overwritten first.
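  The elimination mechanism above (oldest data overwritten first) behaves like FIFO eviction. A minimal Python sketch, with hypothetical names standing in for the patent's buffer:

```python
from collections import deque

def insert_with_eviction(buffer, capacity, item):
    """When the buffer is full, the entry that has been stored
    longest is overwritten first (FIFO eviction)."""
    if len(buffer) >= capacity:
        buffer.popleft()  # evict the oldest entry
    buffer.append(item)

buf = deque()
for block in ["b0", "b1", "b2", "b3", "b4"]:
    insert_with_eviction(buf, 3, block)
# the buffer keeps only the three most recently entered blocks
```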
  • FIG. 5 shows a schematic flowchart of a memory management method provided by an embodiment of the present application, and the method may be executed by the memory manager 21 in the storage system 20 shown in FIG. 2 .
  • the memory management method includes:
  • step S603: when the data to be read is located in the cache area but outside the application's buffer, the memory manager 21 reads the data to be read from the cache area into the application's buffer, and then executes step S605.
  • step S604: when the data to be read is located outside the cache area, the memory manager 21 reads the data to be read from the storage 23 into the application's buffer, and then executes step S605.
  • satisfying the first preset condition includes: the amount of data stored in the buffer is greater than a first preset threshold.
  • the first preset threshold can be set as required.
  • for example, the first preset threshold may be 1024 bytes, 100 MB, and so on.
  • the amount of data stored in the buffer is greater than the first preset threshold, indicating that there is more data in the buffer.
  • the amount of data stored in the buffer is less than or equal to the first preset threshold indicates that there is less data in the buffer.
  • satisfying the first preset condition may also include: all data to be read required by the application has been read into the buffer. In this case the processor does not need to wait for more data, meaning it will not spend extra time and resources waiting when it immediately starts processing the services related to the data. Therefore, when all the data to be read required by the application has been read into the buffer, the storage system can transmit the data in the application's buffer to the application.
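  As a rough Python sketch of the first preset condition (function and argument names are assumptions for illustration), the data is handed to the application either once enough has accumulated or once everything the application asked for is already in its buffer:

```python
def first_condition_met(bytes_in_buffer, threshold, all_requested_read):
    """True when data should be transmitted to the application:
    enough data has accumulated, or the full request is present."""
    return bytes_in_buffer > threshold or all_requested_read

# e.g. with a first preset threshold of 1024 bytes:
assert first_condition_met(2048, 1024, False)      # enough data accumulated
assert first_condition_met(100, 1024, True)        # all requested data is in
assert not first_condition_met(100, 1024, False)   # keep accumulating
```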
  • FIG. 6 shows a schematic diagram of a process of reading data by an application in an embodiment of the present application.
  • FIG. 6 shows the process of reading data by the application in the above three cases.
  • the data to be read is located in the cache area but outside the buffer.
  • in this case the cache has already prefetched the data to be read. Therefore, the memory manager 21 can read the data to be read from the part of the cache area outside the buffer into the buffer, and then, when the first preset condition is satisfied, send the data to the application 10.
  • the data copying time will not exceed the time taken in the related art to copy data from a mutually independent cache area into the buffer.
  • the read pointer moves forward as the copying progresses.
  • the memory manager 21 knows whether the data to be read has been copied from the storage 23 and to which location in the memory 22 it has been copied. Therefore, according to the location in the memory of the file containing the data to be read, the address of the buffer in the memory, and the length of the data to be read, it can determine which portion of the data to be read lies in the buffer, which portion lies in the cache area outside the buffer, and which portion lies in the storage.
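  The portion determination described above can be sketched as interval arithmetic in Python. This is a simplified model with illustrative names: the cached regions stand in for what the memory manager knows has already been copied from storage, and the two cases "in the cache" and "in the buffer" are collapsed into one for brevity:

```python
def split_read(read_start, read_len, cached_ranges):
    """Classify each part of a read as served from cached data or from
    storage. cached_ranges is a sorted list of non-overlapping half-open
    (start, end) intervals that are already present in memory."""
    in_cache, in_storage = [], []
    pos, end = read_start, read_start + read_len
    for c_start, c_end in cached_ranges:
        if c_end <= pos or c_start >= end:
            continue  # this cached region does not intersect the read
        if pos < c_start:
            in_storage.append((pos, c_start))  # gap before the cached region
        in_cache.append((max(pos, c_start), min(end, c_end)))
        pos = min(end, c_end)
    if pos < end:
        in_storage.append((pos, end))  # tail not yet copied from storage
    return in_cache, in_storage
```

For example, reading bytes 50–150 of a file whose first 100 bytes are already in memory yields a cached portion (50, 100) and a storage portion (100, 150).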
  • Step S802 the memory manager 21 stores the data to be written into the buffer of the application 10 .
  • Step S803 if the second preset condition is satisfied, the memory manager 21 stores the data to be written from the buffer of the application 10 into the memory 23 .
  • the data writing instruction in step S801 may include the address of the buffer in the memory and the length of the data to be written. Based on this, when the second preset condition is satisfied, the memory manager 21 can determine the target memory address according to the address of the buffer in the memory 22 and the length of the data to be written; copy the data to be written from the source memory address to the target memory address; and finally store the data at the target memory address into the storage 23.
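  A minimal Python sketch of this buffer-write step, with a bytearray standing in for the cache area (all names are illustrative assumptions): the target memory address is derived from the buffer's address and the amount already written, and the data is copied there memcpy-style; a later flush would persist it to storage.

```python
def write_via_buffer(cache, buf_addr, written_so_far, data):
    """Copy `data` into the application's buffer inside the cache area.
    target address = buffer address + bytes already written."""
    target = buf_addr + written_so_far
    cache[target:target + len(data)] = data  # memcpy-style copy
    return written_so_far + len(data)        # new write offset

cache = bytearray(64)              # stand-in for the cache area in memory
off = write_via_buffer(cache, 16, 0, b"hello")
off = write_via_buffer(cache, 16, off, b" world")
# cache[16:27] now holds b"hello world", ready to be flushed to storage
```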
  • the storage system can obtain, as synchronization information, the location in the memory of the file containing the data to be written, the address of the buffer in the memory, and the offset of the data to be written within the file, and then write the data at the target memory address into the storage based on this synchronization information.
  • the offset of the data to be written in the file will change, and the location of the subsequent data to be written in the memory will also change synchronously.
  • Fig. 8a shows a schematic diagram of interface redirection in the embodiment of the present application.
  • the original interface is still called, that is, the malloc interface is called to allocate a buffer, the free interface is called to release the buffer, the read interface is called to read data, and the write interface is called to write data.
  • the calls will be redirected to the adca_malloc, adca_free, adca_read and adca_write interfaces provided by the embodiment of the present application, respectively.
  • the upper-layer calls do not change, but the allocation and release of the storage space corresponding to the buffer is handled by the cache area in the storage system, and the data written and read by the application is likewise stored in the buffer inside the cache area.
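  The redirection can be pictured as a dispatch table: the application keeps calling the standard names while a lookup sends each call to its adca_* counterpart. In the sketch below the adca_* bodies are toy stand-ins (assumptions for illustration), since the real interfaces operate on the storage system's cache area:

```python
def adca_malloc(size):
    # Would really reserve space for the buffer inside the cache area.
    return bytearray(size)

def adca_free(buf):
    # Would really release the buffer's storage space in the cache area.
    buf.clear()

# redirection table: standard interface name -> adca counterpart
redirect = {"malloc": adca_malloc, "free": adca_free}

buf = redirect["malloc"](16)    # the upper-layer "malloc" call is redirected
assert len(buf) == 16
redirect["free"](buf)           # the upper-layer "free" call is redirected
assert len(buf) == 0
```

In a real system this interception would happen at the dynamic-linking level (e.g. symbol interposition) rather than through an explicit table; the table only makes the call mapping visible.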
  • the application calls the fd.open interface to open the file.
  • the parameters of the fd.open interface are information such as the file path and file name. Based on the file path and file name, it can be determined from which file the data to be read is read and into which file the data to be written is written; this information is used in the adca_read interface as the file handle. After that, the application can open the file to read or write data. Comparing the standard interfaces shown in Figure 8b with the adca interfaces, the parameters required by both the read interface and the adca_read interface are the file handle (fd), the buffer pointer (*buf) and the number of bytes to read (e.g., 1024).
  • the data reading instruction submitted by the application includes the location of the file in the memory, the address of the buffer in the memory, and the length of the data to be read.
  • the location of the file in the memory corresponds to the parameter "file handle"
  • the address of the buffer in the memory corresponds to the parameter "buffer pointer"
  • the length of the data to be read corresponds to the parameter "number of bytes read".
  • the parameters of write include the file handle (fd), the buffer pointer (*buf) and the number of bytes written (e.g., 1024). What the adca_write interface actually executes is a memcpy (memory copy) interface, whose parameters are the source memory address (*src), the target memory address (*dst) and the number of bytes to be copied (e.g., 1024); the function of the interface is to copy that many bytes, starting from the source memory address, to the target memory address.
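  Since adca_write reduces to a plain memory copy, the copy itself can be demonstrated with Python's ctypes.memmove, which here plays the role of C's memcpy with a source address, a target address, and a byte count:

```python
import ctypes

src = ctypes.create_string_buffer(b"hello world")   # *src
dst = ctypes.create_string_buffer(len(src.raw))     # *dst, same size
ctypes.memmove(dst, src, len(src.raw))              # copy nbytes bytes
assert dst.raw == src.raw                           # target now mirrors source
```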
  • sync.data shown in Figure 8b is used to transfer synchronization information.
  • when the application performs read and write operations, the operating system automatically records the real-time read/write positions. The adca interface intercepts this information and transfers it to the cache area through the sync.data interface (fd is the file handle, *buf is the buffer pointer, offset is the offset within the file), that is, the position of the file currently being read or written, so as to realize ordered reading and writing.
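  A Python sketch of such a synchronization record (class, field, and method names are assumptions for illustration): the in-file offset advances with every write, so each subsequent write lands at the right position and reads/writes stay ordered.

```python
class SyncInfo:
    """Sketch of the sync.data record: a file handle, a buffer
    pointer, and the current in-file offset, kept in step with
    every operation."""

    def __init__(self, fd, buf_addr):
        self.fd = fd
        self.buf = buf_addr
        self.offset = 0          # current position within the file

    def record_write(self, nbytes):
        start = self.offset      # where this write lands in the file
        self.offset += nbytes    # subsequent data shifts synchronously
        return start

info = SyncInfo(fd=3, buf_addr=0x1000)
assert info.record_write(1024) == 0      # first write starts at offset 0
assert info.record_write(512) == 1024    # next write follows in order
```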
  • the memory management method provided by the embodiment of the present application is well suited to scenarios where applications need to interact frequently with the storage system, scenarios that are sensitive to memory usage and require a small memory footprint, and scenarios that are sensitive to delay and require short read/write latency.
  • a detection plug-in may be introduced to detect whether an application is storage-intensive. If an application is detected to be storage-intensive, the memory management method provided by the embodiment of the present application can be used for its memory management, and the interfaces called by the application can be intercepted and redirected, thereby achieving the effects of saving memory space, reducing read/write delays, and saving processor computing resources.
  • FIG. 9 shows a schematic diagram of data interaction in the embodiment of the present application in a big data scenario.
  • the front end is a data analysis application program built by Spark (that is, the Spark application), and the back end is a big data storage system using a memory pool.
  • the big data storage system includes a memory, a memory pool, and a memory manager, and the memory pool includes multiple memories.
  • the memory manager allocates storage space in multiple memories as the cache area of the memory, and allocates storage space in each cache area as the buffer of the Spark application.
  • the cache area of the memory and the buffer of the Spark application share (multiplex) the same memory.
  • the memory manager is used to execute the memory management methods shown in FIG. 3 , FIG. 5 and FIG. 7 .
  • the storage system may include only one memory manager, which manages all the buffers in the memory pool; or the storage system may include multiple memory managers, one per memory, each managing the buffers in that memory.
  • the allocation of the buffer no longer occupies additional memory storage space but reuses the storage space occupied by the cache area in the memory; the same data does not need to be stored as an extra copy in the buffer allocated for the application.
  • taking FIG. 9, in which there are 6 memories in the memory pool, as an example, the storage space of 6 buffers can be saved in the memory pool. It can be seen that in a big data scenario with many memories, the embodiment of the present application can save substantial memory resources.
  • the buffer located in the cache area can obtain data directly from the memory without copying it, which saves data-copying time. When the amount of data is large or data is read and written frequently, the read/write delay is greatly reduced.
  • FIG. 10 shows a schematic structural diagram of a memory management device provided by an embodiment of the present application.
  • the device may be a memory manager 21 as shown in FIG. 4 .
  • the device 90 may include:
  • the allocation module 92 is configured to allocate a buffer for the application in the cache area of the memory; specifically, it can implement step S402 shown in FIG. 3 and other implicit steps.
  • the return module 93 is configured to return the address of the buffer in the memory; specifically, it can implement step S403 shown in FIG. 3 and other implicit steps.
  • the buffer allocation instruction includes the size of the buffer, and the allocation module is further configured to allocate, in the cache area of the memory, a buffer of that size to the application, the size of the buffer being less than the size of the cache area.
  • the device further includes: a third receiving module, configured to receive a data reading instruction from the application, and the data reading instruction is used to request to transmit the data to be read to the application.
  • a reading module, configured to: when the data to be read is located in the cache area outside the buffer, read the data to be read from the cache area into the buffer; and when the data to be read is located outside the cache area, read the data to be read from the storage into the buffer; and a transmission module, configured to send the data in the buffer to the application.
  • the allocation of the buffer does not occupy additional memory storage space, but reuses the storage space occupied by the buffer area in the memory, thereby saving memory resources.
  • the same data does not need to be stored as an extra copy in the buffer allocated for the application.
  • the buffer located in the cache area can obtain data directly from the memory without copying it, which saves data-copying time, thereby reducing read/write delays and saving the processor's computing resources.
  • FIG. 11 shows a schematic structural diagram of the computer device provided in the embodiment of the present application.
  • the computer device includes at least one processor 301 , memory 22 , bus 304 , input/output device 305 , memory manager 21 and storage 23 .
  • the processor 301 is the control center of the computer device, and may be one processing element, or may be a general term for multiple processing elements. Each of these processors can be a single-core processor (single-CPU) or a multi-core processor (multi-CPU).
  • a processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (eg, computer program instructions).
  • the memory manager 21 may be a CPU, or an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application, for example: one or more microprocessors (digital signal processor, DSP), or one or more field programmable gate arrays (FPGA).
  • the processor 301 can execute various functions of the computer device by running or executing applications stored in the memory 22 and calling data stored in the memory 22 .
  • the memory manager 21 can be used to receive a buffer allocation instruction from an application, wherein the buffer allocation instruction is used to request allocation of a buffer in the memory for the application; it can also be used to allocate a buffer to the application in the cache area of the memory and return the address of the buffer in the memory.
  • the memory manager 21 can be an independent hardware or a software device, stored in the memory, and executed by the processor 301 .
  • Memory 22 may be used to store pre-stored, intermediate and result data during sample collection, modeling and prediction.
  • Memory 22 may include read-only memory and random-access memory, and may be either volatile memory or non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory may be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM) or flash memory.
  • the volatile memory can be random access memory (RAM), which acts as external cache memory.
  • by way of example and not limitation, many forms of RAM are available, such as:
  • static random access memory (static RAM, SRAM)
  • dynamic random access memory (dynamic RAM, DRAM)
  • synchronous dynamic random access memory (synchronous DRAM, SDRAM)
  • double data rate synchronous dynamic random access memory (DDR SDRAM)
  • enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM)
  • synchlink dynamic random access memory (synchlink DRAM, SLDRAM)
  • direct rambus random access memory (direct rambus RAM, DR RAM)
  • the memory 22 can exist independently, and is connected to the processor 301 through the bus 304 .
  • the memory 22 can also be integrated with the processor 301 .
  • the storage 23 may be a device for persistently storing data, such as a magnetic disk, a hard disk, and an optical disk. In the embodiment of the present application, the storage 23 may be used to store files accessed by applications.
  • the input and output device 305 is used for data transmission with other devices.
  • the input and output device 305 may include a transmitter and a receiver.
  • the transmitter is used to send data to other devices
  • the receiver is used to receive data sent by other devices.
  • the transmitter and receiver can exist independently or be integrated together.
  • the input/output device 305 can transmit buffer allocation instructions, memory release instructions, data read instructions and data write instructions, as well as data to be written and data to be read, to and from the processor executing the application.
  • the bus 304 may be an Industry Standard Architecture (Industry Standard Architecture, ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (Extended Industry Standard Architecture, EISA) bus, etc.
  • the bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is shown in FIG. 11, but this does not mean that there is only one bus or one type of bus.
  • the device structure shown in FIG. 11 does not constitute a limitation on the computer device, which may include more or fewer components than shown, or combine some components, or use a different arrangement of components.
  • An embodiment of the present application provides a computer-readable storage medium, on which computer program instructions are stored, and the above method is implemented when the computer program instructions are executed by a processor.
  • a non-exhaustive list of computer-readable storage media includes: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital video disc (DVD), memory sticks, floppy disks, mechanically encoded devices such as punched cards or raised structures in grooves with instructions stored thereon, and any suitable combination of the foregoing.
  • Computer readable program instructions or codes described herein may be downloaded from a computer readable storage medium to a respective computing/processing device, or downloaded to an external computer or external storage device over a network, such as the Internet, local area network, wide area network, and/or wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium in the respective computing/processing device.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, via the Internet using an Internet service provider).
  • electronic circuits, such as programmable logic circuits, field-programmable gate arrays (FPGA) or programmable logic arrays (PLA), can execute computer-readable program instructions, thereby realizing various aspects of the present application.
  • these computer-readable program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that, when executed by the processor of the computer or other programmable data processing apparatus, they produce an apparatus for realizing the functions/actions specified in one or more blocks of the flowchart and/or block diagram.
  • these computer-readable program instructions may also be stored in a computer-readable storage medium, and these instructions cause computers, programmable data processing devices and/or other devices to work in a specific way, so that the computer-readable medium storing the instructions comprises an article of manufacture including instructions for implementing various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in a flowchart or block diagram may represent a module, a portion of a program segment, or an instruction, which contains one or more executable instructions for implementing the specified logical function.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented with hardware (such as circuits or an ASIC (application-specific integrated circuit)), or with a combination of hardware and software, such as firmware.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The present application relates to a memory management method and apparatus, and a computer device. The method comprises: receiving a buffer allocation instruction from an application, wherein the buffer allocation instruction is used to request the allocation of a buffer for the application in a memory; allocating a buffer for the application in a cache of the memory; and returning the address of the buffer in the memory. In the memory management method and apparatus, and the computer device provided in the present application, the memory space can be effectively saved.

Description

Memory management method and apparatus, and computer device
This application claims priority to Chinese patent application No. 202110892745.3, entitled "Memory management method, apparatus and computer device", filed with the China Patent Office on August 4, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of data storage, and in particular to a memory management method, apparatus and computer device.
Background
In recent years, with the enhancement of processor computing power and the growth of data scale, long data access times have gradually become the main factor restricting faster application operation.
When an application is running, data must first be read from the storage into the memory; the application then performs calculations using the data in the memory, writes the calculation results into the memory, and finally stores them in the storage for long-term preservation. Usually, the system divides out a block of storage space in the memory as a cache (Cache), so as to speed up data reads and writes between the storage and the application. When an application writes data, the data is not written directly to the storage but is first written into the cache. Since the cache reads and writes quickly, the application can complete the data write quickly, after which it can release resources for other tasks. The data temporarily held in the cache can be written to the storage gradually when the system is idle. When the application reads data, if the cache holds the data the application needs, it can be read in quickly and directly. The memory's read/write speed is slower than that of the processor executing the application; to reduce the impact of a short-term burst of read/write operations on the processor, the system can divide out a block of storage space in the memory as a buffer (Buffer) to temporarily store the data the application needs to use. FIG. 1 shows a schematic structural diagram of a storage system in the related art. As shown in FIG. 1, the storage system includes a memory and a storage, and the memory is divided into a cache area and a buffer. The system can first store the data the application needs in the buffer; when the amount of data in the buffer reaches a certain amount, the application reads the data in the buffer for calculation, and until then the application can perform other tasks. In the related art, by optimizing the data interaction between the application and the cache area, the data interaction process between the application and the storage is optimized, improving the read/write speed of upper-layer applications.
参见图1,在相关技术中,从存储空间上来说,缓冲区和缓存区占用了较多的内存空间,造成了较大的内存开销;同时,同一份数据在应用的缓冲区和存储器的缓存区中均复制了一份,造成了内存资源的浪费。同时,应用与缓存区之间的数据读写拆分成了缓冲区与缓存区之间的数据读写和应用与缓冲区之间的数据读写,增加了应用与存储器之间进行数据交互的时延,消耗较多的处理器运算资源。因此,如何提供一种高效的,减少内存资源浪费和读写时延的内存管理的方法成为了亟待解决的问题。Referring to FIG. 1, in the related art, in terms of storage space, the buffer and the cache area together occupy a large amount of memory space, causing considerable memory overhead; moreover, the same piece of data is copied both into the application's buffer and into the storage's cache area, wasting memory resources. At the same time, data reads and writes between the application and the cache area are split into reads and writes between the buffer and the cache area plus reads and writes between the application and the buffer, which increases the latency of data interaction between the application and storage and consumes more processor computing resources. Therefore, how to provide an efficient memory management method that reduces both memory waste and read/write latency has become an urgent problem to be solved.
发明内容Summary of the invention
有鉴于此,提出了一种内存管理方法、装置和计算机设备,能够节省内存空间。In view of this, a memory management method, an apparatus, and a computer device are proposed, which can save memory space.
第一方面,本申请的实施例提供了一种内存管理方法,所述方法包括:接收来自应用的缓冲区分配指令,所述缓冲区分配指令用于请求为所述应用在内存中分配缓冲区,所述缓冲区用于存储的数据达到一定数量后,所述应用一次性获取所述缓冲区中的数据;在所述内存的缓存区中,为所述应用分配缓冲区;返回所述缓冲区在所述内存中的地址。In a first aspect, an embodiment of the present application provides a memory management method, the method including: receiving a buffer allocation instruction from an application, where the buffer allocation instruction is used to request that a buffer be allocated in memory for the application, and after the data stored in the buffer reaches a certain quantity, the application fetches the data in the buffer in one operation; allocating the buffer for the application within the cache area of the memory; and returning the address of the buffer in the memory.
在本申请实施例中,通过在缓存区分配缓冲区作为应用的缓冲区,重复利用了缓存区的缓冲区,使得应用的缓冲区不再占用额外的内存空间,减少了对内存资源的占用,节省了内存资源。In the embodiment of the present application, by allocating a buffer within the cache area as the application's buffer, the cache area's space is reused, so that the application's buffer no longer occupies additional memory space, reducing memory usage and saving memory resources.
根据第一方面,在所述内存管理方法的第一种可能的实现方式中,所述缓冲区分配指令中包括所述缓冲区的大小,所述在所述内存的缓存区中,为所述应用分配缓冲区,包括:在所述内存的缓存区中,为所述应用分配所述大小的缓冲区,所述大小小于所述缓存区的大小。According to the first aspect, in a first possible implementation of the memory management method, the buffer allocation instruction includes the size of the buffer, and allocating the buffer for the application within the cache area of the memory includes: allocating, in the cache area of the memory, a buffer of that size for the application, where the size is smaller than the size of the cache area.
这样,通过在缓冲区分配指令中指示内存空间的大小,可以按需为应用分配缓冲区,既能够满足应用的需求,又减少了对缓存区中存储资源的占用。In this way, by indicating the size of the storage space in the buffer allocation instruction, a buffer can be allocated for the application on demand, which both meets the application's needs and reduces the occupation of storage resources in the cache area.
根据第一方面,或者第一方面的第一种可能的实现方式,在所述内存管理方法的第二种可能的实现方式中,所述方法还包括:接收来自所述应用的内存释放指令,所述内存释放指令用于请求释放所述应用的缓冲区,所述内存释放指令中包括所述缓冲区在所述内存中的地址;释放所述缓冲区。According to the first aspect, or the first possible implementation of the first aspect, in a second possible implementation of the memory management method, the method further includes: receiving a memory release instruction from the application, where the memory release instruction is used to request release of the application's buffer and includes the address of the buffer in the memory; and releasing the buffer.
这样,通过释放为应用分配的缓冲区,可以使应用不会一直占用内存资源,实现了缓冲区的动态分配,从而较大程度上节省了内存资源。In this way, by releasing the buffer allocated for the application, the application does not occupy memory resources indefinitely, and dynamic buffer allocation is achieved, saving memory resources to a large extent.
根据第一方面,或者以上第一方面的任意一种可能的实现方式,在所述内存管理方法的第三种可能的实现方式中,所述方法还包括:接收来自所述应用的数据读取指令,所述数据读取指令用于请求向所述应用传送待读取数据;当所述待读取数据位于所述缓存区之外时,将所述待读取数据从存储器读取到所述缓冲区中;并将所述缓冲区中的数据传送至所述应用。According to the first aspect, or any one of the foregoing possible implementations of the first aspect, in a third possible implementation of the memory management method, the method further includes: receiving a data read instruction from the application, where the data read instruction is used to request that data to be read be transmitted to the application; when the data to be read is outside the cache area, reading the data to be read from storage into the buffer; and transmitting the data in the buffer to the application.
这样,位于缓存区中的缓冲区可以直接与存储器进行数据交互,省去了缓冲区与缓存区之间进行数据复制,达到了减小读写时延和节省处理器运算资源的效果。In this way, the buffer located within the cache area can exchange data directly with storage, eliminating data copying between the buffer and the cache area, thereby reducing read/write latency and saving processor computing resources.
根据第一方面的第三种可能的实现方式,在所述内存管理方法的第四种可能的实现方式中,将所述缓冲区中的数据传送至所述应用,具体包括:为所述应用分配的缓冲区中存储的数据的数量大于第一预设阈值时,将所述缓冲区中的数据传送至所述应用;或者,所述应用需要的待读取数据均已经被读取到所述缓冲区中时,将所述缓冲区中的数据传送至所述应用;或者,执行所述应用的处理器处于空闲状态时,将所述缓冲区中的数据传送至所述应用。According to the third possible implementation of the first aspect, in a fourth possible implementation of the memory management method, transmitting the data in the buffer to the application specifically includes: when the amount of data stored in the buffer allocated for the application is greater than a first preset threshold, transmitting the data in the buffer to the application; or, when all the data to be read that the application requires has been read into the buffer, transmitting the data in the buffer to the application; or, when the processor executing the application is idle, transmitting the data in the buffer to the application.
这样,在缓冲区中存储的数据的数量较多时再传送给应用可以减少执行应用的处理器因等待更多的数据而消耗的时间和资源。In this way, transmitting the data to the application only after a larger amount has accumulated in the buffer reduces the time and resources the processor executing the application would otherwise spend waiting for more data.
根据第一方面,或者以上第一方面的任意一种可能的实现方式,在所述内存管理方法的第五种可能的实现方式中,所述方法还包括:接收来自所述应用的数据写入指令,所述数据写入指令用于请求存储所述应用的待写入数据;将所述待写入数据存储至所述缓冲区中;并将所述待写入数据从所述缓冲区中存储至存储器中。According to the first aspect, or any one of the foregoing possible implementations of the first aspect, in a fifth possible implementation of the memory management method, the method further includes: receiving a data write instruction from the application, where the data write instruction is used to request storage of the application's data to be written; storing the data to be written into the buffer; and storing the data to be written from the buffer into storage.
这样,位于缓存区中的缓冲区可以直接与存储器进行数据交互,省去了缓冲区与缓存区之间进行数据复制,达到了减小读写时延和节省处理器运算资源的效果。In this way, the buffer located within the cache area can exchange data directly with storage, eliminating data copying between the buffer and the cache area, thereby reducing read/write latency and saving processor computing resources.
根据第一方面的第五种可能的实现方式,在所述内存管理方法的第六种可能的实现方式中,所述缓冲区中存储的数据的数量大于第二预设阈值时,将所述待写入数据从所述缓冲区存储至存储器中;或者,所述应用需要的待写入数据均已经存储在了缓冲区时,将所述待写入数据从所述缓冲区存储至存储器中;或者,接收写入完成指令时,将所述待写入数据从所述缓冲区存储至存储器中;或者,接收写入暂停指令时,将所述待写入数据从所述缓冲区存储至存储器中。According to the fifth possible implementation of the first aspect, in a sixth possible implementation of the memory management method: when the amount of data stored in the buffer is greater than a second preset threshold, the data to be written is stored from the buffer into storage; or, when all the data to be written that the application requires has been stored in the buffer, the data to be written is stored from the buffer into storage; or, when a write-complete instruction is received, the data to be written is stored from the buffer into storage; or, when a write-pause instruction is received, the data to be written is stored from the buffer into storage.
这样,在缓冲区中存储的数据的数量较多时再将数据传送给缓存区中除缓冲区以外的区域,从而减少数据复制次数,节省处理器运算资源。In this way, data is transferred to the region of the cache area outside the buffer only after a larger amount of data has accumulated in the buffer, thereby reducing the number of data copies and saving processor computing resources.
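The write-path conditions described in this implementation can be sketched with a small simulation (the class and variable names below are illustrative assumptions, not part of the patent): data accumulates in the buffer and is stored to storage when the second preset threshold is exceeded or when a write-complete instruction arrives.

```python
class WriteBuffer:
    def __init__(self, threshold):
        self.threshold = threshold   # the "second preset threshold"
        self.pending = []            # data currently held in the buffer
        self.storage = []            # stands in for persistent storage

    def write(self, item):
        self.pending.append(item)
        if len(self.pending) > self.threshold:
            self.flush()             # threshold exceeded: store into storage

    def flush(self):
        # Also triggered by a write-complete or write-pause instruction.
        self.storage.extend(self.pending)
        self.pending.clear()

buf = WriteBuffer(threshold=2)
for block in ["a", "b", "c", "d"]:
    buf.write(block)                 # data write instructions
buf.flush()                          # write-complete instruction
print(buf.storage)                   # → ['a', 'b', 'c', 'd']
```

By batching writes this way, the number of per-item transfers out of the buffer is reduced, which is the processor-saving effect the paragraph above describes.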
第二方面,本申请的实施例提供了一种内存管理装置,所述装置包括:第一接收模块,用于接收来自应用的缓冲区分配指令,所述缓冲区分配指令用于请求为所述应用在内存中分配缓冲区;分配模块,用于在所述内存的缓存区中,为所述应用分配缓冲区;返回模块,用于返回所述缓冲区在所述内存中的地址。In a second aspect, an embodiment of the present application provides a memory management apparatus, the apparatus including: a first receiving module, configured to receive a buffer allocation instruction from an application, where the buffer allocation instruction is used to request that a buffer be allocated in memory for the application; an allocation module, configured to allocate the buffer for the application within the cache area of the memory; and a return module, configured to return the address of the buffer in the memory.
根据第二方面,在所述内存管理装置的第一种可能的实现方式中,所述缓冲区分配指令中包括所述缓冲区的大小,所述分配模块还用于:在所述内存的缓存区中,为所述应用分配所述大小的缓冲区,所述大小小于所述缓存区的大小。According to the second aspect, in a first possible implementation of the memory management apparatus, the buffer allocation instruction includes the size of the buffer, and the allocation module is further configured to: allocate, in the cache area of the memory, a buffer of that size for the application, where the size is smaller than the size of the cache area.
根据第二方面,或者第二方面的第一种可能的实现方式,在所述内存管理装置的第二种可能的实现方式中,所述装置还包括:第二接收模块,用于接收来自所述应用的内存释放指令,所述内存释放指令用于请求释放所述应用的缓冲区,所述内存释放指令中包括所述缓冲区在所述内存中的地址;释放模块,用于释放所述缓冲区。According to the second aspect, or the first possible implementation of the second aspect, in a second possible implementation of the memory management apparatus, the apparatus further includes: a second receiving module, configured to receive a memory release instruction from the application, where the memory release instruction is used to request release of the application's buffer and includes the address of the buffer in the memory; and a release module, configured to release the buffer.
根据第二方面,或者以上第二方面的任意一种可能的实现方式,在所述内存管理装置的第三种可能的实现方式中,所述装置还包括:第三接收模块,用于接收来自所述应用的数据读取指令,所述数据读取指令用于请求向所述应用传送待读取数据;读取模块,用于当所述待读取数据位于所述缓存区之外时,将所述待读取数据从存储器读取到所述缓冲区中;传送模块,用于将所述缓冲区中的数据传送至所述应用。According to the second aspect, or any one of the foregoing possible implementations of the second aspect, in a third possible implementation of the memory management apparatus, the apparatus further includes: a third receiving module, configured to receive a data read instruction from the application, where the data read instruction is used to request that data to be read be transmitted to the application; a read module, configured to read the data to be read from storage into the buffer when the data to be read is outside the cache area; and a transmission module, configured to transmit the data in the buffer to the application.
根据第二方面的第三种可能的实现方式,在所述内存管理装置的第四种可能的实现方式中,传送模块还用于:为所述应用分配的缓冲区中存储的数据的数量大于第一预设阈值时,将所述缓冲区中的数据传送至所述应用;或者,所述应用需要的待读取数据均已经被读取到所述缓冲区中时,将所述缓冲区中的数据传送至所述应用;或者,执行所述应用的处理器处于空闲状态时,将所述缓冲区中的数据传送至所述应用。According to the third possible implementation of the second aspect, in a fourth possible implementation of the memory management apparatus, the transmission module is further configured to: transmit the data in the buffer to the application when the amount of data stored in the buffer allocated for the application is greater than a first preset threshold; or transmit the data in the buffer to the application when all the data to be read that the application requires has been read into the buffer; or transmit the data in the buffer to the application when the processor executing the application is idle.
根据第二方面,或者以上第二方面的任意一种可能的实现方式,在所述内存管理装置的第五种可能的实现方式中,所述装置还包括:第四接收模块,用于接收来自所述应用的数据写入指令,所述数据写入指令用于请求存储所述应用的待写入数据;写入模块,用于将所述待写入数据存储至所述缓冲区中;存储模块,用于将所述待写入数据从所述缓冲区中存储至存储器中。According to the second aspect, or any one of the foregoing possible implementations of the second aspect, in a fifth possible implementation of the memory management apparatus, the apparatus further includes: a fourth receiving module, configured to receive a data write instruction from the application, where the data write instruction is used to request storage of the application's data to be written; a write module, configured to store the data to be written into the buffer; and a storage module, configured to store the data to be written from the buffer into storage.
根据第二方面的第五种可能的实现方式,在所述内存管理装置的第六种可能的实现方式中,存储模块还用于:所述缓冲区中存储的数据的数量大于第二预设阈值时,将所述待写入数据从所述缓冲区存储至存储器中;或者,所述应用需要的待写入数据均已经存储在了缓冲区时,将所述待写入数据从所述缓冲区存储至存储器中;或者,接收写入完成指令时,将所述待写入数据从所述缓冲区存储至存储器中;或者,接收写入暂停指令时,将所述待写入数据从所述缓冲区存储至存储器中。According to the fifth possible implementation of the second aspect, in a sixth possible implementation of the memory management apparatus, the storage module is further configured to: store the data to be written from the buffer into storage when the amount of data stored in the buffer is greater than a second preset threshold; or store the data to be written from the buffer into storage when all the data to be written that the application requires has been stored in the buffer; or store the data to be written from the buffer into storage when a write-complete instruction is received; or store the data to be written from the buffer into storage when a write-pause instruction is received.
第三方面,本申请提供了一种数据访问方法,由存储系统执行,所述存储系统包括内存、存储器和内存管理器,所述内存包括缓存区和缓冲区,所述缓冲区位于所述缓存区中,则该方法包括:接收数据读取指令,所述数据读取指令用于请求向所述应用传送待读取数据;当所述待读取数据位于所述缓存区之外时,将所述待读取数据从存储器读取到所述缓冲区中;将所述缓冲区中的数据传送至所述应用。In a third aspect, the present application provides a data access method performed by a storage system, where the storage system includes a memory, a storage, and a memory manager, the memory includes a cache area and a buffer, and the buffer is located within the cache area; the method includes: receiving a data read instruction, where the data read instruction is used to request that data to be read be transmitted to the application; when the data to be read is outside the cache area, reading the data to be read from the storage into the buffer; and transmitting the data in the buffer to the application.
第四方面,本申请的实施例提供了一种计算机设备,该计算机设备包括内存、存储器和内存管理器,其中,内存管理器可以执行上述第一方面或者第一方面的多种可能的实现方式中的一种或几种的内存管理方法。In a fourth aspect, an embodiment of the present application provides a computer device, where the computer device includes a memory, a storage, and a memory manager, and the memory manager can perform the memory management method of the first aspect or of one or more of the possible implementations of the first aspect.
第五方面,本申请的实施例提供了一种计算机程序产品,包括计算机可读代码,或者承载有计算机可读代码的非易失性计算机可读存储介质,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行上述第一方面或者第一方面的多种可能的实现方式中的一种或几种的内存管理方法。In a fifth aspect, an embodiment of the present application provides a computer program product, including computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device performs the memory management method of the first aspect or of one or more of the possible implementations of the first aspect.
第六方面,本申请的实施例提供了一种计算机存储介质,其上存储有计算机程序指令,所述计算机程序指令被处理器执行时实现权利要求1至7中任意一项所述的方法。In a sixth aspect, the embodiments of the present application provide a computer storage medium on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the method described in any one of claims 1 to 7 is implemented.
本申请的这些和其他方面在以下(多个)实施例的描述中会更加简明易懂。These and other aspects of the present application will be made more apparent in the following description of the embodiment(s).
附图说明Description of drawings
包含在说明书中并且构成说明书的一部分的附图与说明书一起示出了本申请的示例性实施例、特征和方面,并且用于解释本申请的原理。The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments, features, and aspects of the application and, together with the specification, serve to explain the principles of the application.
图1示出相关技术中存储系统的结构示意图;FIG. 1 shows a schematic structural diagram of a storage system in the related art;
图2示出本申请实施例中存储系统的结构示意图;FIG. 2 shows a schematic structural diagram of a storage system in an embodiment of the present application;
图3示出本申请提供的内存管理方法的流程示意图;FIG. 3 shows a schematic flow chart of the memory management method provided by the present application;
图4示出本申请实施例中缓冲区的分配示意图;Fig. 4 shows the allocation diagram of the buffer zone in the embodiment of the present application;
图5示出本申请实施例提供的内存管理方法的流程示意图;FIG. 5 shows a schematic flow diagram of a memory management method provided by an embodiment of the present application;
图6示出本申请实施例中应用读取数据的过程的示意图;FIG. 6 shows a schematic diagram of the process of reading data by an application in the embodiment of the present application;
图7示出本申请实施例提供的内存管理方法的流程示意图;FIG. 7 shows a schematic flowchart of a memory management method provided by an embodiment of the present application;
图8a示出本申请实施例的接口重定向示意图;FIG. 8a shows a schematic diagram of interface redirection in an embodiment of the present application;
图8b示出相关技术以及本申请实施例中的接口实现图;FIG. 8b shows a related technology and an interface implementation diagram in the embodiment of the present application;
图9示出大数据场景下本申请实施例中进行数据交互的示意图;FIG. 9 shows a schematic diagram of data interaction in the embodiment of the present application in a big data scenario;
图10示出本申请实施例提供的内存管理装置的结构示意图;FIG. 10 shows a schematic structural diagram of a memory management device provided by an embodiment of the present application;
图11示出本申请实施例提供的计算机设备的结构示意图。FIG. 11 shows a schematic structural diagram of a computer device provided by an embodiment of the present application.
具体实施方式Detailed description of embodiments
以下将参考附图详细说明本申请的各种示例性实施例、特征和方面。附图中相同的附图标记表示功能相同或相似的元件。尽管在附图中示出了实施例的各种方面,但是除非特别指出,不必按比例绘制附图。Various exemplary embodiments, features, and aspects of the present application will be described in detail below with reference to the accompanying drawings. The same reference numbers in the figures indicate functionally identical or similar elements. While various aspects of the embodiments are shown in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
在这里专用的词“示例性”意为“用作例子、实施例或说明性”。这里作为“示例性”所说明的任何实施例不必解释为优于或好于其它实施例。The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as superior or better than other embodiments.
另外,为了更好的说明本申请,在下文的具体实施方式中给出了众多的具体细节。本领域技术人员应当理解,没有某些具体细节,本申请同样可以实施。在一些实例中,对于本领域技术人员熟知的方法、手段、元件和电路未作详细描述,以便于凸显本申请的主旨。In addition, in order to better illustrate the present application, numerous specific details are given in the following specific implementation manners. It will be understood by those skilled in the art that the present application may be practiced without certain of the specific details. In some instances, methods, means, components and circuits well known to those skilled in the art have not been described in detail in order to highlight the gist of the present application.
图2示出本申请实施例中存储系统的结构示意图。如图2所示,存储系统20包括内存管理器21、内存22和存储器23。其中,内存22可以用于临时存储数据,存储器23可以用于持久化存储数据。举例来说,内存22可以用于存储在样本采集、建模和预测的过程中的预存数据、中间数据和结果数据。内存22可以包括只读存储器和随机存取存储器,还可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(random access memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(DRAM)、同步动态随机存取存储器(synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(double data rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(direct rambus RAM,DR RAM)。存储器23可以为磁盘、硬盘和光盘等。存储器23的读写速度较慢,内存22的读写速度较快。内存管理器21用于管理内存的存储空间,在本申请实施例中,内存管理器21在内存22中开辟出了一块存储空间作为缓存区,又在缓存区中开辟出了一块存储空间作为应用10的缓冲区。具体的,内存管理器21可以在内存22预留一块固有的存储空间作为缓存区,以及在应用10运行时在缓存区中动态分配一块存储空间作为缓冲区。可以理解的是,缓冲区的大小小于缓存区的大小。参见图2可知,在本申请实施例中,缓冲区和缓存区复用了内存22的一部分存储空间,减少了对内存空间的占用,节省了内存资源。FIG. 2 shows a schematic structural diagram of a storage system in an embodiment of the present application. As shown in FIG. 2, the storage system 20 includes a memory manager 21, a memory 22, and a storage 23. The memory 22 may be used to store data temporarily, while the storage 23 may be used to store data persistently. For example, the memory 22 may store pre-stored data, intermediate data, and result data produced during sample collection, modeling, and prediction. The memory 22 may include read-only memory and random access memory, may be volatile or non-volatile memory, or may include both. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (programmable ROM, PROM), an erasable programmable read-only memory (erasable PROM, EPROM), an electrically erasable programmable read-only memory (electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (random access memory, RAM), which is used as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as static random access memory (static RAM, SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (synchlink DRAM, SLDRAM), and direct rambus random access memory (direct rambus RAM, DR RAM). The storage 23 may be a magnetic disk, a hard disk, an optical disc, or the like. The storage 23 is relatively slow to read and write, while the memory 22 is relatively fast. The memory manager 21 is used to manage the storage space of the memory. In the embodiment of the present application, the memory manager 21 sets aside a block of storage space in the memory 22 as a cache area, and further sets aside a block of storage space within that cache area as the buffer of the application 10. Specifically, the memory manager 21 may reserve a fixed block of storage space in the memory 22 as the cache area, and dynamically allocate a block of storage space within the cache area as the buffer while the application 10 is running. It can be understood that the size of the buffer is smaller than the size of the cache area. As can be seen from FIG. 2, in the embodiment of the present application, the buffer and the cache area share a portion of the storage space of the memory 22, which reduces the occupation of memory space and saves memory resources.
图3示出本申请提供的内存管理方法的流程示意图。该方法可以由图2所示的存储系统20中的内存管理器21执行。如图3所示,该方法可以包括:FIG. 3 shows a schematic flowchart of the memory management method provided by the present application. The method may be executed by the memory manager 21 in the storage system 20 shown in FIG. 2 . As shown in Figure 3, the method may include:
步骤S401,内存管理器21接收来自应用10的缓冲区分配指令。Step S401 , the memory manager 21 receives a buffer allocation instruction from the application 10 .
其中,缓冲区分配指令用于请求为应用10在内存中分配存储空间。应用10可以在开始运行时,提交缓冲区分配指令。内存管理器21接收到缓冲区分配指令后,可以为应用10在内存22中分配存储空间,该存储空间即可以作为应用10的缓冲区(简称:缓冲区)。应用10可以将需要交互的数据(包括待写入数据和待读取数据),存放在该缓冲区中,以平衡执行应用10的处理器与内存22之间的速度。The buffer allocation instruction is used to request that storage space in memory be allocated for the application 10. The application 10 may submit the buffer allocation instruction when it starts running. After receiving the buffer allocation instruction, the memory manager 21 may allocate storage space in the memory 22 for the application 10, and this storage space serves as the buffer of the application 10 (the buffer for short). The application 10 may place the data it needs to exchange (including data to be written and data to be read) in this buffer, so as to balance the speed difference between the processor executing the application 10 and the memory 22.
在一种可能的实现方式中,缓冲区分配指令中可以包括为应用10分配的存储空间的大小(或者称为容量),即缓冲区的大小,缓冲区分配指令用于请求为应用10在内存空间中分配该大小的存储空间。In a possible implementation, the buffer allocation instruction may include the size (also called the capacity) of the storage space to be allocated for the application 10, that is, the size of the buffer, and the buffer allocation instruction is used to request that storage space of that size be allocated in the memory for the application 10.
步骤S402,内存管理器21在内存22的缓存区中为应用10分配存储空间作为应用10的缓冲区。Step S402: the memory manager 21 allocates storage space for the application 10 in the cache area of the memory 22 as the buffer of the application 10.
在本申请实施例中,响应于缓冲区分配指令,内存管理器21可以在内存22的缓存区中为应用10分配存储空间作为应用10的缓冲区。在缓冲区分配指令中包括为应用分配的存储空间的大小时,内存管理器21可以在内存22的缓存区中,为应用10分配缓冲区分配指令指示的大小的存储空间。图4示出本申请实施例中缓冲区的分配示意图。如图4所示,内存管理器21响应于缓冲区分配指令,在缓存区中分配了存储空间作为应用10的缓冲区。In the embodiment of the present application, in response to the buffer allocation instruction, the memory manager 21 may allocate storage space for the application 10 within the cache area of the memory 22 as the buffer of the application 10. When the buffer allocation instruction includes the size of the storage space to be allocated for the application, the memory manager 21 may allocate, in the cache area of the memory 22, storage space of the size indicated by the buffer allocation instruction for the application 10. FIG. 4 shows a schematic diagram of buffer allocation in the embodiment of the present application. As shown in FIG. 4, in response to the buffer allocation instruction, the memory manager 21 allocates storage space in the cache area as the buffer of the application 10.
可以理解的是,缓存区中除了缓冲区以外的区域,仍可存储从存储器23中读取的数据以及存储需要写入存储器23中的数据。另外,缓存区中除缓冲区以外的区域,有其固有的淘汰机制,例如,缓存数据的数量超过了这一区域的最大容量时,从这一区域中删除长时间不用的数据或者存储时间较长的数据等。本申请实施例对淘汰机制不做限制。It can be understood that the region of the cache area outside the buffer can still store data read from the storage 23 and data waiting to be written into the storage 23. In addition, the region of the cache area outside the buffer has its own eviction mechanism; for example, when the amount of cached data exceeds the maximum capacity of this region, data that has not been used for a long time or that has been stored for a long time is deleted from it. The embodiment of the present application places no limitation on the eviction mechanism.
步骤S403,内存管理器21向应用10返回应用的缓冲区在内存22中的地址。Step S403 , the memory manager 21 returns the address of the buffer of the application in the memory 22 to the application 10 .
内存管理器21向应用10返回缓冲区在内存22中的地址后,应用10在需要读取数据时,可以从该地址读取数据,在需要写入数据时,可以向该地址写入数据。在一个示例中,内存管理器21可以返回缓冲区对应的指针,该指针指向为缓冲区在内存22中占用的存储空间的首地址。After the memory manager 21 returns the address of the buffer in the memory 22 to the application 10, the application 10 can read data from this address when it needs to read data, and can write data to this address when it needs to write data. In an example, the memory manager 21 may return a pointer corresponding to the buffer, and the pointer points to the first address of the storage space occupied by the buffer in the memory 22 .
在本申请实施例中,通过在缓存区分配存储空间作为应用的缓冲区,重复利用了缓存区在内存中占用的存储空间,使得应用的缓冲区不再占用额外的内存空间,减少了对内存资源的占用,节省了内存资源。In the embodiment of the present application, by allocating storage space within the cache area as the application's buffer, the storage space the cache area occupies in memory is reused, so the application's buffer no longer takes up additional memory space, reducing memory usage and saving memory resources.
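As an illustrative sketch only (the names and the bump-pointer strategy below are assumptions, not the patent's actual implementation), steps S401 to S403 amount to carving the application's buffer out of the already-reserved cache area and returning its starting address:

```python
class CacheRegion:
    """Simulates the cache area reserved in memory 22; buffers are carved out of it."""

    def __init__(self, size):
        self.size = size       # total size of the cache area
        self.next_free = 0     # bump-pointer allocation, purely illustrative

    def alloc_buffer(self, size):
        # The buffer must be smaller than the cache area (see the first aspect).
        if size >= self.size or self.next_free + size > self.size:
            raise MemoryError("cache area exhausted")
        addr = self.next_free  # offset of the buffer within the cache area
        self.next_free += size
        return addr            # step S403: return the buffer's address

cache = CacheRegion(size=1024)
addr = cache.alloc_buffer(256)     # buffer allocation instruction carrying a size
print(addr)                        # → 0
```

Because the buffer reuses space inside the cache area, no memory outside the cache area is consumed, which is the saving the paragraph above describes.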
应用的缓冲区是动态分配的。在应用结束运行时,内存管理器21可以收回为应用10分配的缓冲区,以进一步节省内存资源。在一种可能的实现方式中,本申请实施例提供的内存管理方法还包括:接收来自应用的内存释放指令,释放缓冲区在内存中占用的存储空间。Application buffers are allocated dynamically. When the application finishes running, the memory manager 21 can reclaim the buffer allocated for the application 10 to further save memory resources. In a possible implementation manner, the memory management method provided by the embodiment of the present application further includes: receiving a memory release instruction from an application, and releasing the storage space occupied by the buffer in the memory.
其中,内存释放指令可以用于请求释放应用的缓冲区,内存释放指令中可以包括缓冲区在内存中的地址。在应用运行结束时,应用10可以提交内存释放指令。内存管理器21接收到内存释放指令后,可以根据内存释放指令中指示的地址,释放应用的缓冲区,从而节省内存资源。Wherein, the memory release instruction may be used to request to release the buffer of the application, and the memory release instruction may include the address of the buffer in the memory. At the end of the running of the application, the application 10 may submit a memory release instruction. After receiving the memory release instruction, the memory manager 21 may release the application buffer according to the address indicated in the memory release instruction, thereby saving memory resources.
需要说明的是,在将缓存区中缓冲区占用的存储空间释放后,可以选择清除该存储空间内存储的数据以便存放其他数据,也可以选择保留该存储空间内存储的数据以便应用下次使用或者其他应用使用,对此本申请实施例不做限制。It should be noted that, after the storage space occupied by the buffer in the cache area is released, the data stored in that space may be cleared so that other data can be stored there, or it may be retained for the next run of the application or for use by other applications; the embodiment of the present application places no limitation on this.
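The allocate/release cycle can be sketched as follows (hypothetical names; a real memory manager would use a more elaborate allocator). Releasing by address returns the buffer's space to the cache area without necessarily clearing its contents:

```python
class CachePool:
    """Simulates dynamic buffer allocation and release within the cache area."""

    def __init__(self, size):
        self.free = [(0, size)]    # (offset, length) runs still available
        self.allocated = {}        # buffer address -> buffer size

    def alloc(self, size):
        for i, (off, length) in enumerate(self.free):
            if length >= size:
                self.free[i] = (off + size, length - size)
                self.allocated[off] = size
                return off         # address handed back to the application
        raise MemoryError("cache area exhausted")

    def release(self, addr):
        # Memory release instruction: the address identifies the buffer.
        size = self.allocated.pop(addr)
        self.free.append((addr, size))   # space is reusable; data may be kept or cleared

pool = CachePool(1024)
a = pool.alloc(128)        # application starts running
pool.release(a)            # application finishes running
print(pool.allocated)      # → {}
```

After `release`, the run of space is back on the free list, so the application does not hold memory resources indefinitely.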
在为应用分配缓冲区之后,在释放缓冲区在内存中占用的存储空间之前,应用的缓冲区一直存在于缓存区中。在应用运行过程中,数据按照应用需求依次进入缓冲区中。当缓冲区中数据的数量达到缓冲区的大小时,或者接近缓冲区的大小(例如,达到缓冲区的大小的90%、95%或者99%等)时,新进入缓冲区的数据会覆盖之前进入缓冲区的数据,覆盖的位置可以由缓冲区的淘汰机制决定。在一个示例中,缓冲区中可以按照在缓冲区中存储的时长由大到小的顺序进行数据覆盖。After the buffer is allocated for the application and before the storage space it occupies in memory is released, the application's buffer remains in the cache area. While the application runs, data enters the buffer in sequence according to the application's needs. When the amount of data in the buffer reaches the size of the buffer, or approaches it (for example, reaches 90%, 95%, or 99% of the buffer's size), newly arriving data overwrites data that entered the buffer earlier, and the position to overwrite may be decided by the buffer's eviction mechanism. In one example, data in the buffer may be overwritten in descending order of how long it has been stored in the buffer.
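The oldest-first overwrite in the example above behaves like a fixed-capacity FIFO; a minimal simulation (illustrative only, not the patent's eviction code):

```python
from collections import deque

# A buffer holding at most 3 items: once full, each newly arriving item
# overwrites the item that has been stored in the buffer the longest.
buffer = deque(maxlen=3)
for item in [1, 2, 3, 4, 5]:
    buffer.append(item)    # 4 overwrites 1, then 5 overwrites 2
print(list(buffer))        # → [3, 4, 5]
```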
下面对本申请实施例中的数据交互过程中的数据读取过程进行说明。图5示出本申请实施例提供的内存管理方法的流程示意图,该方法可以由图2所示的存储系统20中的内存管理器21执行。如图5所示,该内存管理方法包括:The following describes the data reading process in the data interaction process in the embodiment of the present application. FIG. 5 shows a schematic flowchart of a memory management method provided by an embodiment of the present application, and the method may be executed by the memory manager 21 in the storage system 20 shown in FIG. 2 . As shown in Figure 5, the memory management method includes:
步骤S601,内存管理器21接收来自应用10的数据读取指令。Step S601 , the memory manager 21 receives a data read instruction from the application 10 .
在步骤S601中,待读取数据表示应用10需要从存储系统20获取的数据。应用10在需要读取数据时,可以提交数据读取指令。该数据读取指令用于请求向应用传送待读取数据。内存管理器21接收到数据读取指令时,可以先确定待读取数据是位于应用的缓冲区中,位于缓存区中除缓冲区以外的区域中,还是位于缓存区之外。再根据待读取数据的位置进行后续处理。In step S601, the data to be read is the data that the application 10 needs to obtain from the storage system 20. When the application 10 needs to read data, it may submit a data read instruction, which is used to request that the data to be read be transmitted to the application. On receiving the data read instruction, the memory manager 21 may first determine whether the data to be read is located in the application's buffer, in the region of the cache area outside the buffer, or outside the cache area, and then proceed according to the location of the data to be read.
步骤S602,当待读取数据位于缓冲区中时,内存管理器21直接执行步骤S605。In step S602, when the data to be read is located in the buffer, the memory manager 21 directly executes step S605.
步骤S603,当待读取数据位于缓存区中除应用的缓冲区之外的区域时,内存管理器21 将待读取数据从缓存区读取到应用的缓冲区中,再执行步骤S605。In step S603, when the data to be read is located in the buffer area other than the buffer area of the application, the memory manager 21 reads the data to be read from the buffer area into the buffer area of the application, and then executes step S605.
步骤S604,当待读取数据位于缓存区之外时,内存管理器21将待读取数据从存储器23读取到应用的缓冲区中,再执行步骤S605。In step S604, when the data to be read is outside the buffer area, the memory manager 21 reads the data to be read from the memory 23 into the application buffer, and then executes step S605.
步骤S605,在满足第一预设条件的情况下,内存管理器21将应用的缓冲区中的数据传送至应用10。Step S605 , if the first preset condition is satisfied, the memory manager 21 transmits the data in the buffer of the application to the application 10 .
In step S605, the first preset condition is used to decide whether to transmit the data in the buffer to the application. The first preset condition may be set in advance as required; the guiding principle for setting it is to save time and resources.

In one example, satisfying the first preset condition includes: the amount of data stored in the buffer is greater than a first preset threshold. The first preset threshold may be set as required; for example, it may be 1,024 bytes, 100 MB, and so on. An amount of data greater than the first preset threshold indicates that the buffer holds relatively much data; an amount less than or equal to the threshold indicates that it holds relatively little. Considering that memory is slower than the processor executing the application, if the buffer holds little data and that data is transmitted to the application so that the processor immediately begins processing the related tasks, the processor may consume considerable time and resources waiting for more data. To save resources and time, when the buffer holds little data it is temporarily withheld from the application and the processor can handle other tasks; once the data in the buffer reaches the first preset threshold, it is transmitted to the application.

In another example, satisfying the first preset condition may include: all data to be read that the application requires has already been read into the buffer. In this case the processor does not need to wait for more data; that is, when it immediately begins processing the services related to this data, it does not consume extra time and resources waiting. Therefore, when all the data to be read required by the application has been read into the buffer, the storage system may transmit the data in the application's buffer to the application.

In yet another example, satisfying the first preset condition may include: the processor executing the application is idle. An idle processor has no other tasks to process, so starting to process the tasks related to the data to be read does not affect other tasks. Therefore, when the processor executing the application is idle, the storage system may transmit the data in the application's buffer to the application.
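The three example forms of the first preset condition can be folded into a single delivery check that fires when any of them holds. This is a hedged sketch; every identifier is an assumption made for illustration:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    size_t bytes_buffered;   /* amount of data currently in the buffer      */
    size_t first_threshold;  /* first preset threshold, e.g. 1024 bytes     */
    bool   all_data_read;    /* all requested data is already in the buffer */
    bool   processor_idle;   /* the processor running the app is idle       */
} read_state;

/* Deliver the buffered data to the application when any of the three
 * example conditions for the first preset condition is satisfied. */
static bool should_deliver(const read_state *s) {
    return s->bytes_buffered > s->first_threshold
        || s->all_data_read
        || s->processor_idle;
}
```

With little data buffered and no other condition met, delivery is withheld; crossing the threshold, finishing the read, or an idle processor each triggers delivery on its own.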
FIG. 6 shows a schematic diagram of the process by which an application reads data in an embodiment of this application, covering the three cases described above.

As shown in FIG. 6, in the first case (corresponding to step S602), the data to be read is located in the buffer. When the first preset condition is satisfied, the memory manager 21 may transmit the data to be read directly from the buffer to the application. In this case no data needs to be copied from the storage 23, which reduces read/write latency and saves processor computing resources.

As shown in FIG. 6, in the second case (corresponding to step S603), the data to be read is located in a region of the cache area other than the buffer; that is, the cache area has already prefetched the data to be read. The memory manager 21 may therefore read the data to be read from that region of the cache area into the buffer and then, when the first preset condition is satisfied, transmit it to the application 10. Although this requires copying data from the region of the cache area outside the buffer into the buffer, the copy takes no longer than the copy from a separate, independent cache area to a buffer in the related art.

As shown in FIG. 6, in the third case (corresponding to step S604), the data to be read is located outside the cache area, that is, in the storage 23. The memory manager 21 may read the data to be read directly from the storage 23 into the application's buffer and then, when the first preset condition is satisfied, transmit it to the application 10. Because the buffer is located within the cache area, the data to be read can be copied directly from the storage 23 into the buffer without the additional copy from the cache area to the buffer, which reduces read/write latency and saves processor computing resources.
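The three cases can be summarized by how many copy operations precede delivery to the application. A minimal illustrative sketch, with the enum and function names assumed rather than taken from the patent:

```c
#include <assert.h>

/* Where the data to be read currently resides (the three cases of FIG. 6). */
typedef enum {
    LOC_IN_BUFFER,      /* case 1 (S602): already in the application's buffer */
    LOC_IN_CACHE_AREA,  /* case 2 (S603): in the cache area outside the buffer */
    LOC_ON_STORAGE      /* case 3 (S604): outside the cache area, on storage  */
} data_location;

/* Number of copy operations performed before the data can be delivered. */
static int copies_before_delivery(data_location loc) {
    switch (loc) {
    case LOC_IN_BUFFER:     return 0; /* deliver directly                      */
    case LOC_IN_CACHE_AREA: return 1; /* one copy: cache area -> buffer        */
    case LOC_ON_STORAGE:    return 1; /* one copy: storage -> buffer, no extra
                                         cache-to-buffer hop as in related art */
    }
    return -1;
}
```

The point of the design is visible in the third case: because the buffer lies inside the cache area, a storage read costs one copy instead of two.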
In a possible implementation, the data read instruction in step S601 may include the location, in the storage, of the file containing the data to be read, the address of the buffer in memory, and the length of the data to be read. From these, the memory manager 21 can determine which part of the data to be read is located in the buffer, which part is located in the region of the cache area other than the buffer, and which part is located in the storage, and then process each part according to its situation.

While data is being copied from the storage into the cache area (including both the buffer and the region outside the buffer), the read pointer moves as copying progresses. The memory manager 21 therefore knows whether the data to be read has been copied from the storage 23 and, if so, to which location in the memory 22 it has been copied. Accordingly, given the location in the storage of the file containing the data to be read, the address of the buffer in memory, and the length of the data to be read, the parts of the data located in the buffer, in the region of the cache area outside the buffer, and in the storage can be determined.
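One way to picture how the advancing read pointer splits a request: if the portion of the file already copied into memory is the prefix `[0, copied)`, then a request for `[req_off, req_off + req_len)` divides into an in-memory part and an on-storage part. All names here are assumptions for this sketch, which models only the memory-versus-storage split:

```c
#include <assert.h>
#include <stddef.h>

typedef struct {
    size_t in_memory;   /* bytes already in the cache area */
    size_t on_storage;  /* bytes still to be read from storage */
} read_split;

/* Split a requested byte range against the copy pointer `copied`
 * (file offsets below `copied` have already reached memory). */
static read_split split_request(size_t req_off, size_t req_len, size_t copied) {
    read_split r;
    if (req_off >= copied)
        r.in_memory = 0;                       /* nothing cached yet       */
    else if (req_off + req_len <= copied)
        r.in_memory = req_len;                 /* fully cached             */
    else
        r.in_memory = copied - req_off;        /* partially cached prefix  */
    r.on_storage = req_len - r.in_memory;
    return r;
}
```

For example, a 100-byte request starting at offset 0 with 40 bytes copied splits into 40 bytes served from memory and 60 from storage.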
In all three cases involved in the embodiments of this application, the buffer is located inside the cache area. This reuses storage space in memory, saving memory resources, while reducing the number of data copies or improving copy efficiency, thereby reducing read/write latency and saving processor computing resources.
The following describes the data writing part of the data interaction process in the embodiments of this application. FIG. 7 shows a schematic flowchart of a memory management method provided by an embodiment of this application; the method may be executed by the memory manager 21 in the storage system 20 shown in FIG. 2. As shown in FIG. 7, the memory management method includes:

Step S801: the memory manager 21 receives a data write instruction from the application 10.

Step S802: the memory manager 21 stores the data to be written into the buffer of the application 10.

Step S803: when a second preset condition is satisfied, the memory manager 21 stores the data to be written from the buffer of the application 10 into the storage 23.

In step S801, the data to be written is data that the application needs to write to the storage for persistent preservation. When the application 10 needs to write data, it may submit a data write instruction, which requests storage of the application's data to be written. On receiving the data write instruction, the memory manager 21 may store the data indicated by the instruction in the buffer of the application 10.
In a possible implementation, the data write instruction in step S801 may include the address of the buffer in memory and the length of the data to be written. On this basis, when the second preset condition is satisfied (step S803), the memory manager 21 may determine a target memory address from the buffer's address in the memory 22 and the length of the data to be written; then copy the data to be written from the buffer's memory address to the target memory address; and finally store the data at the target memory address into the storage 23.
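A minimal model of this staging step, in which bytes are copied from the buffer's address (the source) to a target memory address elsewhere in the cache area before the later flush to storage. The fixed-size regions and the helper name are assumptions; only `memcpy` itself comes from the document:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

enum { REGION = 64 };
static unsigned char app_buffer[REGION];  /* the application's buffer        */
static unsigned char cache_area[REGION];  /* region of the cache area outside
                                             the buffer (flush source)       */

/* Copy `len` staged bytes from the buffer to the target memory address,
 * computed here as cache area base + current write offset. The eventual
 * write of this region to storage is not modeled. */
static void stage_for_flush(size_t offset, size_t len) {
    memcpy(cache_area + offset, app_buffer, len);
}
```

After staging, the bytes at the target memory address match the buffer contents and can be written to storage at a later, convenient time.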
In step S803, the second preset condition is used to decide whether to transfer the data in the buffer to the storage. The second preset condition may be set as required; the guiding principle for setting it is that data in the buffer must not be overwritten before it has been transferred to the storage.
In one example, satisfying the second preset condition includes: the amount of data stored in the buffer is greater than a second preset threshold. The second preset threshold may be set as required; for example, it may be 1,024 bytes, 100 MB, and so on. An amount of data greater than the second preset threshold indicates that a relatively large amount of data to be written has accumulated in the buffer, at which point the data in the buffer can be transferred to the storage. An amount less than or equal to the threshold indicates that the buffer holds relatively little data to be written; transferring it at this point might require many separate transfers before all the data to be written reaches the storage, increasing the number of data copies and thereby increasing write latency and wasting processor computing resources.

In another example, satisfying the second preset condition may include: all data that the application needs to write has already been stored in the buffer. In this case the application will not transmit any more data to its buffer, so the storage system need not wait for more data and can immediately transfer the data in the buffer to the storage.

In yet another example, satisfying the second preset condition may include: a write-complete instruction or a write-pause instruction is received from the application. Either instruction indicates that the application will not transmit more data to its buffer, so the storage system need not wait for more data and can immediately transfer the data in the buffer to the storage.
Considering that the application needs to finish writing its data as quickly as possible so that it can move on to other tasks, and that copying data within memory takes far less time than copying data from memory to the storage, in the embodiments of this application the data in the buffer may first be copied to the region of the cache area outside the buffer, giving the application a fast write; the data in the cache area is then written to the storage at a slower pace.

While storing the data at the target memory address into the storage, the storage system may obtain, as synchronization information, the location in the storage of the file containing the data to be written, the address of the buffer in memory, and the offset of the data to be written within the file, and then write the data at the target memory address into the storage based on this synchronization information. As the data to be written is written, its offset within the file changes, and the location in the storage at which subsequent data is written changes correspondingly.

In the embodiments of this application, the data interaction process is simplified: the application no longer exchanges data with the cache area indirectly through a separate buffer but exchanges data with the cache area directly, which simplifies resource usage and saves time and resources.
In the embodiments of this application, the processing the memory manager performs after receiving a buffer allocation instruction, a memory release instruction, a data read instruction, or a data write instruction differs completely from that in the related art, and changes to an application programming interface (API) would affect upper-layer applications. Therefore, in the embodiments of this application, the buffer allocation, memory release, data read, and data write instructions submitted by the application need not be changed; that is, the interfaces the application calls remain the same. Instead, the memory manager intercepts these interfaces and redirects the application's calls to the interfaces provided by the embodiments of this application, so that the storage system executes the methods provided herein. This guarantees that upper-layer calls remain unchanged while saving memory resources and processor computing resources and reducing read/write latency in a simple, efficient way.

The following four specific examples describe the processes of memory allocation, memory release, data reading, and data writing:

In the related art, an application may submit buffer allocation, memory release, data read, and data write instructions by calling the malloc, free, read, and write interfaces, respectively. After intercepting these interfaces, the memory manager redirects them to the application direct cache access (adca) interfaces provided by the embodiments of this application, namely the adca_malloc, adca_free, adca_read, and adca_write interfaces.

In the embodiments of this application, the memory manager intercepts the malloc interface and redirects it to the adca_malloc interface, and intercepts the free interface and redirects it to the adca_free interface. Calling the adca_malloc interface allocates storage space in the memory's cache area to serve as the application's buffer (i.e., steps S401 to S403), and calling the adca_free interface releases the storage space the buffer occupies in memory. Through these calls the application allocates and releases its buffer directly from the cache area, without opening up additional space in memory, which saves memory resources.
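A toy model of allocating the buffer out of storage space the cache area already occupies, rather than asking the operating system for new memory: a trivial bump allocator over a static region. The allocator design and every identifier here are assumptions made for illustration, not the patent's adca_malloc implementation:

```c
#include <assert.h>
#include <stddef.h>

enum { CACHE_AREA_SIZE = 1024 };
static unsigned char g_cache_area[CACHE_AREA_SIZE]; /* stands in for the cache area */
static size_t g_used = 0;                           /* bytes handed out as buffers  */

/* Carve a buffer of `total_len` bytes out of the existing cache area;
 * no extra memory is requested from the OS. */
static void *adca_malloc_sketch(size_t total_len) {
    if (total_len == 0 || total_len > CACHE_AREA_SIZE - g_used)
        return NULL;  /* the buffer must fit inside the cache area */
    void *p = g_cache_area + g_used;
    g_used += total_len;
    return p;
}

/* Trivially release all buffers back to the cache area. */
static void adca_free_sketch(void) {
    g_used = 0;
}
```

The point the sketch illustrates is the reuse: the returned buffer address lies inside the cache area itself, so allocation consumes no memory beyond what the cache area already holds.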
In the embodiments of this application, the memory manager intercepts the read interface and redirects it to the adca_read interface, and intercepts the write interface and redirects it to the adca_write interface. Data reading is implemented by calling the adca_read interface (i.e., steps S601 to S605), and data writing by calling the adca_write interface (i.e., steps S801 to S803). Because the buffer is located in the cache area, the application can obtain the buffered data directly from the cache area, without first copying the data from the cache area into a separate buffer and then fetching it from that buffer, saving the time and processor resources consumed by data copying.

FIG. 8a shows a schematic diagram of interface redirection in an embodiment of this application. As shown in FIG. 8a, from the application's perspective the original interfaces are still called: the malloc interface to allocate a buffer, the free interface to release it, the read interface to read data, and the write interface to write data. When a call enters the storage system provided by the embodiments of this application, however, it is redirected to the corresponding adca_malloc, adca_free, adca_read, or adca_write interface provided herein. Thus the upper-layer calls are unchanged, while the allocation and release of the buffer's storage space are handled by the cache area in the storage system, and the application's data to be written and data to be read are likewise stored by the buffer within the cache area.

FIG. 8b shows an interface implementation diagram for the related art and for an embodiment of this application. As shown in FIG. 8b, the application can still call the malloc, free, read, and write interfaces for the corresponding operations; the storage system actively intercepts the original interfaces, obtains the information required by the adca interfaces provided by the embodiments of this application, and executes the corresponding routines.
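The interception can be pictured as a dispatch layer mapping each standard interface name to its adca counterpart. In practice such interception is often done with dynamic-linker interposition (for example LD_PRELOAD on Linux); the table-based dispatch below is purely illustrative, and every identifier in it is an assumption:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

typedef void *(*alloc_fn)(size_t);

/* Stand-in for the adca_malloc routine; a real implementation would
 * allocate the buffer from the cache area. */
static void *adca_malloc_stub(size_t n) { (void)n; return NULL; }

struct redirect_entry {
    const char *standard_name;  /* interface name the application calls  */
    alloc_fn    target;         /* adca interface actually executed      */
};

static const struct redirect_entry table[] = {
    { "malloc", adca_malloc_stub },
};

/* Resolve an intercepted call to its redirected implementation. */
static alloc_fn resolve(const char *name) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i].standard_name, name) == 0)
            return table[i].target;
    return NULL;  /* not intercepted: fall through to the original */
}
```

The upper layer keeps calling "malloc"; the lookup decides, inside the storage system, which routine actually runs.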
When the application starts running, it submits a buffer allocation instruction to the memory manager, and the memory manager allocates a buffer for it. Comparing the standard interfaces with the adca interfaces shown in FIG. 8b, both the malloc interface and the adca_malloc interface require as parameters the stored data type (e.g., int4) and the total data length (e.g., 1024); with the former the operating system opens up storage space in memory, whereas with the latter the storage system opens up storage space in the cache area. As described in step S401, the buffer allocation instruction submitted by the application includes the buffer size, which corresponds to the "total data length" parameter.

Further, the application calls the fd.open interface to open a file. As shown in FIG. 8b, the parameters of the fd.open interface are the file path, file name, and similar information. From the file path and file name it can be determined which file the data to be read is read from and which file the data to be written is written to, and the file path, file name, and similar information are used in the adca_read interface as the file handle. The application can then open the file to read or write data. Comparing the standard interfaces with the adca interfaces shown in FIG. 8b, both the read interface and the adca_read interface require as parameters the file handle (fd), the buffer pointer (*buf), and the number of bytes to read (e.g., 1024). As described in step S601, the data read instruction submitted by the application includes the location of the file in the storage, the address of the buffer in memory, and the length of the data to be read: the file's location in the storage corresponds to the "file handle" parameter, the buffer's address in memory corresponds to the "buffer pointer" parameter, and the length of the data to be read corresponds to the "number of bytes to read" parameter.

Comparing the standard interfaces with the adca interfaces shown in FIG. 8b, the parameters of write include the file handle (fd), the buffer pointer (*buf), and the number of bytes to write (e.g., 1024), whereas what the adca_write interface actually executes is a memcpy (memory copy) interface whose parameters are the source memory address (*src), the target memory address (*dst), and the number of bytes to copy (e.g., 1024); its function is to copy that many bytes starting from the source memory address into the target memory address. As described in step S801, the data write instruction submitted by the application includes the address of the buffer in memory and the length of the data to be written: the buffer's address in memory corresponds to the "source memory address" parameter, the storage system derives the "target memory address" parameter from the buffer's address in memory and the length of the data to be written, and the length of the data to be written corresponds to the "number of bytes to copy" parameter.

The sync.data interface shown in FIG. 8b is used to transfer synchronization information. When the application performs read and write operations, the operating system automatically records the real-time read/write position; the adca interface intercepts this information and passes it to the buffer through the sync.data interface (fd is the file handle, *buf is the buffer pointer, and offset is the internal offset within the file), i.e., which position of which file is currently being read or written, so as to achieve ordered reading and writing.
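The synchronization information (fd, *buf, offset) can be modeled as a small record whose file-internal offset advances with each completed write, so that subsequent writes land at the matching position on storage. Struct and function names here are assumptions for this sketch:

```c
#include <assert.h>
#include <stddef.h>

typedef struct {
    int    fd;      /* file handle                              */
    void  *buf;     /* buffer pointer                           */
    size_t offset;  /* current offset inside the file           */
} sync_info;

/* After `bytes_written` bytes land on storage, advance the offset so
 * the next write targets the position right after them. */
static void sync_advance(sync_info *s, size_t bytes_written) {
    s->offset += bytes_written;
}
```

Tracking the offset this way is what keeps reads and writes ordered: each transfer picks up exactly where the previous one stopped.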
The memory management method provided by the embodiments of this application is well suited to scenarios in which applications interact frequently with the storage system, to scenarios that are sensitive to memory footprint and require it to be small, and to scenarios that are latency-sensitive and require short read/write times. In the embodiments of this application, a detection plug-in may be introduced to detect whether an application is storage-intensive. If an application is detected to be storage-intensive, the memory management method provided by the embodiments of this application may be used for memory management, intercepting and redirecting the interfaces the application calls, thereby saving memory space, reducing read/write latency, and saving processor computing resources.

The following describes the application of the memory management method provided by this application in big data scenarios.

In big data scenarios, data is read and written frequently. To increase the data interaction rate, many storage systems in such scenarios adopt a large memory pool as back-end storage; in addition, because streaming computation is very common in big data scenarios, data often requires multiple buffers to keep the pipeline computing without interruption. The memory management method provided by the embodiments of this application is well suited to this scenario.

FIG. 9 shows a schematic diagram of data interaction in an embodiment of this application in a big data scenario. As shown in FIG. 9, the front end is a data analysis application built on Spark (the Spark application), and the back end is a big data storage system that uses a memory pool. The big data storage system includes storage, a memory pool, and a memory manager, and the memory pool contains multiple memories. The memory manager allocates storage space in each of the memories as a cache area for the storage, and within each cache area allocates storage space as a buffer for the Spark application; the storage's cache area and the Spark application's buffer thus reuse the same memory. When the Spark application reads data, the data to be read in the storage is copied directly into the buffer within the cache area, from which it can then be copied to the Spark application. The multiple buffers in FIG. 9 ensure that the Spark application has sufficient data available, supporting uninterrupted pipeline computation. In the storage system in this big data scenario, the memory manager executes the memory management methods shown in FIG. 3, FIG. 5, and FIG. 7. The storage system may include a single memory manager that manages all the buffers in the memory pool, or it may include multiple memory managers, one per memory, each managing the buffers in its memory.

In the embodiments of this application, buffer allocation no longer occupies additional storage space in memory but reuses the space the cache area already occupies, and the same piece of data need not be stored as an extra copy in the buffer allocated to the application. Taking FIG. 9, with six memories in the memory pool, as an example, the storage space of six buffers can be saved in the memory pool. In big data scenarios with many memories, the embodiments of this application can therefore save substantial memory resources. At the same time, because the buffer lies within the cache area, it can obtain data directly from the storage without a further copy, saving copying time; when data volumes are large or reads and writes are frequent, read/write latency is reduced substantially.
图10示出本申请实施例提供的内存管理装置的结构示意图。该装置可以是如图4所示的内存管理器21。如图10所示,所述装置90可以包括:FIG. 10 shows a schematic structural diagram of a memory management device provided by an embodiment of the present application. The device may be a memory manager 21 as shown in FIG. 4 . As shown in Figure 10, the device 90 may include:
第一接收模块91,用于接收来自应用的缓冲区分配指令,所述缓冲区分配指令用于请求为所述应用在内存中分配缓冲区。具体可以实现如图3所示的步骤S401,以及其他隐含步骤。The first receiving module 91 is configured to receive a buffer allocation instruction from an application, and the buffer allocation instruction is used to request to allocate a buffer in memory for the application. Specifically, step S401 as shown in FIG. 3 and other implicit steps can be implemented.
分配模块92,用于在所述内存的缓存区中,为所述应用分配缓冲区;具体可以实现如图3所示的步骤S402,以及其他隐含步骤。The allocation module 92 is configured to allocate a buffer for the application in the buffer area of the memory; specifically, step S402 as shown in FIG. 3 and other implicit steps can be implemented.
返回模块93，用于返回所述缓冲区在所述内存中的地址，具体可以实现如图3所示的步骤S403，以及其他隐含步骤。The return module 93 is configured to return the address of the buffer in the memory. Specifically, step S403 shown in FIG. 3, as well as other implicit steps, can be implemented.
在一种可能的实现方式中，所述缓冲区分配指令中包括所述缓冲区的大小，所述分配模块还用于：在所述内存的缓存区中，为所述应用分配所述大小的缓冲区，所述大小小于所述缓存区的大小。In a possible implementation manner, the buffer allocation instruction includes the size of the buffer, and the allocation module is further configured to: in the cache area of the memory, allocate a buffer of the size for the application, where the size is smaller than the size of the cache area.
在一种可能的实现方式中，所述装置还包括：第二接收模块，用于接收来自所述应用的内存释放指令，所述内存释放指令用于请求释放所述应用的缓冲区，所述内存释放指令中包括所述缓冲区在所述内存中的地址；释放模块，用于释放所述缓冲区。In a possible implementation manner, the device further includes: a second receiving module, configured to receive a memory release instruction from the application, where the memory release instruction is used to request to release the buffer of the application, and the memory release instruction includes the address of the buffer in the memory; and a release module, configured to release the buffer.
在一种可能的实现方式中，所述装置还包括：第三接收模块，用于接收来自所述应用的数据读取指令，所述数据读取指令用于请求向所述应用传送待读取数据；读取模块，用于当所述待读取数据位于所述缓存区中除所述缓冲区之外的区域时，将所述待读取数据从所述缓存区读取到所述缓冲区中；当所述待读取数据位于所述缓存区之外时，将所述待读取数据从存储器读取到所述缓冲区中；传送模块，用于将所述缓冲区中的数据传送至所述应用。In a possible implementation manner, the device further includes: a third receiving module, configured to receive a data read instruction from the application, where the data read instruction is used to request to transmit the data to be read to the application; a reading module, configured to read the data to be read from the cache area into the buffer when the data to be read is located in an area of the cache area other than the buffer, and to read the data to be read from the storage into the buffer when the data to be read is located outside the cache area; and a transmission module, configured to transmit the data in the buffer to the application.
在一种可能的实现方式中，所述传送模块还用于：为所述应用分配的缓冲区中存储的数据的数量大于第一预设阈值时，将所述缓冲区中的数据传送至所述应用；或者，所述应用需要的待读取数据均已经被读取到所述缓冲区中时，将所述缓冲区中的数据传送至所述应用；或者，执行所述应用的处理器处于空闲状态时，将所述缓冲区中的数据传送至所述应用。In a possible implementation manner, the transmission module is further configured to: transmit the data in the buffer to the application when the amount of data stored in the buffer allocated for the application is greater than a first preset threshold; or transmit the data in the buffer to the application when all the data to be read required by the application has been read into the buffer; or transmit the data in the buffer to the application when the processor executing the application is in an idle state.
在一种可能的实现方式中，所述装置还包括：第四接收模块，用于接收来自所述应用的数据写入指令，所述数据写入指令用于请求存储所述应用的待写入数据；写入模块，用于将所述待写入数据存储至所述缓冲区中；存储模块，用于将所述待写入数据从所述缓冲区中存储至存储器中。In a possible implementation manner, the device further includes: a fourth receiving module, configured to receive a data write instruction from the application, where the data write instruction is used to request to store the data to be written of the application; a writing module, configured to store the data to be written into the buffer; and a storage module, configured to store the data to be written from the buffer into the storage.
在一种可能的实现方式中，所述存储模块还用于：所述缓冲区中存储的数据的数量大于第二预设阈值时，将所述待写入数据从所述缓冲区存储至存储器中；或者，所述应用需要的待写入数据均已经存储在了缓冲区时，将所述待写入数据从所述缓冲区存储至存储器中；或者，接收写入完成指令时，将所述待写入数据从所述缓冲区存储至存储器中；或者，接收写入暂停指令时，将所述待写入数据从所述缓冲区存储至存储器中。In a possible implementation manner, the storage module is further configured to: store the data to be written from the buffer into the storage when the amount of data stored in the buffer is greater than a second preset threshold; or store the data to be written from the buffer into the storage when all the data to be written required by the application has been stored in the buffer; or store the data to be written from the buffer into the storage when a write completion instruction is received; or store the data to be written from the buffer into the storage when a write suspend instruction is received.
在本申请实施例中，缓冲区的分配不再额外占用内存的存储空间，而是重复利用了缓存区在内存中占用的存储空间，从而节省了内存资源。同时，同一份数据无需在为应用分配的缓冲区中再复制一份进行存储，位于缓存区中的缓冲区可直接从存储器中获取数据，不需要再进行数据的复制，可节省数据复制的时间，从而降低读写时延以及节省处理器的运算资源。In the embodiment of the present application, the allocation of the buffer does not occupy additional storage space in the memory, but reuses the storage space occupied by the cache area in the memory, thereby saving memory resources. At the same time, the same piece of data does not need to be stored as an extra copy in the buffer allocated for the application; the buffer located in the cache area can obtain data directly from the storage without an extra data copy, which saves copying time, thereby reducing read/write latency and saving computing resources of the processor.
本申请的实施例还提供了一种计算机设备,图11示出本申请实施例提供的计算机设备的结构示意图。如图11所示,该计算机设备包括至少一个处理器301、内存22、总线304、输入输出设备305、内存管理器21以及存储器23。The embodiment of the present application also provides a computer device, and FIG. 11 shows a schematic structural diagram of the computer device provided in the embodiment of the present application. As shown in FIG. 11 , the computer device includes at least one processor 301 , memory 22 , bus 304 , input/output device 305 , memory manager 21 and storage 23 .
处理器301是计算机设备的控制中心，可以是一个处理元件，也可以是多个处理元件的统称。这些处理器中的每一个可以是一个单核处理器（single-CPU），也可以是一个多核处理器（multi-CPU）。这里的处理器可以指一个或多个设备、电路、和/或用于处理数据（例如计算机程序指令）的处理核。例如，内存管理器21是一个CPU，也可以是特定集成电路（Application Specific Integrated Circuit，ASIC），或者是被配置成实施本公开实施例的一个或多个集成电路，例如：一个或多个微处理器（Digital Signal Processor，DSP），或，一个或者多个现场可编程门阵列（Field Programmable Gate Array，FPGA）。The processor 301 is the control center of the computer device, and may be one processing element or a collective term for multiple processing elements. Each of these processors may be a single-core processor (single-CPU) or a multi-core processor (multi-CPU). A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (for example, computer program instructions). For example, the memory manager 21 may be a CPU, an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present disclosure, for example, one or more microprocessors (digital signal processor, DSP), or one or more field-programmable gate arrays (FPGA).
其中,处理器301可以通过运行或执行存储在内存22内的应用,以及调用存储在内存22内的数据,执行计算机设备的各种功能。Wherein, the processor 301 can execute various functions of the computer device by running or executing applications stored in the memory 22 and calling data stored in the memory 22 .
内存管理器21可以用于接收来自应用的缓冲区分配指令，其中该缓冲区分配指令用于请求为应用在内存中分配缓冲区；还可以用于在内存的缓存区中，为应用分配缓冲区，并返回缓冲区在内存中的地址。内存管理器21可以是一个单独的硬件，也可以是一个软件装置，并存储在内存中，由处理器301执行。The memory manager 21 may be configured to receive a buffer allocation instruction from an application, where the buffer allocation instruction is used to request to allocate a buffer in the memory for the application; it may also be configured to allocate a buffer for the application in the cache area of the memory and return the address of the buffer in the memory. The memory manager 21 may be a separate piece of hardware, or a software device stored in the memory and executed by the processor 301.
内存22可以用于存储在样本采集、建模和预测的过程中的预存数据、中间数据和结果数据。内存22可以包括只读存储器和随机存取存储器，还可以是易失性存储器或非易失性存储器，或可包括易失性和非易失性存储器两者。其中，非易失性存储器可以是只读存储器（read-only memory，ROM）、可编程只读存储器（programmable ROM，PROM）、可擦除可编程只读存储器（erasable PROM，EPROM）、电可擦除可编程只读存储器（electrically EPROM，EEPROM）或闪存。易失性存储器可以是随机存取存储器（random access memory，RAM），其用作外部高速缓存。通过示例性但不是限制性说明，许多形式的RAM可用，例如静态随机存取存储器（static RAM，SRAM）、动态随机存取存储器（DRAM）、同步动态随机存取存储器（synchronous DRAM，SDRAM）、双倍数据速率同步动态随机存取存储器（double data rate SDRAM，DDR SDRAM）、增强型同步动态随机存取存储器（enhanced SDRAM，ESDRAM）、同步连接动态随机存取存储器（synchlink DRAM，SLDRAM）和直接内存总线随机存取存储器（direct rambus RAM，DR RAM）。内存22可以是独立存在，通过总线304与处理器301相连接。内存22也可以和处理器301集成在一起。在本公开实施例中，内存22还可以用于存储在样本采集、建模和预测的过程中的预存数据、中间数据和结果数据。The memory 22 may be used to store pre-stored data, intermediate data, and result data in the process of sample collection, modeling, and prediction. The memory 22 may include a read-only memory and a random access memory, and may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (programmable ROM, PROM), an erasable programmable read-only memory (erasable PROM, EPROM), an electrically erasable programmable read-only memory (electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example but not limitation, many forms of RAM are available, such as static random access memory (static RAM, SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (synchlink DRAM, SLDRAM), and direct rambus random access memory (direct rambus RAM, DR RAM). The memory 22 may exist independently and be connected to the processor 301 through the bus 304, or the memory 22 may be integrated with the processor 301. In the embodiment of the present disclosure, the memory 22 may also be used to store pre-stored data, intermediate data, and result data in the process of sample collection, modeling, and prediction.
存储器23可以是磁盘、硬盘和光盘等用于持久化存储数据的设备。在本申请实施例中,存储器23中可以用于存储应用访问的文件。The storage 23 may be a device for persistently storing data, such as a magnetic disk, a hard disk, and an optical disk. In the embodiment of the present application, the storage 23 may be used to store files accessed by applications.
输入输出设备305，用于与其他设备进行数据传输。在具体实现中，作为一种实施例，输入输出设备305可以包括发射器和接收器。其中，发射器用于向其他设备发送数据，接收器用于接收其他设备发送的数据。发射器和接收器可以独立存在，也可以集成在一起。在本申请实施例中，输入输出设备305可以与执行应用的处理器之间进行缓冲区分配指令、内存释放指令、数据读取指令和数据写入指令，以及待写入数据和待读取数据等的传输。The input/output device 305 is used for data transmission with other devices. In a specific implementation, as an embodiment, the input/output device 305 may include a transmitter and a receiver, where the transmitter is used to send data to other devices, and the receiver is used to receive data sent by other devices. The transmitter and the receiver may exist independently or be integrated together. In the embodiment of the present application, the input/output device 305 may transmit, to and from the processor executing the application, buffer allocation instructions, memory release instructions, data read instructions, and data write instructions, as well as data to be written and data to be read.
总线304,可以是工业标准体系结构(Industry Standard Architecture,ISA)总线、外部设备互连(Peripheral Component Interconnect,PCI)总线或扩展工业标准体系结构(Extended Industry Standard Architecture,EISA)总线等。该总线可以分为地址总线、数据总线、控制总线等。为便于表示,图11中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。The bus 304 may be an Industry Standard Architecture (Industry Standard Architecture, ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (Extended Industry Standard Architecture, EISA) bus, etc. The bus can be divided into address bus, data bus, control bus and so on. For ease of representation, only one thick line is used in FIG. 11 , but it does not mean that there is only one bus or one type of bus.
图11中示出的设备结构并不构成对计算机设备的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。The device structure shown in FIG. 11 does not constitute a limitation to the computer device, and may include more or less components than shown, or combine some components, or arrange different components.
本申请的实施例提供了一种计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令被处理器执行时实现上述方法。An embodiment of the present application provides a computer-readable storage medium, on which computer program instructions are stored, and the above method is implemented when the computer program instructions are executed by a processor.
本申请的实施例提供了一种计算机程序产品，包括计算机可读代码，或者承载有计算机可读代码的非易失性计算机可读存储介质，当所述计算机可读代码在电子设备的处理器中运行时，所述电子设备中的处理器执行上述方法。An embodiment of the present application provides a computer program product, including computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code; when the computer-readable code runs on a processor of an electronic device, the processor in the electronic device executes the above method.
计算机可读存储介质可以是可以保持和存储由指令执行设备使用的指令的有形设备。计算机可读存储介质例如可以是（但不限于）电存储设备、磁存储设备、光存储设备、电磁存储设备、半导体存储设备或者上述的任意合适的组合。计算机可读存储介质的更具体的例子（非穷举的列表）包括：便携式计算机盘、硬盘、随机存取存储器（Random Access Memory，RAM）、只读存储器（Read Only Memory，ROM）、可擦除式可编程只读存储器（Erasable Programmable Read-Only Memory，EPROM或闪存）、静态随机存取存储器（Static Random-Access Memory，SRAM）、便携式压缩盘只读存储器（Compact Disc Read-Only Memory，CD-ROM）、数字多功能盘（Digital Video Disc，DVD）、记忆棒、软盘、机械编码设备、例如其上存储有指令的打孔卡或凹槽内凸起结构、以及上述的任意合适的组合。A computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example (but not limited to), an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of computer-readable storage media include: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random-access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital video disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punched card or a raised structure in a groove with instructions stored thereon, and any suitable combination of the foregoing.
这里所描述的计算机可读程序指令或代码可以从计算机可读存储介质下载到各个计算/处理设备,或者通过网络、例如因特网、局域网、广域网和/或无线网下载到外部计算机或外部存储设备。网络可以包括铜传输电缆、光纤传输、无线传输、路由器、防火墙、交换机、网关计算机和/或边缘服务器。每个计算/处理设备中的网络适配卡或者网络接口从网络接收计算机可读程序指令,并转发该计算机可读程序指令,以供存储在各个计算/处理设备中的计算机可读存储介质中。Computer readable program instructions or codes described herein may be downloaded from a computer readable storage medium to a respective computing/processing device, or downloaded to an external computer or external storage device over a network, such as the Internet, local area network, wide area network, and/or wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or a network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device .
用于执行本申请操作的计算机程序指令可以是汇编指令、指令集架构（Instruction Set Architecture，ISA）指令、机器指令、机器相关指令、微代码、固件指令、状态设置数据、或者以一种或多种编程语言的任意组合编写的源代码或目标代码，所述编程语言包括面向对象的编程语言（诸如Smalltalk、C++等）以及常规的过程式编程语言（诸如"C"语言或类似的编程语言）。计算机可读程序指令可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中，远程计算机可以通过任意种类的网络，包括局域网（Local Area Network，LAN）或广域网（Wide Area Network，WAN），连接到用户计算机，或者，可以连接到外部计算机（例如利用因特网服务提供商来通过因特网连接）。在一些实施例中，通过利用计算机可读程序指令的状态信息来个性化定制电子电路，例如可编程逻辑电路、现场可编程门阵列（Field-Programmable Gate Array，FPGA）或可编程逻辑阵列（Programmable Logic Array，PLA），该电子电路可以执行计算机可读程序指令，从而实现本申请的各个方面。Computer program instructions for performing the operations of the present application may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), is personalized by utilizing state information of the computer-readable program instructions; the electronic circuit can execute the computer-readable program instructions, thereby realizing various aspects of the present application.
这里参照根据本申请实施例的方法、装置(系统)和计算机程序产品的流程图和/或框图描述了本申请的各个方面。应当理解,流程图和/或框图的每个方框以及流程图和/或框图中各方框的组合,都可以由计算机可读程序指令实现。Aspects of the present application are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
这些计算机可读程序指令可以提供给通用计算机、专用计算机或其它可编程数据处理装置的处理器，从而生产出一种机器，使得这些指令在通过计算机或其它可编程数据处理装置的处理器执行时，产生了实现流程图和/或框图中的一个或多个方框中规定的功能/动作的装置。也可以把这些计算机可读程序指令存储在计算机可读存储介质中，这些指令使得计算机、可编程数据处理装置和/或其他设备以特定方式工作，从而，存储有指令的计算机可读介质则包括一个制造品，其包括实现流程图和/或框图中的一个或多个方框中规定的功能/动作的各个方面的指令。These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that these instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce an apparatus for implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium, and these instructions cause a computer, a programmable data processing apparatus, and/or other devices to work in a specific manner, so that the computer-readable medium storing the instructions includes an article of manufacture that includes instructions for implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
也可以把计算机可读程序指令加载到计算机、其它可编程数据处理装置、或其它设备上，使得在计算机、其它可编程数据处理装置或其它设备上执行一系列操作步骤，以产生计算机实现的过程，从而使得在计算机、其它可编程数据处理装置、或其它设备上执行的指令实现流程图和/或框图中的一个或多个方框中规定的功能/动作。Computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices, so that a series of operational steps are performed on the computer, other programmable data processing apparatus, or other devices to produce a computer-implemented process, such that the instructions executed on the computer, other programmable data processing apparatus, or other devices implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
附图中的流程图和框图显示了根据本申请的多个实施例的装置、系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上，流程图或框图中的每个方框可以代表一个模块、程序段或指令的一部分，所述模块、程序段或指令的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。在有些作为替换的实现中，方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如，两个连续的方框实际上可以基本并行地执行，它们有时也可以按相反的顺序执行，这依所涉及的功能而定。The flowcharts and block diagrams in the figures show the architectures, functions, and operations of possible implementations of apparatuses, systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a part of an instruction, which contains one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行相应的功能或动作的硬件(例如电路或ASIC(Application Specific Integrated Circuit,专用集成电路))来实现,或者可以用硬件和软件的组合,如固件等来实现。It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented with hardware (such as circuits or ASIC (Application Specific Integrated Circuit, application-specific integrated circuit)), or can be implemented with a combination of hardware and software, such as firmware.
尽管在此结合各实施例对本发明进行了描述，然而，在实施所要求保护的本发明过程中，本领域技术人员通过查看所述附图、公开内容、以及所附权利要求书，可理解并实现所述公开实施例的其它变化。在权利要求中，"包括"（comprising）一词不排除其他组成部分或步骤，"一"或"一个"不排除多个的情况。单个处理器或其它单元可以实现权利要求中列举的若干项功能。相互不同的从属权利要求中记载了某些措施，但这并不表示这些措施不能组合起来产生良好的效果。Although the present invention has been described herein in conjunction with various embodiments, in the process of implementing the claimed invention, those skilled in the art can understand and implement other variations of the disclosed embodiments by studying the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other components or steps, and "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
以上已经描述了本申请的各实施例,上述说明是示例性的,并非穷尽性的,并且也不限于所披露的各实施例。在不偏离所说明的各实施例的范围和精神的情况下,对于本技术领域的普通技术人员来说许多修改和变更都是显而易见的。本文中所用术语的选择,旨在最好地解释各实施例的原理、实际应用或对市场中的技术的改进,或者使本技术领域的其它普通技术人员能理解本文披露的各实施例。Having described various embodiments of the present application above, the foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and alterations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen to best explain the principle of each embodiment, practical application or improvement of technology in the market, or to enable other ordinary skilled in the art to understand each embodiment disclosed herein.

Claims (21)

  1. 一种内存管理方法,其特征在于,所述方法包括:A memory management method, characterized in that the method comprises:
    接收缓冲区分配指令，所述缓冲区分配指令用于为应用在内存中分配缓冲区，所述缓冲区用于存储的数据达到一定数量后，一次性将所述缓冲区中的数据发送给所述应用；receiving a buffer allocation instruction, where the buffer allocation instruction is used to allocate a buffer in a memory for an application, and after the data stored in the buffer reaches a certain amount, the data in the buffer is sent to the application at one time;
    在所述内存的缓存区中,为所述应用分配缓冲区。In the cache area of the memory, a buffer is allocated for the application.
  2. 根据权利要求1所述的方法，其特征在于，所述缓冲区分配指令中包括所述缓冲区的大小，所述在所述内存的缓存区中，为所述应用分配缓冲区，包括：The method according to claim 1, wherein the buffer allocation instruction includes the size of the buffer, and the allocating a buffer for the application in the cache area of the memory comprises:
    在所述内存的缓存区中,为所述应用分配所述大小的缓冲区,所述大小小于所述缓存区的大小。In the buffer area of the memory, a buffer of the size is allocated for the application, and the size is smaller than the size of the buffer area.
  3. 根据权利要求1或2所述的方法,其特征在于,所述方法还包括:The method according to claim 1 or 2, characterized in that the method further comprises:
    接收内存释放指令,所述内存释放指令用于请求释放所述应用的缓冲区,所述内存释放指令中包括所述缓冲区在所述内存中的地址;receiving a memory release instruction, the memory release instruction is used to request to release the buffer of the application, and the memory release instruction includes an address of the buffer in the memory;
    释放所述缓冲区。releasing the buffer.
  4. 根据权利要求1至3中任意一项所述的方法,其特征在于,所述方法还包括:The method according to any one of claims 1 to 3, further comprising:
    接收数据读取指令,所述数据读取指令用于请求向所述应用传送待读取数据;receiving a data read instruction, where the data read instruction is used to request to transmit data to be read to the application;
    当所述待读取数据位于所述缓存区之外时,将所述待读取数据从存储器读取到所述缓冲区中;When the data to be read is located outside the buffer area, read the data to be read from the memory into the buffer;
    将所述缓冲区中的数据传送至所述应用。The data in the buffer is transferred to the application.
  5. 根据权利要求4中所述的方法,其特征在于,所述将所述缓冲区中的数据传送至所述应用,具体包括:The method according to claim 4, wherein the transferring the data in the buffer to the application specifically comprises:
    为所述应用分配的缓冲区中存储的数据的数量大于第一预设阈值时,将所述缓冲区中的数据传送至所述应用;或者,When the amount of data stored in the buffer allocated for the application is greater than a first preset threshold, transmitting the data in the buffer to the application; or,
    所述应用需要的待读取数据均已经被读取到所述缓冲区中时,将所述缓冲区中的数据传送至所述应用;或者,When all the data to be read required by the application has been read into the buffer, transfer the data in the buffer to the application; or,
    执行所述应用的处理器处于空闲状态时,将所述缓冲区中的数据传送至所述应用。When the processor executing the application is in an idle state, transmit the data in the buffer to the application.
  6. 根据权利要求1至5中任意一项所述的方法,其特征在于,所述方法还包括:The method according to any one of claims 1 to 5, wherein the method further comprises:
    接收数据写入指令,所述数据写入指令用于请求存储所述应用的待写入数据;receiving a data write instruction, where the data write instruction is used to request to store the data to be written by the application;
    将所述待写入数据存储至所述缓冲区中;storing the data to be written into the buffer;
    将所述待写入数据从所述缓冲区存储至存储器中。storing the data to be written from the buffer into a memory.
  7. 根据权利要求6中所述的方法,其特征在于,所述将所述待写入数据从所述缓冲区存储至存储器中,具体包括:The method according to claim 6, wherein the storing the data to be written from the buffer into the memory specifically comprises:
    所述缓冲区中存储的数据的数量大于第二预设阈值时,将所述待写入数据从所述缓冲区存储至存储器中;或者,When the amount of data stored in the buffer is greater than a second preset threshold, storing the data to be written from the buffer into the memory; or,
    所述应用需要的待写入数据均已经存储在了缓冲区时,将所述待写入数据从所述缓冲区存储至存储器中;或者,When the data to be written required by the application has been stored in the buffer, storing the data to be written from the buffer to the memory; or,
    接收写入完成指令时,将所述待写入数据从所述缓冲区存储至存储器中;或者,When receiving a write completion instruction, storing the data to be written from the buffer into the memory; or,
    接收写入暂停指令时,将所述待写入数据从所述缓冲区存储至存储器中。When receiving the write suspend instruction, the data to be written is stored from the buffer into the memory.
  8. 一种内存管理装置,其特征在于,所述装置包括:A memory management device, characterized in that the device comprises:
    第一接收模块，用于接收来自应用的缓冲区分配指令，所述缓冲区分配指令用于请求为所述应用在内存中分配缓冲区，所述缓冲区用于存储的数据达到一定数量后，所述应用一次性获取所述缓冲区中的数据；a first receiving module, configured to receive a buffer allocation instruction from an application, where the buffer allocation instruction is used to request to allocate a buffer in a memory for the application, and after the data stored in the buffer reaches a certain amount, the application acquires the data in the buffer at one time;
    分配模块,用于在所述内存的缓存区中,为所述应用分配缓冲区。The allocating module is configured to allocate a buffer for the application in the buffer area of the memory.
  9. 根据权利要求8所述的装置,其特征在于,所述缓冲区分配指令中包括所述缓冲区的大小,所述分配模块还用于:The device according to claim 8, wherein the buffer allocation instruction includes the size of the buffer, and the allocation module is also used for:
    在所述内存的缓存区中,为所述应用分配所述大小的缓冲区,所述大小小于所述缓存区的大小。In the buffer area of the memory, a buffer of the size is allocated for the application, and the size is smaller than the size of the buffer area.
  10. 根据权利要求8或9所述的装置,其特征在于,所述装置还包括:The device according to claim 8 or 9, wherein the device further comprises:
    第二接收模块,用于接收来自所述应用的内存释放指令,所述内存释放指令用于请求释放所述应用的缓冲区,所述内存释放指令中包括所述缓冲区在所述内存中的地址;The second receiving module is configured to receive a memory release instruction from the application, the memory release instruction is used to request to release the buffer of the application, and the memory release instruction includes the memory release instruction of the buffer in the memory address;
    释放模块,用于释放所述缓冲区。The release module is used to release the buffer.
  11. 根据权利要求8至10中任意一项所述的装置,其特征在于,所述装置还包括:The device according to any one of claims 8 to 10, wherein the device further comprises:
    第三接收模块,用于接收数据读取指令,所述数据读取指令用于请求向所述应用传送待读取数据;The third receiving module is configured to receive a data reading instruction, and the data reading instruction is used to request to transmit the data to be read to the application;
    读取模块,用于当所述待读取数据位于所述缓存区之外时,将所述待读取数据从存储器读取到所述缓冲区中;A reading module, configured to read the data to be read from the memory into the buffer when the data to be read is outside the buffer;
    传送模块,用于将所述缓冲区中的数据传送至所述应用。A transmission module, configured to transmit the data in the buffer to the application.
  12. 根据权利要求11中所述的装置,其特征在于,所述传送模块还用于:The device according to claim 11, wherein the transmission module is further used for:
    为所述应用分配的缓冲区中存储的数据的数量大于第一预设阈值时,将所述缓冲区中的数据传送至所述应用;或者,When the amount of data stored in the buffer allocated for the application is greater than a first preset threshold, transmitting the data in the buffer to the application; or,
    所述应用需要的待读取数据均已经被读取到所述缓冲区中时,将所述缓冲区中的数据传送至所述应用;或者,When all the data to be read required by the application has been read into the buffer, transfer the data in the buffer to the application; or,
    执行所述应用的处理器处于空闲状态时,将所述缓冲区中的数据传送至所述应用。When the processor executing the application is in an idle state, transmit the data in the buffer to the application.
  13. 根据权利要求8至12中任意一项所述的装置,其特征在于,所述装置还包括:The device according to any one of claims 8 to 12, wherein the device further comprises:
    第四接收模块,用于接收数据写入指令,所述数据写入指令用于请求存储所述应用的待写入数据;The fourth receiving module is configured to receive a data writing instruction, and the data writing instruction is used to request to store the data to be written of the application;
    写入模块,用于将所述待写入数据存储至所述缓冲区中;a writing module, configured to store the data to be written into the buffer;
    存储模块,用于将所述待写入数据从所述缓冲区中存储至存储器中。A storage module, configured to store the data to be written from the buffer into a memory.
  14. 根据权利要求13中所述的装置,其特征在于,所述存储模块还用于:The device according to claim 13, wherein the storage module is further used for:
    所述缓冲区中存储的数据的数量大于第二预设阈值时,将所述待写入数据从所述缓冲区存储至存储器中;或者,When the amount of data stored in the buffer is greater than a second preset threshold, storing the data to be written from the buffer into the memory; or,
    所述应用需要的待写入数据均已经存储在了缓冲区时,将所述待写入数据从所述缓冲区存储至存储器中;或者,When the data to be written required by the application has been stored in the buffer, storing the data to be written from the buffer to the memory; or,
    接收写入完成指令时,将所述待写入数据从所述缓冲区存储至存储器中;或者,When receiving a write completion instruction, storing the data to be written from the buffer into the memory; or,
    接收写入暂停指令时,将所述待写入数据从所述缓冲区存储至存储器中。When receiving the write suspend instruction, the data to be written is stored from the buffer into the memory.
  15. A data access method, wherein the method is performed by a storage system, the storage system comprises a memory, a storage, and a memory manager, the memory comprises a cache area and a buffer, and the buffer is located in the cache area; the method comprises:
    receiving a data read instruction, wherein the data read instruction requests transfer of to-be-read data to an application;
    when the to-be-read data is located outside the cache area, reading the to-be-read data from the storage into the buffer; and
    transferring the data in the buffer to the application.
  16. The method according to claim 15, wherein the transferring the data in the buffer to the application specifically comprises:
    when the amount of data stored in the buffer allocated to the application exceeds a first preset threshold, transferring the data in the buffer to the application; or
    when all of the to-be-read data required by the application has been read into the buffer, transferring the data in the buffer to the application; or
    when the processor executing the application is idle, transferring the data in the buffer to the application.
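Claims 15 and 16 describe a read path in which data outside the cache area is first staged in a buffer and delivered to the application only when one of the transfer conditions holds. The following is a minimal illustrative sketch, not the claimed implementation: the names `BufferManager` and `FIRST_THRESHOLD`, the dict-backed storage, and the list-backed application are all assumptions made for illustration.

```python
FIRST_THRESHOLD = 4  # stand-in for the "first preset threshold" (data items)

class BufferManager:
    """Illustrative read path: storage -> buffer -> application."""

    def __init__(self, storage):
        self.storage = storage  # backing store (the claims' "storage")
        self.cache = set()      # keys currently held in the cache area
        self.buffer = []        # buffer located inside the cache area

    def handle_read(self, app, keys):
        """Serve a data read instruction for the given application."""
        for key in keys:
            if key not in self.cache:
                # To-be-read data lies outside the cache area: read it
                # from storage into the buffer (claim 15).
                self.cache.add(key)
            self.buffer.append(self.storage[key])
            # Transfer condition 1 (claim 16): buffer occupancy exceeds
            # the first preset threshold.
            if len(self.buffer) > FIRST_THRESHOLD:
                self._transfer(app)
        # Transfer condition 2: all required to-be-read data is buffered.
        # (Condition 3, an idle processor, would also trigger _transfer.)
        self._transfer(app)

    def _transfer(self, app):
        if self.buffer:
            app.extend(self.buffer)  # deliver buffered data to the app
            self.buffer.clear()
```

For example, reading six blocks with a threshold of four delivers the first five blocks in one batch and the remainder when the request completes.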
  17. The method according to claim 15 or 16, wherein the method further comprises:
    receiving a data write instruction, wherein the data write instruction requests storage of to-be-written data of the application;
    storing the to-be-written data into the buffer; and
    storing the to-be-written data from the buffer into the storage.
  18. The method according to claim 17, wherein the storing the to-be-written data from the buffer into the storage specifically comprises:
    when the amount of data stored in the buffer exceeds a second preset threshold, storing the to-be-written data from the buffer into the storage; or
    when all of the to-be-written data required by the application has been stored in the buffer, storing the to-be-written data from the buffer into the storage; or
    when a write completion instruction is received, storing the to-be-written data from the buffer into the storage; or
    when a write suspension instruction is received, storing the to-be-written data from the buffer into the storage.
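Claims 17 and 18 describe the complementary write path: to-be-written data is collected in the buffer and flushed to storage when one of the listed conditions occurs. A minimal sketch under the same caveat — `WriteBuffer` and `SECOND_THRESHOLD` are hypothetical names, not part of the claims:

```python
SECOND_THRESHOLD = 3  # stand-in for the "second preset threshold" (data items)

class WriteBuffer:
    """Illustrative write path: application -> buffer -> storage."""

    def __init__(self, storage):
        self.storage = storage  # backing store keyed by block id
        self.buffer = {}        # to-be-written data held in the buffer

    def handle_write(self, key, value):
        # Store the to-be-written data into the buffer first (claim 17).
        self.buffer[key] = value
        # Flush condition 1 (claim 18): buffer occupancy exceeds the
        # second preset threshold.
        if len(self.buffer) > SECOND_THRESHOLD:
            self.flush()

    def flush(self):
        # The remaining conditions of claim 18 (all required data
        # buffered, a write completion instruction, or a write
        # suspension instruction) would invoke this flush as well.
        self.storage.update(self.buffer)
        self.buffer.clear()
```

With a threshold of three, the fourth write triggers an automatic flush, while a later lone write stays buffered until `flush()` is called explicitly (e.g. on a write completion instruction).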
  19. A computer device, wherein the computer device comprises a memory, a storage, and a memory manager, and the memory manager is configured to perform the method according to any one of claims 1 to 7.
  20. A computer storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 7.
  21. A computer program product, comprising computer-readable code, or a computer-readable storage medium carrying computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor in the electronic device performs the method according to any one of claims 1 to 7.
PCT/CN2022/085854 2021-08-04 2022-04-08 Memory management method and apparatus, and computer device WO2023010879A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110892745.3A CN115934585A (en) 2021-08-04 2021-08-04 Memory management method and device and computer equipment
CN202110892745.3 2021-08-04

Publications (1)

Publication Number Publication Date
WO2023010879A1 true WO2023010879A1 (en) 2023-02-09

Family

ID=85155079

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/085854 WO2023010879A1 (en) 2021-08-04 2022-04-08 Memory management method and apparatus, and computer device

Country Status (2)

Country Link
CN (1) CN115934585A (en)
WO (1) WO2023010879A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116560579A (en) * 2023-05-18 2023-08-08 上海威固信息技术股份有限公司 Simulation data acquisition system based on multisource data fusion
CN116560579B (en) * 2023-05-18 2024-06-04 上海威固信息技术股份有限公司 Simulation data acquisition system based on multisource data fusion

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101847127A (en) * 2010-06-18 2010-09-29 福建星网锐捷网络有限公司 Memory management method and device
CN102567225A (en) * 2011-12-28 2012-07-11 北京握奇数据系统有限公司 Method and device for managing system memory
CN104731721A (en) * 2015-02-10 2015-06-24 深圳酷派技术有限公司 Sharing method and device for memory for display
CN109408412A (en) * 2018-10-24 2019-03-01 龙芯中科技术有限公司 Memory prefetching control method, device and equipment
US20210141727A1 (en) * 2019-11-07 2021-05-13 Research & Business Foundation Sungkyunkwan University Method and system with improved memory input and output speed


Also Published As

Publication number Publication date
CN115934585A (en) 2023-04-07


Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application
Ref document number: 22851603
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE