CN116755885A - Memory management method and device, electronic equipment and storage medium - Google Patents

Memory management method and device, electronic equipment and storage medium

Info

Publication number
CN116755885A
Authority
CN
China
Prior art keywords
cache data
memory
byte
base address
byte size
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310747726.0A
Other languages
Chinese (zh)
Inventor
刘阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Spreadtrum Communications Tianjin Co Ltd
Original Assignee
Spreadtrum Communications Tianjin Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Spreadtrum Communications Tianjin Co Ltd filed Critical Spreadtrum Communications Tianjin Co Ltd
Priority to CN202310747726.0A
Publication of CN116755885A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being the memory
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/54 - Interprogram communication
    • G06F9/544 - Buffers; Shared memory; Pipes
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

An embodiment of the application provides a memory management method and device, an electronic device, and a storage medium, which can reduce the memory waste caused by input and output cache data occupying limited memory space at the same time, and save memory, by managing the allocation of the storage base address of each piece of cache data in a shared memory. The memory management method comprises the following steps: in response to a second byte size corresponding to second cache data to be generated being greater than a first byte size corresponding to first cache data, calculating a first byte difference between the second byte size and the first byte size; offsetting a first memory base address forward by the first byte difference to obtain a second memory base address; and, starting from the first memory base address, sequentially reading and processing the first cache data a preset number of bytes at a time, while sequentially writing the generated second cache data in real time starting from the second memory base address, until the first cache data is covered by the second cache data.

Description

Memory management method and device, electronic equipment and storage medium
[Technical Field]
Embodiments of the application relate to the field of computer storage technology, and in particular to a memory management method and device, an electronic device, and a storage medium.
[Background Art]
Shared memory is a common memory configuration in multiprocessor systems and multi-pipeline data processing. By allocating a memory space that satisfies the maximum memory requirement of several processing processes, the cache data those processes use in common is kept in one place, which significantly improves memory access speed and saves memory space in the computer system.
When the byte size of the output cache data of a processing process is larger than that of its input cache data, the computer system partitions an additional temporary space in the shared memory for writing the output cache data to be generated, and clears it after the process finishes outputting its data. Under this general memory management strategy, the input cache data and the output cache data occupy memory space at the same time during processing, so shared memory space that could otherwise hold other data becomes unnecessary waste.
Because computer hardware is limited, this memory waste often causes the limited memory space to fill up more quickly, leading to insufficient memory space.
[Summary]
The embodiment of the application provides a memory management method and device, an electronic device, and a storage medium. By managing the allocation of the storage base addresses of different cache data in a shared memory, the method mitigates the memory waste in the existing memory management strategy caused by input and output cache data occupying limited memory space at the same time, and saves hardware memory.
In a first aspect, an embodiment of the present application provides a memory management method applied to a terminal side, where a shared memory exists on the terminal side and first cache data exists in the shared memory. The method includes:
calculating a first byte difference between a second byte size corresponding to second cache data to be generated and a first byte size corresponding to the first cache data, in response to the second byte size being greater than the first byte size;
offsetting a first memory base address forward by the first byte difference to obtain a second memory base address, where the first memory base address indicates the write start address of the first cache data, and the second memory base address indicates the write start address of the second cache data to be generated;
and sequentially reading and processing the first cache data a preset number of bytes at a time starting from the first memory base address, while sequentially writing the generated second cache data in real time starting from the second memory base address, until the first cache data is covered by the second cache data.
In the embodiment of the application, when the terminal processes the first cache data in the shared memory to generate the second cache data, it calculates the byte difference between the first byte size and the second byte size, and shifts the first memory base address forward by that difference to obtain the second memory base address. This guarantees that first cache data still to be read is never overwritten by the writing of the second cache data, while the second cache data, once fully generated, covers exactly the whole of the first cache data. Compared with separately allocating a temporary space for the second cache data of larger byte size, this way of managing base addresses yields a smaller memory footprint and eases problems such as process blocking and memory exhaustion caused by hardware memory limits under the existing memory management strategy.
Optionally, before calculating the first byte difference between the second byte size and the first byte size, the method further includes:
receiving a data processing instruction, where the data processing instruction carries a storage address of metadata in a nonvolatile storage medium, and the metadata is the data source from which the first cache data is generated;
retrieving the metadata from the nonvolatile storage medium according to the storage address, and determining the metadata byte size corresponding to the metadata;
determining the first byte size and the second byte size according to the metadata byte size;
and allocating, for the shared memory, a memory space equal to the maximum of the second byte size and the first byte size.
In the embodiment of the application, when memory space is allocated for the shared memory, the metadata byte size is queried first and used to derive the first byte size and the second byte size; the maximum of the two is then taken as the size of the shared memory space, rather than their sum as in the existing memory management method. This compresses the space allocated to the shared memory as much as possible while still guaranteeing enough shared memory for processing, optimizing the allocation of the computer's hardware memory.
Optionally, after the memory space equal to the maximum value is allocated for the shared memory according to the maximum value between the second byte size and the first byte size, the method further includes:
in response to the second byte size being greater than the first byte size, offsetting the end address of the shared memory forward by the first byte size to obtain the first memory base address;
and reading the metadata from the nonvolatile storage medium according to the storage address, writing the metadata into the shared memory, and generating the first cache data.
In the embodiment of the application, when the second byte size is greater than the first byte size, the first memory base address is obtained by shifting forward from the end address of the shared memory by the first byte size, so that the first cache data is generated in the last portion of the shared memory. The second memory base address is then obtained by the address offset, the writing of the second cache data proceeds normally, and the risk of problems such as data overflow or loss caused by the second cache data failing to be written normally is reduced.
Optionally, after the memory space equal to the maximum value is allocated for the shared memory according to the maximum value between the second byte size and the first byte size, the method further includes:
Setting a first address of the shared memory as the first memory base address and the second memory base address in response to the second byte size not being greater than the first byte size;
reading the metadata from the nonvolatile storage medium according to the storage address, writing the metadata into the shared memory, and generating the first cache data;
and sequentially reading and processing the first cache data according to the preset byte number by taking the first memory base address as a starting point, and sequentially writing the generated second cache data in real time by taking the second memory base address as a starting point until the first cache data is read.
In the embodiment of the application, when the second byte size is not greater than the first byte size, the first address of the shared memory is set as both the first memory base address and the second memory base address. Because bytes of the second cache data are written no faster than bytes of the first cache data are read, the second cache data can simply overwrite the content of the first cache data from the start. The first cache data and the second cache data thus share one shared memory space while it is guaranteed that no first cache data is overwritten before it is read, saving memory space.
Optionally, after the generated second cache data is sequentially written in real time starting from the second memory base address until the first cache data is covered by the second cache data, the method further includes:
according to the second byte size, determining M third byte sizes corresponding to M third cache data to be generated respectively, wherein the M third cache data are sets of different cache data which are output after the second cache data are read and processed, and M is any positive integer greater than 1;
subtracting the minimum of the second byte size and the maximum of the M third byte sizes from the sum of the second byte size and the M third byte sizes, to obtain a memory update value;
and allocating a memory space which is equal to the memory updating value for the shared memory.
In the embodiment of the application, for the situation in which the second cache data needs to be called repeatedly, the sum of the M third byte sizes and the second byte size is calculated first, giving the memory occupation that the existing memory management strategy would allocate for the shared memory. Then the minimum of the second byte size and the maximum of the M third byte sizes is taken as the largest amount that can be cut from that occupation. Finally, the difference of the two gives a memory update value that saves the most memory, and the shared memory size is updated based on it, achieving the greatest memory savings when the second cache data is called repeatedly.
Optionally, the third cache data corresponding to the maximum of the M third byte sizes is the Mth third cache data to be output, and the method further includes:
in response to an mth data processing operation performed on the second cache data, offsetting the second memory base address backward by the sum of the second byte size and the third byte sizes corresponding to the first m-1 output third cache data, to obtain a third memory base address, where the third memory base address indicates the write start address of the mth third cache data, and m is any positive integer smaller than M;
and sequentially reading and processing the second cache data according to the preset byte number by taking the second memory base address as a starting point, and sequentially writing the generated m-th third cache data in real time by taking the third memory base address as a starting point until the second cache data is read.
In the embodiment of the application, the third cache data corresponding to the maximum is scheduled as the Mth output, and the third memory base addresses of the first M-1 third cache data are all placed after the addresses storing the second cache data. The second cache data is therefore not overwritten by any other third cache data before the Mth data processing operation, creating the conditions for the Mth output of third cache data to overwrite the second cache data.
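As a minimal sketch of this backward offset, the following hypothetical C helper places the mth third cache data after the second cache data and all previously emitted third outputs; the names and the 0-based indexing are illustrative assumptions, not wording from the patent:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper: write base for the mth third cache data (m < M,
 * 0-based here). It sits after the second cache data and after every
 * earlier third output, so the second cache data survives until the
 * Mth (largest) output finally overwrites it. */
static uint8_t *third_base(uint8_t *second_base, size_t second_size,
                           const size_t *third_sizes, size_t m)
{
    uint8_t *base = second_base + second_size; /* backward offset start */
    for (size_t i = 0; i < m; i++)
        base += third_sizes[i];                /* skip earlier outputs  */
    return base;
}
```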
Optionally, after the generated mth third cache data is sequentially written in real time starting from the third memory base address until the reading of the second cache data is completed, the method further includes:
in response to the third byte size corresponding to the Mth third cache data being greater than the second byte size, calculating a second byte difference between the third byte size corresponding to the Mth third cache data and the second byte size;
performing address offset on the second memory base address forward according to the second byte difference value to obtain a fourth memory base address, wherein the fourth memory base address is used for indicating the writing start address of the Mth third cache data;
and sequentially reading and processing the second cache data according to the preset byte number by taking the second memory base address as a starting point, and sequentially writing the generated Mth third cache data in real time by taking the fourth memory base address as a starting point until the second cache data is completely covered by the Mth third cache data.
In the embodiment of the application, when the Mth third cache data, i.e. the third cache data with the largest byte size, is to be output, the relation between its third byte size and the second byte size is judged. When the third byte size is larger, the second memory base address is shifted forward to obtain the fourth memory base address, so the Mth third cache data being written cannot prematurely cover second cache data still to be read, and it covers the second cache data completely just as the reading of the second cache data finishes. This saves the temporary space that would otherwise be allocated separately for the third cache data of largest byte size, and minimizes memory occupation while guaranteeing normal data processing.
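The forward offset here mirrors the first/second base relation from the first aspect; a one-line sketch under the same illustrative naming (an assumption, not the patent's own code):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical fourth memory base address: the second base shifted
 * forward by the second byte difference. The caller guarantees
 * third_size_M > second_size in this branch. */
static uint8_t *fourth_base(uint8_t *second_base, size_t second_size,
                            size_t third_size_M)
{
    return second_base - (third_size_M - second_size);
}
```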
Optionally, after the generated mth third cache data is sequentially written in real time starting from the third memory base address until the reading of the second cache data is completed, the method further includes:
responding to the fact that the third byte size corresponding to the Mth third cache data is not larger than the second byte size, and taking the second memory base address as a writing starting address of the Mth third cache data;
and sequentially reading and processing the second cache data according to the preset byte number by taking the second memory base address as a starting point, and sequentially writing the generated Mth third cache data in real time by taking the second memory base address as a starting point until the second cache data is read.
In the embodiment of the application, when the third byte size of the Mth third cache data is not larger than the second byte size, the second memory base address is simultaneously used as the write-in address of the Mth third cache data, and the reading of the second cache data and the writing of the Mth third cache data are directly and simultaneously carried out on the second memory base address, so that the written Mth third cache data cannot cover the unread second cache data in advance, and meanwhile, the Mth third cache data and the second cache data share one shared memory space, thereby realizing the effect of saving the occupied memory space.
In a second aspect, an embodiment of the present application provides a memory management device applied to a terminal side, where a shared memory exists in the terminal side, and first cache data exists in the shared memory, the device includes:
a calculating unit, configured to calculate, in response to a second byte size corresponding to second cache data to be generated being greater than a first byte size corresponding to the first cache data, a first byte difference between the second byte size and the first byte size;
a base address setting unit, configured to forward address offset to a first memory base address according to the first byte difference value, to obtain a second memory base address, where the first memory base address is used to indicate a storage start address of the first cache data, and the second memory base address is used to indicate a write start address of the second cache data to be generated;
and the cache data processing unit is used for sequentially reading and processing the first cache data according to the preset byte number by taking the first memory base address as a starting point, and sequentially writing the generated second cache data in real time by taking the second memory base address as a starting point until the first cache data is covered by the second cache data.
Optionally, the apparatus further includes:
the receiving unit is used for receiving a data processing instruction, wherein the data processing instruction carries a storage address of metadata in a nonvolatile storage medium, and the metadata is a data source for generating the first cache data;
the memory occupation determining unit is used for searching the metadata from the nonvolatile storage medium according to the storage address and determining the size of metadata bytes corresponding to the metadata;
the memory occupation determining unit is further configured to determine the first byte size and the second byte size according to the metadata byte size;
and the memory allocation unit is used for allocating a memory space which is equal to the maximum value for the shared memory according to the maximum value between the second byte size and the first byte size.
Optionally, the base address setting unit is further configured to, in response to the second byte size being greater than the first byte size, forward address offset the end address of the shared memory according to the first byte size, to obtain the first memory base address;
the cache data processing unit is further configured to read the metadata from the nonvolatile storage medium according to the storage address, write the metadata into the shared memory, and generate the first cache data.
Optionally, the base address setting unit is further configured to set a first address of the shared memory to the first memory base address and the second memory base address in response to the second byte size being not greater than the first byte size;
the cache data processing unit is further configured to read the metadata from the nonvolatile storage medium according to the storage address, write the metadata into the shared memory, and generate the first cache data;
the cache data processing unit is further configured to sequentially read and process the first cache data according to a preset byte number with the first memory base address as a starting point, and sequentially write the generated second cache data in real time with the second memory base address as a starting point until the first cache data is read.
Optionally, the memory occupation determining unit is further configured to determine, according to the second byte size, M third byte sizes corresponding to M third cache data to be generated, where the M third cache data are sets of different cache data that are output after the second cache data are read and processed, and M is any positive integer greater than 1;
the calculating unit is further configured to subtract the minimum of the second byte size and the maximum of the M third byte sizes from the sum of the second byte size and the M third byte sizes, to obtain a memory update value;
the memory allocation unit is further configured to allocate a memory space equal to the memory update value for the shared memory.
Optionally, the base address setting unit is further configured to, in response to an mth data processing operation performed on the second cache data, offset the second memory base address backward by the sum of the second byte size and the third byte sizes corresponding to the first m-1 output third cache data, to obtain a third memory base address, where m is any positive integer smaller than M, and the third memory base address indicates the write start address of the mth third cache data;
the cache data processing unit is further configured to sequentially read and process the second cache data according to the preset byte number with the second memory base address as a starting point, and sequentially write the generated mth third cache data in real time with the third memory base address as a starting point until the second cache data is read.
Optionally, the calculating unit is further configured to calculate a second byte difference value between the third byte size corresponding to the mth third cache data and the second byte size in response to the third byte size corresponding to the mth third cache data being greater than the second byte size;
the base address setting unit is further configured to forward address offset to the second memory base address according to the second byte difference value, to obtain a fourth memory base address, where the fourth memory base address is used to indicate a write start address of the mth third cache data;
the cache data processing unit is further configured to sequentially read and process the second cache data according to a preset byte number with the second memory base address as a starting point, and sequentially write the generated mth third cache data in real time with the fourth memory base address as a starting point until the second cache data is completely covered by the mth third cache data.
Optionally, the base address setting unit is further configured to, in response to the third byte size corresponding to the mth third cache data being not greater than the second byte size, use the second memory base address as a write start address of the mth third cache data;
The cache data processing unit is further configured to sequentially read and process the second cache data according to a preset byte number with the second memory base address as a starting point, and sequentially write the generated mth third cache data in real time with the second memory base address as a starting point until the second cache data is read.
In a third aspect, an embodiment of the present application provides an electronic device, including at least one processor and a memory connected to the at least one processor, where the at least one processor is configured to implement the steps of the method according to any one of the first aspects when executing a computer program stored in the memory.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method according to any of the first aspects.
It should be understood that the second to fourth aspects of the embodiments of the present application are consistent with the technical solution of the first aspect, and the beneficial effects of each aspect and its corresponding possible implementations are similar, so they are not repeated.
[Description of the Drawings]
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present specification, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a memory management method according to an embodiment of the present application;
Fig. 2 compares the memory structure corresponding to a memory management method according to an embodiment of the present application with the memory structure of an existing memory management method;
Fig. 3 is a flowchart of a memory writing method according to an embodiment of the present application;
Fig. 4 is a flowchart of a shared memory allocation method according to an embodiment of the present application;
Fig. 5 is a flowchart of a metadata importing method according to an embodiment of the present application;
Fig. 6 is a flowchart of another memory management method according to an embodiment of the present application;
Fig. 7 is a schematic diagram of a memory structure corresponding to another memory management method according to an embodiment of the present application;
Fig. 8 is a flowchart of a shared memory allocation method when cache data multiplexing exists, according to an embodiment of the present application;
Fig. 9 is a schematic diagram of a memory structure corresponding to the shared memory allocation method when cache data multiplexing exists, according to an embodiment of the present application;
Fig. 10 is a flowchart of a method for outputting the mth third cache data when cache data multiplexing exists, according to an embodiment of the present application;
Fig. 11 is a flowchart of a method for outputting the Mth third cache data according to an embodiment of the present application;
Fig. 12 is a flowchart of another method for outputting the Mth third cache data according to an embodiment of the present application;
Fig. 13 is a schematic structural diagram of a memory management device according to an embodiment of the present application;
Fig. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
[Detailed Description]
For a better understanding of the technical solutions of the present specification, the following detailed description of the embodiments of the present application refers to the accompanying drawings.
It should be understood that the described embodiments are only some, but not all, of the embodiments of the present description. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present disclosure.
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the description. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
Shared memory is a common memory configuration in multiprocessor systems and multi-pipeline data processing. By allocating a memory space that satisfies the maximum memory requirement of several processing processes, the cache data those processes use in common is kept in one place, which significantly improves memory access speed and saves memory space in the computer system.
According to the inventor's research, when the byte size of the output cache data of a processing process is larger than that of its input cache data, the computer system partitions an additional temporary space in the shared memory for writing the output cache data to be generated, and clears it after the process finishes outputting its data. Under this general memory management strategy, the input cache data and the output cache data occupy memory space at the same time during processing, so shared memory space that could otherwise hold other data becomes unnecessary waste.
Because computer hardware is limited, this memory waste often causes the limited memory space to fill up more quickly, leading to insufficient memory.
In view of this, the embodiment of the application provides a memory management method that, by managing the allocation of the storage base addresses of different cache data in the shared memory, mitigates the memory waste in the existing memory management strategy caused by input and output cache data occupying memory space at the same time, saving memory space.
The technical scheme provided by the embodiment of the application is described below with reference to the accompanying drawings.
Referring to fig. 1, a memory management method provided in an embodiment of the present application is applied to a terminal side, where a shared memory exists in the terminal side and first cache data exists in the shared memory, and the method includes the following steps:
step 101: and calculating a first byte difference value between the second byte size and the first byte size in response to the second byte size corresponding to the second cache data to be generated being greater than the first byte size corresponding to the first cache data.
Step 102: and forward performing address offset on the first memory base address according to the first byte difference value to obtain a second memory base address, wherein the first memory base address is used for indicating a writing start address of the first cache data, and the second memory base address is used for indicating a writing start address of the second cache data to be generated.
Step 103: sequentially reading and processing the first cache data according to the preset byte number by taking the first memory base address as a starting point, and sequentially writing the generated second cache data in real time by taking the second memory base address as a starting point until the first cache data is covered by the second cache data.
In the embodiment of the application, to make the process of generating the second cache data from the first cache data save more shared memory space when the byte size of the second cache data is larger than that of the first cache data, the traditional strategy of placing the second memory base address directly after the written first cache data needs to be changed. One possible method is to reverse the direction of the address offset: instead of shifting the first memory base address backward to obtain the second memory base address, shift it forward, so that the second cache data covers the already written first cache data in real time as it is generated.
It should be understood that the cache data stored in the shared memory needs to be read and written frequently, so that each written data in the shared memory can be erased and covered quickly, and when a certain cache data no longer needs to be used, the cache data can be covered by the newly added cache data without any problem.
Meanwhile, the memory read-write mode used in the embodiment of the application is row-by-row (cache-line) read-write, a mode commonly used in data processing to optimize read-write performance, in which every memory read or write operates on whole rows in batch. This mode guarantees that the terminal cannot write cache data to other addresses of the shared memory before the current row of memory addresses has been fully written, and that cache data is written strictly row by row following the memory structure. It therefore lets the second cache data being written continuously cover, in row order, first cache data that has already been read, rather than clearing the first cache data only after all of it has been read.
To let the second cache data cover the already-read first cache data in real time without prematurely covering unread first cache data, the embodiment of the application calculates the first byte difference between the first byte size and the second byte size before data processing, and, based on the forward offset method, uses that difference as the offset to create a buffer zone between the first memory base address and the second memory base address.
With this arrangement, when the second cache data, whose byte size is larger than that of the first cache data, is written, it first fills the buffer zone and only then begins to cover first cache data that has already been read. Throughout the processing, the writing progress of the second cache data never overtakes the reading progress of the first cache data, and the first cache data is completely covered by the last bytes of the second cache data only once its own last bytes have been read and processed. Under this base address setting, memory occupation is lower than under the traditional memory management method, improving memory utilization.
Fig. 2 compares the memory structure corresponding to this memory management method with that of the existing memory management method. As the structure for the embodiment of the application in fig. 2a shows, the second memory base address (number (1)) is shifted forward by exactly the length of the byte difference (region (3)) relative to the first memory base address (number (2)), so the end address of the covering second cache data (number (4)) coincides exactly with the end address of the covered first cache data (number (5)), and the memory actually occupied is only the size of the second cache data. In the structure of the conventional method in fig. 2b, because the offset is backward, the second memory base address is set after the end address of the first cache data (number (6)), so no covering relation exists between the second cache data (number (7)) and the first cache data during reading. In terms of the result, the shared memory occupied in fig. 2b exceeds that in fig. 2a by the size of the first cache data (region (8)). That is, the method of fig. 2a uses memory more efficiently than the conventional method of fig. 2b.
As for how the second cache data is written into the shared memory under this management scheme so as to save memory relative to the conventional method, fig. 3 gives a specific flow of a memory writing method according to an embodiment of the present application.
For example, in the second cache data writing manner of the conventional memory management method shown in fig. 3a, when a shared memory holds first cache data with a first byte size of 32KB and second cache data with a second byte size of 64KB is to be written, the first memory base address (number (1)) of the first cache data is 0x10000. Under the conventional method the second memory base address (number (2)) is set to 0x18001; that is, a temporary space for writing is allocated for the second cache data after the end address 0x18000 of the first cache data (region (3), byte size 32 × 1024 = 32768 bytes), as shown in fig. 3a. With this writing manner the second cache data never overwrites the content of the first cache data while being written (as shown by number (4)).
In the method of the embodiment of the present application, as shown in fig. 3b, the second memory base address (number (7)) is instead set to 0x8000, i.e. a 32KB gap between the first memory base address (number (8)) and the second memory base address serves as a buffer. When the second cache data is written (as shown by number (5)), it preferentially fills this 32KB of empty memory without interfering with the read operation on the first cache data (as shown by number (6)).
After the 32KB of empty memory has been covered, as shown in fig. 3c, at least 16KB of the first cache data has by then been read. From this point on, the writing of the second cache data starts to overwrite part of the content of the first cache data (as shown by number (9)); but because those overwritten contents have already been read, and the reading progress of the first cache data stays ahead of the writing of the second cache data, the reading of the first cache data is still unaffected.
For example, when the writing of the second cache data has overwritten this 16KB of read data, a total of 24KB of the first cache data has been read; when it has overwritten 24KB of read data, 28KB has been read; when it has overwritten 28KB, 30KB has been read, and so on, until the 32KB of first cache data has been completely read and the second cache data completely written, at which point the first cache data in the shared memory is exactly fully overwritten.
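The overwrite-in-place loop implied by this walkthrough can be sketched in C as follows. This is a minimal illustration under stated assumptions: the segment holds out_size bytes, the input occupies its last in_size bytes, and a caller-supplied process_chunk callback expands each input chunk by a fixed out/in ratio; all names are hypothetical, not from the patent.

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of the forward-offset overwrite described above. Assumes
 * out_size >= in_size and a fixed expansion ratio, so the writer never
 * overtakes the reader. */
size_t process_in_place(uint8_t *shm, size_t in_size, size_t out_size,
                        size_t chunk,
                        size_t (*process_chunk)(const uint8_t *in, size_t n,
                                                uint8_t *out))
{
    /* First memory base: tail of the segment minus the input size.    */
    const uint8_t *read_ptr = shm + (out_size - in_size);
    /* Second memory base: the first base shifted forward by the byte
     * difference (out_size - in_size), i.e. the head of the segment.   */
    size_t written = 0;

    for (size_t done = 0; done < in_size; ) {
        size_t n = in_size - done < chunk ? in_size - done : chunk;
        /* The callback must consume its input sequentially before
         * emitting output, since the read and write regions overlap
         * near the end of the segment.                                 */
        written += process_chunk(read_ptr + done, n, shm + written);
        done += n;
    }
    return written; /* equals out_size when the size estimate was exact */
}
```

With in_size = 32KB, out_size = 64KB, and chunk = 16KB, the write position trails the read position exactly as in the walkthrough above and reaches the end of the segment just as the last input byte is consumed.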
It should be noted that, in the embodiment of the present application, a data processing process that processes the first cache data and outputs the second cache data relies on shared memory as its inter-process communication mechanism for information sharing, so such processes may include, but are not limited to: image processing processes, multimedia (audio or video) processing processes, database management processes (for database queries and inserts), and server computing processes (e.g. big data computation). The memory management strategy applies equally when multiple threads within the same data processing process need to call cache data in the shared memory for computation.
Building on the reduction in memory occupation achieved by the above method, the space allocation rule of the shared memory must also change correspondingly, so that the space allocated to the shared memory dedicated to storing the first and second cache data is reduced as much as possible.
Fig. 4 is a flowchart of a method for allocating a shared memory according to an embodiment of the present application. As a possible implementation, steps 104 to 107 may be further performed before step 101.
Step 104: receiving a data processing instruction, where the data processing instruction carries a storage address of metadata in a nonvolatile storage medium, and the metadata is the data source from which the first cache data is generated.
Step 105: retrieving the metadata from the nonvolatile storage medium according to the storage address, and determining the metadata byte size corresponding to the metadata.
Step 106: determining the first byte size and the second byte size according to the metadata byte size.
Step 107: allocating, for the shared memory, a memory space equal to the maximum of the second byte size and the first byte size.
In the embodiment of the application, since the shared memory storing the first and second cache data is generally dedicated to the process that handles them, once the second cache data has been completely generated and exported, the terminal clears the shared memory and makes the allocated space available to other processes. Therefore, when allocating space for the shared memory, only the minimum space required by the whole process of generating the second cache data needs to be allocated.
In the conventional memory management strategy, the shared memory allocation refers to the sum of the first byte size and the second byte size. For the memory management method adopted in the embodiment of the application this allocation clearly leaves too much slack, so a suitable method is needed to determine the most economical shared memory size once this redundancy is removed.
First, under the memory management method in which the second cache data is written at a forward address offset, when the second cache data is larger than the first cache data it will completely cover the first cache data, so the theoretically optimal shared memory size is the second byte size of the second cache data. Meanwhile, in the other generation manner mentioned later, the second byte size is not larger than the first byte size; in that case, since no additional temporary space needs to be partitioned for the second cache data, it directly overwrites the memory addresses holding the first cache data, and the optimal shared memory size is the first byte size of the first cache data.
In summary, regardless of the size relationship between the second byte size and the first byte size, the optimal size of the shared memory in the embodiment of the present application only needs to be maintained at the maximum value between the first byte size and the second byte size, instead of the sum of the first byte size and the second byte size in the conventional policy.
For example, when the first byte size is 32KB and the second byte size is 64KB, the optimal shared memory size used in embodiments of the present application is not the conventional 96KB but 64KB, equal to the second byte size. In this way the memory utilization rate is effectively improved.
The terminal's general memory allocation policy allocates, in advance, a memory space equal to the byte size of the cache data being processed, and the cache data is then written into the reserved space step by step as it is generated. Therefore, before any cache data is generated, the terminal estimates the expected byte size of the cache data at each step as needed, based on all the data processing processes that use the shared memory, the specific operations each performs, and the metadata byte size of the input metadata (i.e. the input data that must be imported from storage for processing).
In order to obtain the maximum value between the first byte size and the second byte size, it is necessary to obtain the storage address of the metadata related to the current data processing first, and count the corresponding metadata byte size.
Specifically, metadata is typically stored in a nonvolatile storage medium, such as a mechanical hard disk, solid-state drive, optical disc, or magnetic tape, and is loaded into the shared memory after being read by the terminal so that the terminal can call it quickly. Most nonvolatile storage media provide a unique storage address for the metadata that the terminal can retrieve; through this address pointing to the metadata, the terminal can conveniently read both the metadata content and the metadata byte size.
After the metadata byte size is obtained, the first byte size and the second byte size can be estimated based on the data and a specific data processing process (such as a process of generating second cache data through the first cache data), so that the maximum value is calculated according to the method, and the optimal space size required by the shared memory is obtained.
It should be appreciated that allocating memory space to shared memory is typically done through specific instructions. For example, in a Linux system, shmget() allocates a shared memory segment with a unique identifier that multiple processes can access, and shmat() maps that segment into the address space of the current process. Similar operation instructions exist on other terminals and are not described here again.
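A minimal sketch of such an allocation using the System V calls just mentioned; the sizes are illustrative assumptions, and the segment is sized to the maximum of the two byte sizes as described above:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    /* Illustrative sizes; in the method they are estimated from the
     * metadata byte size before processing starts. */
    size_t first_size  = 32 * 1024;  /* first cache data  (input)  */
    size_t second_size = 64 * 1024;  /* second cache data (output) */
    size_t shm_size = first_size > second_size ? first_size : second_size;

    /* One segment sized to the maximum of the two, not their sum. */
    int shmid = shmget(IPC_PRIVATE, shm_size, IPC_CREAT | 0600);
    if (shmid == -1) { perror("shmget"); return EXIT_FAILURE; }

    void *shm = shmat(shmid, NULL, 0);       /* map into this process */
    if (shm == (void *)-1) { perror("shmat"); return EXIT_FAILURE; }

    /* ... import metadata into the tail of the segment and process ... */

    shmdt(shm);                              /* detach */
    shmctl(shmid, IPC_RMID, NULL);           /* release the segment */
    return EXIT_SUCCESS;
}
```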
In addition, since the data processing process cannot directly process the data that is not stored in the shared memory, metadata needs to be imported into the shared memory to form the first cache data.
Fig. 5 is a flowchart of a metadata importing method according to an embodiment of the present application. After the execution of step 107, steps 108 to 109 may be further executed.
Step 108: in response to the second byte size being greater than the first byte size, offsetting the end address of the shared memory forward by the first byte size to obtain the first memory base address.
Step 109: reading the metadata from the nonvolatile storage medium according to the storage address and writing it into the shared memory to generate the first cache data.
In the embodiment of the application, reading the metadata and importing it into the shared memory yields first cache data whose content is exactly consistent with the metadata, ready for the subsequent data processing steps. Offsetting forward from the end of the shared memory by the first byte size to obtain the first memory base address and then writing the first cache data places the first cache data at the tail of the shared memory, in line with the memory management strategy set out above, which makes the subsequent covering of the first cache data by the second cache data straightforward. For example, for first cache data with a byte size of 2KB (2 × 1024 = 2048 bytes), if the end address of the shared memory allocated for it is 0x1000, the first memory base address is 0x800.
After the preparation of the first cache data is completed, a specific second cache data writing strategy can be determined according to the size relation between the first byte size and the second byte size.
Fig. 6 is a flowchart of another memory management method according to an embodiment of the present application, and as a possible implementation manner, steps 110 to 112 may be further performed after step 107.
Step 110: setting the first address of the shared memory as both the first memory base address and the second memory base address, in response to the second byte size not being greater than the first byte size.
Step 111: reading the metadata from the nonvolatile storage medium according to the storage address and writing it into the shared memory to generate the first cache data.
Step 112: sequentially reading and processing the first cache data a preset number of bytes at a time starting from the first memory base address, while sequentially writing the generated second cache data in real time starting from the second memory base address, until the first cache data has been read.
In the embodiment of the present application, owing to the complexity of data processing operations, the byte size of the second cache data is not necessarily larger than that of the first cache data. For the case where the second byte size is not larger than the first byte size, a method other than "forward address offset to obtain the second memory base address" is needed to write the second cache data.
Fig. 7 is a schematic diagram of the memory structure corresponding to this memory management method. Specifically, when the second byte size is not greater than the first byte size, there are two cases: (1) the second byte size equals the first byte size, and (2) the second byte size is smaller than the first byte size.
For case (1), as shown in fig. 7a, since the byte size of the first cache data (number (1)) equals both the size of the second cache data and the size of the entire shared memory, the second cache data can exactly and completely cover the first cache data during writing (as shown by number (2)), and no "buffer" space needs to be reserved in advance. The steps of batch-reading the first cache data and batch-writing the second cache data can be executed normally simply by setting the first memory base address and the second memory base address to the same address (memory address (5)), without any unread first cache data being covered in advance.
For example, if the first cache data and the second cache data are both 1KB and the first memory base address is 0x1000, the second memory base address is also set to 0x1000. When 16 bytes of the first cache data have been read, the 17th byte to be read is at 0x1010, while the 16th byte of the second cache data written into the shared memory is at 0x100F, so first cache data still to be read is not covered.
For case (2), as shown in fig. 7b, since the byte size of the first cache data (number (3)) is larger than that of the second cache data, the number of bytes written each time is smaller than the number of bytes of first cache data read, even as the first cache data is being overwritten; even when the last byte of the second cache data has been written, the covered extent still lags behind the total size of the first cache data (as shown by number (4)). Therefore, with the first memory base address and the second memory base address set to the same address (memory address (5)), the writing of the second cache data can never catch up with the reading of the first cache data, and there is no concern about unread data being covered in advance.
For example, if the first cache data is 2KB, the second cache data is 1KB, and the first and second memory base addresses are both 0x1000, then when 32 bytes of the first cache data have been read, the 33rd byte to be read is at 0x1020, while the 16th byte of the second cache data written into the shared memory is at 0x100F, far from the unread first cache data.
Therefore, when the second byte size is not larger than the first byte size, the first memory base address can directly serve as the second memory base address, and reading and writing can be executed synchronously. In this case the shared memory size is also set to the first byte size, so the strategy of offsetting from the end address to obtain the first memory base address remains valid as well.
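Both branches can be expressed by one base-address rule. The following C sketch is an observation drawn from the two cases above, with hypothetical names, not wording from the patent:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper unifying both cases: over a segment sized
 * max(in_size, out_size), the input always sits at the tail and the
 * output always starts at the head. When out_size <= in_size the two
 * base addresses coincide, as in steps 110 to 112 above. */
static void base_addresses(uint8_t *shm, size_t in_size, size_t out_size,
                           uint8_t **read_base, uint8_t **write_base)
{
    size_t seg = in_size > out_size ? in_size : out_size;
    *read_base  = shm + (seg - in_size);  /* forward offset from the end */
    *write_base = shm;                    /* head of the segment         */
}
```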
It should be understood that "address offset forward from the end address" and "address offset backward from the first address" in the embodiment of the present application are equivalent implementations; the difference in expression is only for ease of understanding, not a substantive distinction. In fact, since the shared memory size, the first byte size, the second byte size, and so on are all known parameters, both the forward and the backward address offset simply mean that the terminal computes, from these known parameters, which memory address to mark as the first memory base address or the second memory base address; there is no difference in execution effect.
In some embodiments there is also a multi-process serial case, in which the second cache data is generated from the first cache data and then further cache data is generated from the second cache data. For example, in image processing, a picture I may be interpolated and enlarged to obtain a picture II, and picture II then used for noise repair to obtain a picture III. In this case it is only necessary to estimate, with the above method, the expected byte sizes of all the cache data involved (including the first and second cache data) at the same time and take the maximum among them, as shown in the sketch below; then the whole flow of allocating the shared memory, shifting forward to obtain the memory base addresses, and generating the cache data can be executed as usual, improving memory utilization for multi-process serial processing as well.
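A small sketch of that sizing rule; the function name, stage count, and sizes are illustrative assumptions:

```c
#include <stddef.h>

/* Illustrative helper for the serial multi-process case: the shared
 * memory is sized to the maximum expected byte size across all stages
 * (first cache data, second cache data, and any later stages). */
static size_t serial_shm_size(const size_t *expected_sizes, size_t n)
{
    size_t max = 0;
    for (size_t i = 0; i < n; i++)
        if (expected_sizes[i] > max)
            max = expected_sizes[i];
    return max;
}
```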
In some embodiments, the same piece of cache data may also be multiplexed to generate multiple different pieces of cache data, for example using picture I to generate pictures II, III, IV, ... simultaneously. For the case where the same piece of cache data needs to be multiplexed, an additional shared memory allocation method is required.
Fig. 8 is a flow chart of a shared memory allocation method when there is cache data multiplexing according to an embodiment of the present application. Steps 113 to 115 may be further performed after step 103 or step 112.
Step 113: determine, according to the second byte size, M third byte sizes corresponding to M third cache data to be generated, where the M third cache data are a set of different cache data output after the second cache data is read and processed, and M is any positive integer greater than 1.
Step 114: subtract, from the sum of the second byte size and the M third byte sizes, the minimum of the second byte size and the maximum of the M third byte sizes, to obtain a memory update value.
Step 115: allocate for the shared memory a memory space equal in size to the memory update value.
In the embodiment of the application, after the second cache data is generated from the first cache data, the user may issue further instructions requesting the terminal to generate M third cache data from the second cache data. For example, when a color picture I is converted to grayscale to obtain a black-and-white picture II, a thumbnail III may be generated from picture I at the same time; as another example, when picture I is split or segmented, several pictures such as pictures IV and V may be generated. In these cases, if picture I is regarded as the second cache data, the generated pictures II, III, IV, V, etc. may be regarded as the third cache data. Similar situations exist in other data processing scenarios that require a shared memory, and are not described further here.
For these cases, two parameters need to be adjusted first: the upper limit on process access to the shared memory, and its allocated space. There are various ways to make these adjustments. For example, in some embodiments the process-access upper limit and the allocated space of the shared memory may be altered by adjusting the semaphore of the shared memory, a memory-mapped file, or the like. The semaphore of the shared memory is a mechanism for synchronization and mutual exclusion among processes or threads; by adjusting the value of the semaphore guarding the available buffer space, the upper limit on process access to the shared memory can be raised or lowered. A memory-mapped file is a technique for mapping a file into the virtual memory space; the size of the shared memory can be adjusted dynamically by adjusting the size of the memory-mapped file. Other shared memory configuration methods are similar in principle or technical idea to the above and are not described again here.
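The patent does not prescribe a particular API. As one illustration, on a POSIX system the memory-mapped-file approach could be sketched as follows; the object name, permission bits, and error handling are assumptions:

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>
#include <stddef.h>

/* Sketch: (re)size a POSIX shared memory object and map it. */
void *resize_shared_memory(const char *name, size_t new_size) {
    int fd = shm_open(name, O_CREAT | O_RDWR, 0666); /* open or create the object */
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, (off_t)new_size) < 0) {        /* grow or shrink the backing object */
        close(fd);
        return NULL;
    }
    void *base = mmap(NULL, new_size, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);            /* map with the updated size */
    close(fd);                                       /* the mapping outlives the descriptor */
    return (base == MAP_FAILED) ? NULL : base;
}
```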
As for the value to which the shared memory space should be adjusted, the embodiment of the present application refers mainly to the second byte size and to the M third byte sizes corresponding to the M third cache data.
Since the M third byte sizes differ with the content of the cache data, and the second cache data must not be overwritten until it is called for the last time to generate the Mth third cache data, the optimal memory management policy is to let the third cache data with the largest third byte size overwrite the second cache data.
Specifically, using the largest third cache data to overwrite the second cache data has two benefits:
(1) Since the second cache data cannot be overwritten before the Mth third cache data is generated, the remaining third cache data must, as in the conventional memory management strategy, be placed at other memory addresses that do not cover the second cache data. Using the largest third cache data to overwrite the second cache data keeps the total size of the remaining third cache data stored outside the second cache data to a minimum, reducing the memory footprint of that part of the cache data.
(2) If the second cache data is not fully covered, its leftover data portion continues to occupy shared memory space. If the largest third cache data can completely cover the second cache data, the leftover portion is 0 and no residual occupation of the shared memory occurs; even if the largest third cache data cannot completely cover the second cache data, none of the remaining M-1 third cache data could cover more of its content, so the largest third cache data is still the most effective and most space-saving choice.
Therefore, when the largest third cache data overwrites the second cache data and the remaining M-1 third cache data are stored at positions that do not cover it, the question of the optimal shared memory size reduces to the minimum of two parameters: the third byte size corresponding to the largest third cache data, and the second byte size.
The sum of the M third byte sizes plus the second byte size is exactly the memory footprint required by the existing memory management strategy. Subtracting from that sum the minimum of the largest third byte size and the second byte size removes the bytes that can be optimized away, yielding the memory update value corresponding to the optimized shared space.
Fig. 9 is a schematic diagram of the memory layout corresponding to the shared memory allocation method when cache data multiplexing exists. The second cache data (number (1)) is not overwritten while the third cache data is written during the first M-1 calls, so an exemplary approach is to store the remaining M-1 non-maximum third cache data entirely after the end address of the second cache data (as shown by number (2)). The largest third cache data (number (3)) is written directly to the memory address where the second cache data (number (1)) resides, overwriting the second cache data stored at the second memory base address (number (5)); the specific writing manner is determined by the byte size relationship and is described later. The allocated space of the shared memory (number (4)) is thus in its most economical state, which is likewise determined by the byte size relationship.
One easy-to-understand explanation of how the byte size relationship is used to determine the size of the shared memory is as follows:
when the largest third byte size is smaller than or equal to the second byte size, the largest third cache data cannot completely cover (or exactly covers) the second cache data, so the corresponding memory update value is: the second byte size plus the "remaining M-1 third byte sizes", i.e., the "sum of the second byte size and the M third byte sizes" minus the largest third byte size.
When the second byte size is the smaller one, the largest third cache data can undoubtedly cover the second cache data, and the corresponding memory update value is: the largest third byte size plus the "remaining M-1 third byte sizes", i.e., the sum of the M third byte sizes; equivalently, the "sum of the second byte size and the M third byte sizes" minus the second byte size.
It should be noted that this method of calculating the shared memory size is suitable not only for updating the size of an existing shared memory, but also for allocating the shared memory once for the whole flow (for example, computing multiple third cache data directly from the first cache data) without ever changing its size. For example, before the metadata of a 512 KB picture I is imported into the shared memory for image segmentation, suppose the total byte size of the pictures in the segmentation result is calculated to be 512 KB and the largest segmented picture is 224 KB; the optimal shared memory size is then 512 KB + 512 KB - 224 KB = 800 KB, not 512 KB + 512 KB = 1024 KB. Such a case obviously requires no space-allocation update of the shared memory, yet the allocation policy still applies.
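A minimal sketch of this size calculation, matching the worked numbers above (names are illustrative):

```c
#include <stddef.h>

/* Sketch: optimal shared memory size when one buffer of size2 bytes is
 * multiplexed to produce m third cache data with sizes size3[0..m-1]. */
size_t memory_update_value(size_t size2, const size_t *size3, size_t m) {
    size_t sum = size2, max3 = 0;
    for (size_t i = 0; i < m; i++) {
        sum += size3[i];
        if (size3[i] > max3)
            max3 = size3[i];
    }
    size_t min_part = (size2 < max3) ? size2 : max3;
    return sum - min_part; /* e.g. 512 KB + 512 KB - 224 KB = 800 KB */
}
```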
After the shared memory is reallocated, data needs to be read from and written to it. The manner in which the M third cache data are written differs slightly depending on whether a given piece is the largest third cache data.
Fig. 10 is a flowchart of a method for outputting the mth third cache data when cache data multiplexing exists, according to an embodiment of the present application. As a possible implementation, steps 116 to 117 may be further performed after step 115.
Step 116: in response to an mth data processing operation executed on the second cache data, perform address offset backward on the second memory base address according to the sum of the second byte size and the third byte sizes corresponding to the first m-1 output third cache data, to obtain a third memory base address, where the third memory base address is used to indicate the writing start address of the mth third cache data, and m is any positive integer smaller than M.
Step 117: and sequentially reading and processing the second cache data according to the preset byte number by taking the second memory base address as a starting point, and sequentially writing the generated mth third cache data in real time by taking the third memory base address as a starting point until the second cache data is read.
In the embodiment of the present application, since, as mentioned, the shared memory allocation policy is tuned to the largest third cache data, and no other third cache data may cover the second cache data before the largest one is generated, outputting the largest third cache data as the Mth third cache data is the optimal scheme for saving memory. The mth third cache data output before it can follow the conventional memory management policy and be written directly after the previous output (more precisely, after the end address of the (m-1)th third cache data); the first third cache data output is simply stored after the second cache data without covering it. Concretely, the second memory base address is offset backward by "the second byte size plus the sum of the first m-1 third byte sizes" to obtain the third memory base address, at which the mth third cache data is stored.
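A short sketch of this backward offset (names assumed; size3[] holds the third byte sizes in output order):

```c
#include <stddef.h>

/* Sketch: write start address (third memory base address) for the m-th
 * (1 <= m < M) third cache data: the second memory base address offset
 * backward by the second byte size plus the first m-1 third byte sizes. */
unsigned char *third_base(unsigned char *base2, size_t size2,
                          const size_t *size3, size_t m) {
    size_t offset = size2;
    for (size_t i = 0; i + 1 < m; i++)  /* sum of the first m-1 sizes */
        offset += size3[i];
    return base2 + offset;              /* backward = toward higher addresses */
}
```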
It should be noted that this storage layout is merely the generation mode with the lowest memory footprint, and that by default the output of cache data follows the line-by-line writing rule. For cases in the embodiment of the present application that do not conform to line-by-line writing, it is also permitted to store the first m third cache data directly at other memory addresses; judged by its execution effect, this manner achieves the same expected memory optimization, so there is no essential difference from the method above.
After the first M-1 third cache data have been output, the Mth third cache data must be output to complete the whole data processing flow.
Fig. 11 is a flowchart of a method for outputting the Mth third cache data according to an embodiment of the present application. As a possible implementation, steps 118 to 120 may be further performed after step 117.
Step 118: and in response to the third byte size corresponding to the Mth third cache data being greater than the second byte size, calculating a second byte difference between the third byte size corresponding to the Mth third cache data and the second byte size.
Step 119: and performing address offset forward on the second memory base address according to the second byte difference value to obtain a fourth memory base address, wherein the fourth memory base address is used for indicating the writing start address of the Mth third cache data.
Step 120: and sequentially reading and processing the second cache data according to the preset byte number by taking the second memory base address as a starting point, and sequentially writing the generated Mth third cache data in real time by taking the fourth memory base address as a starting point until the second cache data is completely covered by the Mth third cache data.
In the embodiment of the application, after the first M-1 third cache data have been output, the Mth third cache data can be output following the same processing idea used for reading the first cache data while writing the second cache data. When the third byte size of the Mth third cache data is larger than the second byte size, the second memory base address is offset forward by the second byte difference to obtain the fourth memory base address. Since the core idea of this method has been described above, it is not repeated here.
In some embodiments, depending on the processes or threads involved in the data processing, the M third cache data may have a strict output order, or a constraint may exist that the size of a given third cache data becomes known only once it has been computed, so the output order cannot be rearranged according to the differences among the M third byte sizes. In such cases it is still permitted to overwrite the second cache data with whichever third cache data is output Mth, applying the same procedure used when generating the second cache data from the first cache data. Although the memory saving is then not globally optimal, it remains locally and relatively optimal, and thus still saves more memory than existing memory management strategies.
In some embodiments, for reasons similar to the above, constraints exist on only part of the M third cache data, so that their output order cannot be swapped, while the other third cache data, having no such restrictions, can still be output in any order. It is then necessary to determine whether the largest third cache data is among those whose order cannot be swapped. If it is, a third cache data whose byte size is less than the maximum is used as the Mth output, and the memory management method can still be executed to reduce memory occupation to a certain extent.
In addition, when the Mth third cache data is not larger than the second cache data, an output policy different from that of the "larger than" case must be adopted.
Fig. 12 is a flowchart of another method for outputting the Mth third cache data according to an embodiment of the present application. As a possible implementation, steps 121 to 122 may be further performed after step 117.
Step 121: in response to the third byte size corresponding to the Mth third cache data being not larger than the second byte size, taking the second memory base address as the writing start address of the Mth third cache data.
Step 122: and sequentially reading and processing the second cache data according to the preset byte number by taking the second memory base address as a starting point, and sequentially writing the generated Mth third cache data in real time by taking the second memory base address as a starting point until the second cache data is read.
In the embodiment of the application, when the Mth third byte size is not greater than the second byte size, the second memory base address is itself the fourth memory base address, and the second cache data can be written over directly, following the same processing idea as when the second cache data is not larger than the first cache data. Specifically, since the second cache data is still read in batches, writing the Mth third cache data still cannot overwrite unread second cache data. Using the second memory base address as the fourth memory base address, i.e., as the writing position of the Mth third cache data, is the more stable writing mode: the Mth third cache data can always be written without covering data in advance, until the content of the second cache data has been completely read.
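Combining the two branches (steps 118-120 and steps 121-122), the choice of write start address for the Mth third cache data can be sketched as below; the actual batch-wise write loop is the same as the one sketched earlier for the first and second cache data. Names are assumptions:

```c
#include <stddef.h>

/* Sketch: fourth memory base address for the Mth third cache data,
 * chosen by the byte size relationship described above. */
unsigned char *fourth_base(unsigned char *base2, size_t size2, size_t size3M) {
    if (size3M > size2)
        return base2 - (size3M - size2); /* offset forward by the second byte difference */
    return base2;                        /* reuse the second memory base address */
}
```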
Referring to fig. 13, based on the same inventive concept, an embodiment of the present application further provides a memory management device, where the device includes:
A calculating unit 201, configured to calculate a first byte difference between the second byte size and the first byte size in response to the second byte size corresponding to the second cache data to be generated being greater than the first byte size corresponding to the first cache data;
a base address setting unit 202, configured to forward address offset to a first memory base address according to the first byte difference value, to obtain a second memory base address, where the first memory base address is used to indicate a storage start address of the first cache data, and the second memory base address is used to indicate a write start address of the second cache data to be generated;
the cache data processing unit 203 is configured to sequentially read and process the first cache data according to a preset byte number with the first memory base address as a starting point, and sequentially write the generated second cache data in real time with the second memory base address as a starting point until the first cache data is covered by the second cache data.
Optionally, the apparatus further comprises:
the receiving unit is used for receiving a data processing instruction, wherein the data processing instruction carries a storage address of metadata in a nonvolatile storage medium, and the metadata is a data source for generating first cache data;
the memory occupation determining unit is used for searching the metadata from the nonvolatile storage medium according to the storage address and determining the size of metadata bytes corresponding to the metadata;
The memory occupation determining unit is also used for determining a first byte size and a second byte size according to the size of the metadata bytes;
and the memory allocation unit is used for allocating a memory space which is equal to the maximum value for the shared memory according to the maximum value between the second byte size and the first byte size.
Optionally, the base address setting unit 202 is further configured to, in response to the second byte size being greater than the first byte size, forward address offset the end address of the shared memory according to the first byte size, to obtain a first memory base address;
the cache data processing unit 203 is further configured to read metadata from the nonvolatile storage medium according to the storage address, and write the metadata into the shared memory, so as to generate first cache data.
Optionally, the base address setting unit 202 is further configured to set the first address of the shared memory to be the first memory base address and the second memory base address in response to the second byte size being not greater than the first byte size;
the cache data processing unit 203 is further configured to read metadata from the nonvolatile storage medium according to the storage address, and write the metadata into the shared memory to generate first cache data;
the cache data processing unit 203 is further configured to sequentially read and process the first cache data according to a preset byte number with the first memory base address as a starting point, and sequentially write the generated second cache data in real time with the second memory base address as a starting point until the first cache data is read.
Optionally, the memory occupation determining unit is further configured to determine, according to the second byte size, M third byte sizes corresponding to M third cache data to be generated, where the M third cache data are a set of different cache data that are output after the second cache data are read and processed, and M is any positive integer greater than 1;
the calculating unit 201 is further configured to subtract a minimum value between the second byte size and a maximum value of the M third byte sizes from a sum of the second byte size and the M third byte sizes to obtain a memory update value;
the memory allocation unit is further configured to allocate a memory space equal to the memory update value for the shared memory.
Optionally, the base address setting unit 202 is further configured to, in response to an mth data processing operation performed on the second cache data, perform address offset on the second memory base address backward according to a sum of the second byte size and the third byte sizes corresponding to the first m-1 output third cache data, to obtain a third memory base address, where the third memory base address is used to indicate a write start address of the mth third cache data, and m is any positive integer less than M;
the cache data processing unit 203 is further configured to sequentially read and process the second cache data according to a preset byte number with the second memory base address as a starting point, and sequentially write the generated mth third cache data in real time with the third memory base address as a starting point until the second cache data is read.
Optionally, the calculating unit 201 is further configured to calculate a second byte difference value between the third byte size corresponding to the mth third cache data and the second byte size in response to the third byte size corresponding to the mth third cache data being greater than the second byte size;
the base address setting unit 202 is further configured to forward address offset to the second memory base address according to the second byte difference value, so as to obtain a fourth memory base address, where the fourth memory base address is used to indicate a write start address of the mth third cache data;
the cache data processing unit 203 is further configured to sequentially read and process the second cache data according to a preset byte number with the second memory base address as a starting point, and sequentially write the generated mth third cache data in real time with the fourth memory base address as a starting point until the second cache data is completely covered by the mth third cache data.
Optionally, the base address setting unit 202 is further configured to, in response to the third byte size corresponding to the mth third cache data being not greater than the second byte size, use the second memory base address as the write start address of the mth third cache data;
the cache data processing unit 203 is further configured to sequentially read and process the second cache data according to the preset byte number with the second memory base address as a starting point, and sequentially write the generated mth third cache data in real time with the second memory base address as a starting point until the second cache data is read.
Referring to fig. 14, based on the same inventive concept, an electronic device 300 is further provided in an embodiment of the present application, where the electronic device 300 may include at least one processor, and the at least one processor is configured to execute a computer program stored in a memory, to implement the steps of the memory management method shown in fig. 1 to 12 provided in the embodiment of the present application.
Optionally, the processor may be a central processing unit, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling program execution.
Optionally, the electronic device 300 may further include a memory coupled to the at least one processor; the memory may include ROM, RAM, and disk storage. The memory stores the data required by the processor at runtime, i.e., instructions executable by the at least one processor, and the at least one processor performs the methods shown in fig. 1-12 by executing those instructions. The number of memories is one or more.
The physical devices corresponding to the calculating unit 201, the base address setting unit 202, and the cache data processing unit 203 may be the aforementioned processors. The electronic device may be used to perform the methods provided by the embodiments shown in fig. 1-12; for the functions implemented by each functional module, reference may be made to the corresponding descriptions in those embodiments, which are not repeated.
The electronic device 300 may be an intelligent electronic device such as a smart phone or a tablet computer, and the form of the electronic device is not limited in this embodiment.
By way of example, fig. 14 illustrates the structure of the electronic device 300, taking a smartphone as an example. As shown in fig. 14, the electronic device 300 may include a processor 310, an external memory interface 320, an internal memory 321, a universal serial bus (USB) interface 330, a charge management module 340, a power management module 341, a battery 342, an antenna 1, an antenna 2, a mobile communication module 350, a wireless communication module 360, an audio module 370, a speaker 370A, a receiver 370B, a microphone 370C, an earphone interface 370D, a sensor module 380, keys 390, a motor 391, an indicator 392, a camera 393, a display screen 394, a subscriber identification module (SIM) card interface 395, and the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 300. In other embodiments of the application, the electronic device 300 may include more or fewer components than illustrated, may combine or split certain components, or may use a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 310 may include one or more processing units, such as: the processor 310 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 310 for storing instructions and data. In some embodiments, the memory in the processor 310 is a cache. It may hold instructions or data that the processor 310 has just used or uses cyclically. If the processor 310 needs that instruction or data again, it can be called directly from this memory, avoiding repeated accesses, reducing the waiting time of the processor 310, and thereby improving system efficiency.
In some embodiments, processor 310 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The charge management module 340 is configured to receive a charge input from a charger.
The power management module 341 is configured to connect the battery 342, the charge management module 340 and the processor 310.
In some embodiments, the antenna 1 of the electronic device 300 is coupled to the mobile communication module 350 and the antenna 2 is coupled to the wireless communication module 360, so that the electronic device 300 can communicate with networks and other devices via wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR, among others. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The electronic device 300 implements display functions through a GPU, a display screen 394, an application processor, and the like.
The display screen 394 is used for displaying images, videos, and the like. The display screen 394 includes a display panel.
The ISP is used to process the data fed back by camera 393.
Camera 393 is used to capture still images or video.
The digital signal processor is used to process digital signals, and can handle other digital signals in addition to digital image signals. For example, when the electronic device 300 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy, and so on.
Video codecs are used to compress or decompress digital video. The electronic device 300 may support one or more video codecs, so that it can play or record video in multiple encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4.
The external memory interface 320 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 300. The external memory card communicates with the processor 310 through an external memory interface 320 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 321 may be used to store computer executable program code comprising instructions. The internal memory 321 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 300 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 321 may include a high-speed random access memory, and may also include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 310 performs various functional applications of the electronic device 300 and data processing by executing instructions stored in the internal memory 321, and/or instructions stored in a memory provided in the processor.
The electronic device 300 may implement audio functionality through an audio module 370, a speaker 370A, a receiver 370B, a microphone 370C, an ear-headphone interface 370D, and an application processor, among others. Such as music playing, recording, etc.
The audio module 370 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal.
The speaker 370A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
The receiver 370B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
The microphone 370C, also called a "mic", is used to convert sound signals into electrical signals.
The earphone interface 370D is used to connect wired earphones, and may be the USB interface 330, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The keys 390 include a power on key, a volume key, etc.
The motor 391 may generate a vibration alert.
The indicator 392 may be an indicator light, which may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc.
The SIM card interface 395 is used to connect a SIM card. In some embodiments, the electronic device 300 employs an eSIM, i.e., an embedded SIM card, which can be embedded in the electronic device 300 and cannot be separated from it.
Embodiments of the present application also provide a computer storage medium storing computer instructions that, when executed on a computer, cause the computer to perform the methods described in fig. 1-12.
The foregoing description of the preferred embodiments is provided for illustration only and is not intended to limit the disclosure; any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the disclosure shall fall within its scope of protection.

Claims (11)

1. A memory management method, applied to a terminal side, wherein a shared memory exists in the terminal side and first cache data exists in the shared memory, the method comprising:
in response to a second byte size corresponding to second cache data to be generated being larger than a first byte size corresponding to the first cache data, calculating a first byte difference between the second byte size and the first byte size;
performing address offset on a first memory base address forward according to the first byte difference value to obtain a second memory base address, wherein the first memory base address is used for indicating a storage start address of the first cache data, and the second memory base address is used for indicating a writing start address of the second cache data to be generated;
And sequentially reading and processing the first cache data according to the preset byte number by taking the first memory base address as a starting point, and sequentially writing the generated second cache data in real time by taking the second memory base address as a starting point until the first cache data is covered by the second cache data.
2. The method of claim 1, wherein, in response to the second byte size corresponding to the second cache data to be generated being greater than the first byte size corresponding to the first cache data, prior to calculating the first byte difference between the second byte size and the first byte size, the method further comprises:
receiving a data processing instruction, wherein the data processing instruction carries a storage address of metadata in a nonvolatile storage medium, and the metadata is a data source for generating the first cache data;
according to the storage address, the metadata is searched from the nonvolatile storage medium, and the size of metadata bytes corresponding to the metadata is determined;
determining the first byte size and the second byte size according to the metadata byte size;
and according to the maximum value between the second byte size and the first byte size, memory space which is equal to the maximum value is allocated for the shared memory.
3. The method of claim 2, wherein after allocating memory space for the shared memory equal to the maximum value based on the maximum value between the second byte size and the first byte size, the method further comprises:
in response to the second byte size being larger than the first byte size, performing address offset forward on the end address of the shared memory according to the first byte size to obtain the first memory base address;
and reading the metadata from the nonvolatile storage medium according to the storage address, writing the metadata into the shared memory, and generating the first cache data.
4. The method of claim 2, wherein after allocating memory space for the shared memory equal to the maximum value based on the maximum value between the second byte size and the first byte size, the method further comprises:
setting a first address of the shared memory as the first memory base address and the second memory base address in response to the second byte size not being greater than the first byte size;
reading the metadata from the nonvolatile storage medium according to the storage address, writing the metadata into the shared memory, and generating the first cache data;
And sequentially reading and processing the first cache data according to the preset byte number by taking the first memory base address as a starting point, and sequentially writing the generated second cache data in real time by taking the second memory base address as a starting point until the first cache data is read.
5. The method according to any one of claims 1-4, wherein after the generated second cache data is sequentially written in real time starting from the second memory base address until the first cache data is covered by the second cache data, the method further comprises:
according to the second byte size, determining M third byte sizes corresponding to M third cache data to be generated respectively, wherein the M third cache data are sets of different cache data which are output after the second cache data are read and processed, and M is any positive integer greater than 1;
subtracting, from the sum of the second byte size and the M third byte sizes, the minimum of the second byte size and the maximum of the M third byte sizes, to obtain a memory update value;
and allocating a memory space which is equal to the memory updating value for the shared memory.
6. The method of claim 5, wherein the third cache data corresponding to the maximum of the M third byte sizes is the Mth output third cache data, and the method further comprises:
responding to an mth data processing operation executed on the second cache data, and performing address offset on the second memory base address backwards according to the sum of the second byte size and the third byte sizes corresponding to the first m-1 output third cache data, so as to obtain a third memory base address, wherein the third memory base address is used for indicating a writing start address of the mth third cache data, and m is any positive integer smaller than M;
and sequentially reading and processing the second cache data according to the preset byte number by taking the second memory base address as a starting point, and sequentially writing the generated m-th third cache data in real time by taking the third memory base address as a starting point until the second cache data is read.
7. The method of claim 6, wherein after the generated mth third cache data is sequentially written in real time starting from the third memory base address until the second cache data is read, the method further comprises:
in response to the third byte size corresponding to the Mth third cache data being greater than the second byte size, calculating a second byte difference between the third byte size corresponding to the Mth third cache data and the second byte size;
performing address offset on the second memory base address forward according to the second byte difference value to obtain a fourth memory base address, wherein the fourth memory base address is used for indicating the writing start address of the Mth third cache data;
and sequentially reading and processing the second cache data according to the preset byte number by taking the second memory base address as a starting point, and sequentially writing the generated Mth third cache data in real time by taking the fourth memory base address as a starting point until the second cache data is completely covered by the Mth third cache data.
8. The method of claim 6, wherein after the generated mth third cache data is sequentially written in real time starting from the third memory base address until the second cache data is read, the method further comprises:
in response to the third byte size corresponding to the Mth third cache data being not larger than the second byte size, taking the second memory base address as a writing start address of the Mth third cache data;
And sequentially reading and processing the second cache data according to the preset byte number by taking the second memory base address as a starting point, and sequentially writing the generated Mth third cache data in real time by taking the second memory base address as a starting point until the second cache data is read.
9. A memory management device, applied to a terminal side, where a shared memory exists in the terminal side and first cache data exists in the shared memory, the device comprising:
a calculating unit, configured to calculate a first byte difference between a second byte size corresponding to second cache data to be generated and a first byte size corresponding to the first cache data, in response to the second byte size being greater than the first byte size;
the address offset unit is used for performing address offset on a first memory base address forward according to the first byte difference value to obtain a second memory base address, wherein the first memory base address is used for indicating a storage start address of the first cache data, and the second memory base address is used for indicating a writing start address of the second cache data to be generated;
and the cache data processing unit is used for sequentially reading and processing the first cache data according to the preset byte number by taking the first memory base address as a starting point, and sequentially writing the generated second cache data in real time by taking the second memory base address as a starting point until the first cache data is covered by the second cache data.
10. An electronic device comprising at least one processor and a memory coupled to the at least one processor, the at least one processor being configured to implement the steps of the method of any of claims 1-8 when executing a computer program stored in the memory.
11. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method according to any of claims 1-8.
