CN111538678A - Data buffering method, device and computer readable storage medium - Google Patents

Data buffering method, device and computer readable storage medium Download PDF

Info

Publication number
CN111538678A
Authority
CN
China
Prior art keywords
data
capacity
writing
memory
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010314782.1A
Other languages
Chinese (zh)
Inventor
李宏强
季培隆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen TCL Digital Technology Co Ltd
Original Assignee
Shenzhen TCL Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen TCL Digital Technology Co Ltd
Priority to CN202010314782.1A
Publication of CN111538678A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0811Multiuser, multiprocessor or multiprocessing cache systems with multilevel cache hierarchies

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention provides a data buffering method, a data buffering device and a computer-readable storage medium, wherein the method comprises the following steps: when data to be written is written into a memory through a cache region, target load information is obtained, wherein the target load information is used for representing data processing performance, and the cache capacity of the cache region is determined according to the size of a write-in block when the write-in speed of the memory meets a preset condition; and the cache region is adjusted according to the target load information, so that data is written into the memory through the adjusted cache region. The invention dynamically adjusts the cache region according to the actual use condition, so that the actual buffering capacity of the system adapts to the actual use condition, which improves the adaptability of data buffering and meets different data writing requirements.

Description

Data buffering method, device and computer readable storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a data buffering method and apparatus, and a computer-readable storage medium.
Background
At present, data buffering methods basically use a fixed area of the memory as a buffer region and write data into the internal or external memory through this fixed buffer; such methods do not take the actual usage of the cache region into account, so they cannot adapt to different use environments, and writing efficiency is reduced.
Disclosure of Invention
The invention mainly aims to provide a data buffering method, terminal equipment and a computer readable storage medium, and aims to solve the problem of low adaptability of the current data buffering method.
In order to achieve the above object, an embodiment of the present invention provides a data buffering method, including the following steps:
when data to be written is written into a memory through a cache region, target load information is obtained, wherein the target load information is used for representing data processing performance, and the cache capacity of the cache region is determined according to the size of a write-in block when the write-in speed of the memory meets a preset condition;
and adjusting the cache region according to the target load information so as to write data into the memory through the adjusted cache region.
Optionally, the cache region comprises a first cache region and a second cache region,
correspondingly, the step of obtaining the target load information when writing the data to be written into the memory through the cache region includes:
when data to be written is written into a memory through a first cache region, target load information is obtained;
correspondingly, the step of adjusting the cache region according to the target load information to obtain an adjusted cache region, and writing data into the memory through the adjusted cache region includes:
and adjusting the second cache region according to the target load information so as to write data into the memory through the adjusted second cache region.
Optionally, when writing the data to be written into the memory through the first cache region, before the step of obtaining the target load information, the method further includes:
acquiring the data capacity of data to be written;
and when the data capacity is larger than or equal to the first cache capacity of the first cache region, writing data to be written into a memory through the first cache region.
Optionally, the step of obtaining the data capacity of the data to be written includes:
acquiring an input rate of data to be written in a first period, and determining the data capacity of the data to be written in the first period according to the input rate;
correspondingly, when the data capacity is greater than or equal to the first cache capacity of the first cache region, the step of writing the data to be written into the memory through the first cache region includes:
when the data capacity of the data to be written in the first period is larger than or equal to the first cache capacity, writing the data to be written in a memory through the first cache region in the first period;
correspondingly, the step of obtaining a second cache capacity according to the target load information, and adjusting the second cache area according to the second cache capacity, so as to write data into the memory through the adjusted second cache area includes:
and acquiring second cache capacity according to the target load information, and adjusting the second cache area according to the second cache capacity so as to write data into the memory through the adjusted second cache area in a second period.
Optionally, when the data capacity of the data to be written in the first cycle is greater than or equal to the first buffer capacity, writing the data to be written in the first buffer area to a memory in the first cycle includes:
when the data capacity of the data to be written in the first period is larger than or equal to the first cache capacity, determining a first writing frequency according to the data capacity and the first cache capacity, wherein the first writing frequency is the writing frequency of a memory in the first period;
and writing data to be written into a memory through the first cache region in the first period according to the first writing times.
Optionally, the first cache capacity is determined according to the size of the write block and a first coefficient, where the first coefficient is a positive integer;
correspondingly, the step of determining the first writing times according to the data capacity and the first cache capacity includes:
and determining a first writing time according to the data capacity, the writing block size and a first coefficient.
Optionally, the step of determining a first writing number according to the data capacity, the writing block size, and a first coefficient includes:
determining a first writing frequency according to the data capacity, the writing block size, the first coefficient and a preset formula, wherein the preset formula is as follows:
M1 × time1 = BLK × Nu1 × P1
wherein M1 × time1 is the data capacity;
BLK is the write block size;
Nu1 is the first coefficient;
P1 is the first write count.
Optionally, the target load information includes a memory occupancy,
correspondingly, the step of obtaining the second cache capacity according to the target load information includes:
and when the memory occupancy rate is greater than a preset occupancy threshold value, acquiring a second cache capacity, wherein the second cache capacity is smaller than the first cache capacity.
Optionally, when the memory occupancy rate is greater than a preset occupancy threshold, the step of obtaining the second cache capacity includes:
when the memory occupancy rate is greater than a preset occupancy threshold value, determining a second coefficient according to the first coefficient, and acquiring a second cache capacity according to the size of the write block and the second coefficient, wherein the second coefficient is smaller than the first coefficient, and the second coefficient is a positive integer; or, alternatively,
when the memory occupancy rate is greater than a preset occupancy threshold value, determining a second writing frequency according to the first writing frequency, and acquiring a second cache capacity according to the second writing frequency, the data capacity and the write block size, wherein the second writing frequency is the writing frequency of the memory in the second period, and the second writing frequency is greater than the first writing frequency.
Optionally, the target load information includes processor utilization,
correspondingly, the step of obtaining the second cache capacity according to the target load information includes:
and when the utilization rate of the processor is greater than a preset utilization threshold value, acquiring a second cache capacity, wherein the second cache capacity is greater than the first cache capacity.
Optionally, when the processor utilization is greater than a preset utilization threshold, the step of obtaining the second cache capacity includes:
when the utilization rate of the processor is greater than a preset utilization threshold value, determining a second coefficient according to the first coefficient, and acquiring a second cache capacity according to the size of the write block and the second coefficient, wherein the second coefficient is greater than the first coefficient, and the second coefficient is a positive integer; or, alternatively,
when the utilization rate of the processor is greater than a preset utilization threshold value, determining a second writing frequency according to the first writing frequency, and acquiring a second cache capacity according to the second writing frequency, the data capacity and the write block size, wherein the second writing frequency is the writing frequency of the memory in the second period, and the second writing frequency is less than the first writing frequency.
Optionally, when writing the data to be written into the memory through the cache region, before the step of obtaining the target load information, the method further includes:
acquiring the input rate of data to be written, and acquiring the highest writing rate of the memory;
and when the input rate is less than or equal to the highest writing rate, writing data to be written into the memory through the buffer area.
In addition, in order to achieve the above object, an embodiment of the present invention further provides a data buffering device, where the data buffering device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the computer program, when executed by the processor, implements the steps of the data buffering method as described above.
Furthermore, to achieve the above object, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the data buffering method as described above.
According to the embodiment of the invention, when data is written into the memory through the cache region, the target load information is obtained to determine the data processing performance of the system, and then the cache region is adjusted according to the target load information, so that the cache region is dynamically adjusted according to the actual use condition, the actual buffer capacity of the system is adapted to the actual use condition, the adaptability of data buffering is improved, different data writing requirements are met, and the buffer waste is avoided and the data writing efficiency is improved.
Drawings
Fig. 1 is a schematic structural diagram of a data buffering device according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating a data buffering method according to a first embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that, if directional indications (such as up, down, left, right, front, and back) are involved in the embodiment of the present invention, the directional indications are only used for explaining the relative positional relationship, the motion situation, and the like between the components in a certain posture, and if the certain posture is changed, the directional indications are changed accordingly.
In addition, technical solutions between various embodiments may be combined with each other, but must be realized by a person skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination should not be considered to exist, and is not within the protection scope of the present invention.
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The data buffering method related to the embodiment of the invention is mainly applied to data buffering equipment, and the data buffering equipment can be a server, a Personal Computer (PC), a notebook computer, a mobile phone and the like.
Referring to fig. 1, fig. 1 is a schematic diagram of a hardware architecture of a data buffering device according to an embodiment of the present invention. In this embodiment of the present invention, the data buffering device includes a processor 1001 (e.g., a Central Processing Unit (CPU)), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used for realizing connection communication among these components; the user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); the network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wi-Fi interface); the memory 1005 may be a Random Access Memory (RAM) or a non-volatile memory (non-volatile memory), such as a disk memory, and the memory 1005 may optionally be a storage device independent of the processor 1001. Of course, those skilled in the art will appreciate that the hardware configuration shown in FIG. 1 is not intended to limit the present invention.
With continued reference to FIG. 1, the memory 1005 of FIG. 1, which is one type of readable storage medium, may include an operating system, a network communication module, and a computer program. In fig. 1, the network communication module may be used to connect to the database for data interaction with the database; and the processor 1001 may call a computer program stored in the memory 1005 and implement the data buffering method of the embodiment of the present invention.
The embodiment of the invention provides a data buffering method.
Referring to fig. 2, fig. 2 is a flowchart illustrating a data buffering method according to a first embodiment of the present invention.
In this embodiment, the data buffering method includes the following steps:
step S10, when writing data to be written into the memory through the cache region, obtaining target load information, wherein the target load information is used for representing data processing performance, and the cache capacity of the cache region is determined according to the size of a write-in block when the write-in speed of the memory meets a preset condition;
at present, the data buffering method is basically implemented by using a part of a fixed area in a memory as a buffer area, and writing data into an internal or external memory through the fixed buffer area, for example, the method is applied to a digital television broadcasting system; the method does not consider the actual use condition of the cache region, so that different use environments cannot be adapted, and the writing efficiency is reduced. In view of the above, an embodiment of the present invention provides a data buffering method, which obtains target load information to determine data processing performance of a system when data is written into a memory through a buffer area, and then adjusts the buffer area according to the target load information, so as to dynamically adjust the buffer area according to actual use conditions, so that actual buffering capacity of the system is adapted to the actual use conditions, thereby improving adaptability of data buffering, meeting different data writing requirements, and being beneficial to avoiding buffering waste and improving data writing efficiency.
The data buffering method of this embodiment may be applied to a data buffering device. The device may be an independent physical device, such as a server, a personal computer (PC), a notebook computer, a mobile phone, and the like; or it may be an abstract functional device composed of one (e.g., a CPU) or more different physical functional modules. For convenience of description, this embodiment describes data buffering processing as being performed by the "device". In this embodiment, the device is connected to a memory, which may be a built-in memory of the device or an external memory; this memory is a Non-Volatile Memory (NVM), so the stored data does not disappear when the memory is powered off. The internal memory (sometimes also referred to as the "load") of the device is a volatile memory, i.e., Random Access Memory (RAM), which serves as a temporary storage medium for the operating system and other running programs and cannot retain data when power is off; a partial area of this internal memory is set as a cache region. When externally input data needs to be written into the memory, the data is first stored into the cache region of the internal memory, and then written into the memory through the cache region.
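As a concrete illustration of this write-through arrangement, the following is a minimal Python sketch in which a file stands in for the non-volatile memory and an in-RAM bytearray stands in for the cache region; the names, sizes and file path are assumptions for illustration only, not the patent's implementation.

```python
# Illustrative sketch (not the patent's code): accumulate incoming data in an
# in-RAM cache region and flush it to non-volatile storage (a file) once the
# cache region is full.
import os
import tempfile

CACHE_CAPACITY = 4 * 1024  # assumed 4 KB cache region for the example


def buffered_write(chunks, path, capacity=CACHE_CAPACITY):
    cache = bytearray()
    with open(path, "wb") as storage:
        for chunk in chunks:
            cache.extend(chunk)
            if len(cache) >= capacity:   # cache region full: write through it
                storage.write(cache)
                cache.clear()
        if cache:                        # flush any remainder at the end
            storage.write(cache)


target = os.path.join(tempfile.gettempdir(), "buffered_write_demo.bin")
buffered_write([b"\x00" * 1000 for _ in range(10)], target)
print(os.path.getsize(target))  # 10000 bytes written
```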
It should be noted that, in this embodiment, when the cache region is set, the cache capacity of the cache region (i.e., the size of the cache region) may be determined according to the size of the write block when the write speed of the memory meets a preset condition; specifically, it may be determined according to the write block size when the memory is at its highest write speed. The write block size may be denoted as BLK, and the cache capacity of the cache region is then BLK multiplied by a coefficient, which may be denoted as Nu, where Nu is a positive integer; for example, if BLK is 2MB and Nu is 32, the cache capacity is 64 MB. Of course, if the write block size of the memory at the highest write speed is not a fixed value, the write block size at the highest write speed may be tested over a certain period, several values obtained, and an average value calculated to represent the write block size. Through this arrangement, the data written into the memory through the cache region exactly occupies an integer number of storage blocks of the memory, which helps improve the efficiency of data writing and the utilization rate of the storage space of the memory.
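The following is a minimal sketch of this capacity calculation, assuming the block size is either fixed or averaged over a few samples; the function name and sample values are illustrative, not taken from the patent.

```python
# Illustrative sketch: cache capacity = (average) write block size * coefficient.

def cache_capacity(block_size_samples_mb, coefficient):
    """Return a cache capacity in MB as the average sampled block size * Nu."""
    if coefficient < 1:
        raise ValueError("coefficient must be a positive integer")
    avg_block = sum(block_size_samples_mb) / len(block_size_samples_mb)
    return avg_block * coefficient


# Example matching the figures in the text: BLK = 2 MB, Nu = 32 -> 64 MB.
print(cache_capacity([2], 32))             # 64.0
# If the measured block size fluctuates, an average is used instead.
print(cache_capacity([1.5, 2.0, 2.5], 32))  # 64.0
```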
In this embodiment, when the device writes data to be written into the memory through the cache region, the device detects and acquires current target load information, where the target load information is used to indicate data processing performance and may specifically include multiple types of information, such as memory occupancy, processor (e.g., CPU) utilization, processor temperature, and the like. By obtaining the target load information, the data processing performance of the device can be determined for subsequent timely adjustment of the cache region. Of course, the device may write the data to be written into the memory through the cache region by means of a particular thread, which may be referred to as a write thread.
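As one possible way to gather such load information, the sketch below uses the third-party psutil package purely as an example; the patent does not prescribe any particular API, and the dictionary keys are assumptions.

```python
# Illustrative sketch: collect "target load information" (memory occupancy and
# CPU utilization) while a write is in progress. Requires `pip install psutil`.
import psutil


def get_target_load_info():
    return {
        "memory_occupancy_percent": psutil.virtual_memory().percent,
        "cpu_utilization_percent": psutil.cpu_percent(interval=0.1),
    }


print(get_target_load_info())
```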
Step S20, adjusting the buffer according to the target load information, so as to write data into the memory through the adjusted buffer.
In this embodiment, after obtaining the target load information, the device may adjust the cache region according to the target load information; when the buffer area is adjusted, the buffer capacity of the buffer area is mainly adjusted, for example, the buffer capacity is increased, the buffer capacity is reduced, and the like, and data is written into the memory through the adjusted buffer area in the subsequent use, so that the buffer capacity of the device (or the system) is adapted to the actual use condition.
It should be noted that the cache region of the device memory may be set as one area, or as two or more relatively independent areas. For example, if the cache of the device memory is a single area with a given cache capacity, data is written into the memory through this sole cache region, the sole cache region is then adjusted according to the target load information, and the next data write is performed through the adjusted cache region; this dynamic adjustment process can be executed cyclically. For another example, the cache of the device memory may consist of two relatively independent regions, which may be respectively referred to as a first cache region and a second cache region. The two cache regions may be pre-created; data is written into the memory through the first cache region (or the second cache region), the second cache region (or the first cache region) is then adjusted according to the target load information, the next data write is performed through the adjusted second cache region (or first cache region), the first cache region is then adjusted according to the target load information of that next data write, the following data write is performed through the adjusted first cache region, and so on. Alternatively, with two relatively independent regions referred to as a first cache region and a second cache region, only the first cache region may be created in advance, and the second cache region may be created and adjusted according to the target load information; the specific adjustment process is similar to that described above and is not repeated here.
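A minimal sketch of this alternating (ping-pong) scheme follows, assuming a simple placeholder adjustment policy; the function names, thresholds and sample load values are illustrative assumptions, not the patent's implementation.

```python
# Illustrative sketch: write through one cache region while adjusting the idle
# region from the load information, then swap regions for the next period.

def adjust_capacity(current_mb, load):
    # Placeholder policy (assumption): shrink under memory pressure,
    # grow under processor pressure, as the later embodiments describe.
    if load["memory_occupancy_percent"] > 80:
        return max(2, current_mb // 2)
    if load["cpu_utilization_percent"] > 80:
        return current_mb * 2
    return current_mb


def run_periods(load_per_period, start_mb=64):
    active_mb, idle_mb = start_mb, start_mb
    for period, load in enumerate(load_per_period, start=1):
        print(f"period {period}: writing through a {active_mb} MB cache region")
        idle_mb = adjust_capacity(idle_mb, load)  # adjust the region not in use
        active_mb, idle_mb = idle_mb, active_mb   # swap regions for next period


run_periods([
    {"memory_occupancy_percent": 85, "cpu_utilization_percent": 30},
    {"memory_occupancy_percent": 40, "cpu_utilization_percent": 90},
    {"memory_occupancy_percent": 40, "cpu_utilization_percent": 30},
])
```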
In this embodiment, when data to be written is written into a memory through a cache region, target load information is obtained, where the target load information is used to represent data processing performance, and a cache capacity of the cache region is determined according to a size of a write block when a write speed of the memory meets a preset condition; and adjusting the cache region according to the target load information so as to write data into the memory through the adjusted cache region. In the embodiment, when data is written into the memory through the cache region, the target load information is acquired to determine the data processing performance of the system, and then the cache region is adjusted according to the target load information, so that the cache region is dynamically adjusted according to the actual use condition, the actual buffering capacity of the system is adapted to the actual use condition, the adaptability of data buffering is improved, different data writing requirements are met, and the buffer waste is avoided and the data writing efficiency is improved.
Based on the above first embodiment of the data buffering method, a second embodiment of the data buffering method of the present invention is provided.
In this embodiment, before the step S10, the method further includes:
step A30, acquiring the input rate of data to be written, and acquiring the highest writing rate of the memory;
in this embodiment, when acquiring data to be written, a device first acquires an input rate of the data to be written, acquires a highest writing rate of a memory, and then compares the input rate and the highest writing rate; the data to be written may be data manually input by a user, or data transmitted from a network or other devices, or other input forms.
And step A40, when the input rate is less than or equal to the highest writing rate, writing data to be written into the memory through the buffer area.
In this embodiment, when the input rate of the data to be written is less than or equal to the highest write rate of the memory, it indicates that the data writing speed (storage speed) of the memory can keep up with the input rate, that is, the input data can be stored into the memory in time, so the data to be written can be written into the memory through the cache region. When the input rate of the data to be written is greater than the highest write rate of the memory, the data writing speed (storage speed) of the memory may not keep up with the input rate, that is, the process of storing the input data into the memory lags behind; at this time, in order to avoid other risks possibly caused by this storage lag (such as data loss or thread blocking), relevant prompt information may be output to prompt the user to replace the memory with one having a higher write rate.
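A minimal sketch of this pre-check is given below; the function name, example rates and prompt wording are assumptions for illustration only.

```python
# Illustrative sketch: only route data through the cache region when the
# memory's highest write rate can keep up with the input rate.

def can_buffer(input_rate_mb_s, max_write_rate_mb_s):
    if input_rate_mb_s <= max_write_rate_mb_s:
        return True
    print("Warning: input rate exceeds the memory's highest write rate; "
          "consider replacing the memory with a faster one.")
    return False


print(can_buffer(50, 120))   # True  -> write through the cache region
print(can_buffer(200, 120))  # False -> prompt the user instead
```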
In this manner, before the data to be written is written in through the cache region, the relationship between the data input rate and the highest write rate of the memory is judged; when the input rate is less than or equal to the highest write rate, the data to be written is written into the memory through the cache region, ensuring the timeliness of data storage, and when the input rate is greater than the highest write rate, relevant prompt information is output to prompt the user to replace the memory with one having a higher write rate, so as to avoid other risks possibly caused by storage lag.
Based on the first embodiment of the data buffering method, a third embodiment of the data buffering method of the present invention is provided.
In this embodiment, the cache region includes a first cache region and a second cache region, and correspondingly, the step S10 includes:
step A11, when writing data to be written into the memory through the first cache region, acquiring target load information;
in this embodiment, the cache regions of the device memory are two relatively independent regions, which may be respectively referred to as a first cache region and a second cache region, where the two cache regions may be pre-established, for example, the block write size is 2MB, the first cache capacity of the first cache region is 64MB, and the second cache capacity of the second cache region is 64 MB; and then, data is written into the memory through the first buffer area (or the second buffer area), and the target load information is obtained during writing.
Accordingly, the step S20 includes:
step a21, adjusting the second buffer according to the target load information, so as to write data into the memory through the adjusted second buffer.
After the target load information is obtained, the second cache region can be adjusted according to the target load information, and next data writing is carried out through the adjusted second cache region (or the first cache region), so that the cache region is dynamically adjusted according to the actual use condition, the actual buffering capacity of the system is adaptive to the actual use condition, and the adaptability of data buffering is improved; of course, when the next data writing is performed through the adjusted second cache region (or the first cache region), the target load information may be continuously obtained, then the first cache region is adjusted according to the target load information, and the next data writing is performed through the adjusted first cache region, and the above steps are performed in a loop. It should be noted that, in the above example, the first buffer area and the second buffer area are both created in advance, but in practice, only the first buffer area may be created in advance, and the second buffer area is newly created and adjusted according to the target load information, and the specific adjustment process is similar to that described above, and is not described here again.
Through the mode, the cache region is dynamically adjusted according to the actual use condition, so that the actual buffering capacity of the system is adaptive to the actual use condition, the adaptability of data buffering is improved, different data writing requirements are met, and the buffer waste is avoided and the data writing efficiency is improved.
Based on the third embodiment of the data buffering method, a fourth embodiment of the data buffering method of the present invention is provided.
In this embodiment, before the step a11, the method further includes:
step A01, acquiring the data capacity of data to be written;
in this embodiment, when acquiring the data to be written, the device acquires the data capacity of the data to be written, that is, determines the data size of the data to be written.
Step a02, when the data capacity is greater than or equal to the first buffer capacity of the first buffer, writing data to be written into a memory through the first buffer.
In this embodiment, when the data capacity of the data to be written is greater than or equal to the first cache capacity of the first cache region, the device may create a write thread, and then write the data to be written into the memory through the first cache region based on the write thread; of course, the write thread may also exist all the time, and when the data capacity of the data to be written is greater than or equal to the first cache capacity of the first cache region, the data to be written is written to the memory through the first cache region based on the write thread.
In this manner, when the data capacity of the data to be written is greater than or equal to the first cache capacity of the first cache region, the data to be written is written into the memory through the first cache region, reducing the resource loss caused by many small writes; and because the first cache capacity of the first cache region is determined based on the write block size, the data written into the memory through the first cache region exactly occupies an integer number of storage blocks of the memory, which helps improve the efficiency of data writing and the utilization rate of the storage space of the memory.
Based on the fourth embodiment of the data buffering method, a fifth embodiment of the data buffering method of the present invention is provided.
In this embodiment, the step a01 includes:
step A011, acquiring an input rate of data to be written in a first period, and determining the data capacity of the data to be written in the first period according to the input rate;
in this embodiment, in order to effectively manage data writing processing, a natural time period may be divided into a plurality of continuous or discontinuous cycles, and data is written into the memory through different cache regions in each cycle; for example, the period duration of each period is 5s, 0-5s is recorded as a first period, 6-10s is recorded as a second period, 11-15s is recorded as a third period, and so on; and in the first period, data are written into the memory through the first cache region, in the second period, data are written into the memory through the second cache region, in the third period, data are written into the memory through the first cache region, and so on. Before the writing is performed in the first period, the input rate of the data to be written in the first period may be obtained, and the data capacity of the data to be written in the first period may be determined according to the input rate, for example, the input rate of the data to be written in the first period is M1, and the period duration of the first period is time1, so that the data capacity of the data to be written in the first period may be calculated to be M1 × time 1.
Correspondingly, the step a02 includes:
step A021, when the data capacity of the data to be written in the first period is larger than or equal to the first cache capacity, writing the data to be written in the memory through the first cache region in the first period;
in this embodiment, when the data capacity of the data to be written in the first period is greater than or equal to the first buffer capacity of the first buffer area, the data to be written in the memory may be written in the first period through the first buffer area.
The step A21 includes:
step a211, obtaining a second cache capacity according to the target load information, and adjusting the second cache area according to the second cache capacity, so as to write data into the memory through the adjusted second cache area in a second period.
In this embodiment, when data is written to the memory through the first buffer area in the first period, target load information of the writing process is obtained. After the target load information is obtained, the second buffer area can be adjusted according to the target load information, and then data is written into the memory through the adjusted second buffer area in the second period, so that the buffering capacity of the device (or the system) is adapted to the actual use condition. Of course, when data is written into the memory through the second cache region in the second period, the target load information of the second period may also be continuously obtained, then the first cache region is adjusted through the target load information of the second period, then the data is written into the memory through the adjusted first cache region in the third period, and so on.
By the mode, data writing is carried out through different cache regions in different periods, and unused cache regions are dynamically adjusted according to target load information, so that the actual buffering capacity of the system is adaptive to the actual use condition, the adaptability of data buffering is improved, and different data writing requirements are met.
Based on the above fifth embodiment of the data buffering method, a sixth embodiment of the data buffering method of the present invention is provided.
In this embodiment, the step a021 includes:
step A0211, when the data capacity of the data to be written in the first period is greater than or equal to the first cache capacity, determining a first writing frequency according to the data capacity and the first cache capacity, wherein the first writing frequency is the writing frequency of a memory in the first period;
in this embodiment, when the data capacity of the data to be written in the first period is greater than or equal to the first buffer capacity of the first buffer area, a first writing frequency may be determined according to the data capacity of the data to be written in the first period and the first buffer capacity, where the first writing frequency is the writing frequency of the memory in the first period; for example, the first write count P1 is 2 (2 ═ 128/64), that is, the data to be written can be written into the memory all the time after the memory is written into the memory 2 times in the first week; of course, if the data size is not divisible by the first buffer size, the result may be fetched up the first number of times.
Further, the first cache capacity of the first cache region is determined according to the write block size BLK; if the product of the write block size BLK and a first coefficient is used as the cache capacity, where the first coefficient is Nu1 and is a positive integer, the first cache capacity can be represented as BLK × Nu1, and there may be a preset formula
M1 × time1 = BLK × Nu1 × P1
wherein M1 × time1 is the data capacity; BLK is the write block size; Nu1 is the first coefficient; P1 is the first write count. The first write count can thus be determined according to the obtained data capacity M1 × time1, the write block size BLK, the first coefficient Nu1 and the preset formula. For example, if the data capacity M1 × time1 of the data to be written in the first period is 128MB, the write block size is 2MB and the first coefficient is 32, the first write count P1 is 2.
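As a concrete check of this arithmetic, the sketch below solves the preset formula for P1 and rounds up when the data capacity is not an exact multiple of the cache capacity, as the text allows; the function name and the second example value are assumptions.

```python
# Illustrative sketch: solve M1*time1 = BLK*Nu1*P1 for the write count P1.
import math


def first_write_count(data_capacity_mb, blk_mb, nu1):
    buffer_capacity_mb = blk_mb * nu1
    return math.ceil(data_capacity_mb / buffer_capacity_mb)


# Example from the text: 128 MB of data, BLK = 2 MB, Nu1 = 32 -> P1 = 2.
print(first_write_count(128, 2, 32))  # 2
# A non-divisible capacity rounds up: 130 MB -> 3 writes (assumed example).
print(first_write_count(130, 2, 32))  # 3
```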
And A0212, writing data to be written into a memory through the first cache region in the first period according to the first writing times.
In this embodiment, when the first write-in frequency is obtained, the data to be written in may be written in the memory through the first cache region in the first period according to the first write-in frequency.
Through the mode, when the data to be written is written into the memory through the first cache region, the first writing times can be obtained according to the data capacity of the data to be written and the first cache capacity, and the data is written according to the first writing times, so that the data to be written in the first period can be completely written into the memory.
Based on the above sixth embodiment of the data buffering method, a seventh embodiment of the data buffering method of the present invention is provided.
In this embodiment, the target load information includes a memory occupancy rate; correspondingly, in step a211, the step of obtaining the second cache capacity according to the target load information includes:
step A2111, when the memory occupancy rate is greater than a preset occupancy threshold, acquiring a second cache capacity, wherein the second cache capacity is smaller than the first cache capacity.
In this embodiment, the target load information includes the memory occupancy rate; that is, during the process of writing data to the memory through the first cache region, the memory occupancy rate, which may represent the data processing capability of the device (or system), is obtained. The larger the memory occupancy rate, the more memory space is in use and the higher the memory load, which negatively affects the data processing capability. When the memory occupancy rate is greater than the preset occupancy threshold, it indicates that the memory load is too high and the data processing capability is relatively poor; at this time, in order to improve the data processing capability, a second cache capacity may be obtained, where the second cache capacity is smaller than the first cache capacity, and the second cache region is then adjusted so that its cache capacity is set to the second cache capacity, thereby reducing the memory occupancy of the whole cache region and improving the data processing capability.
Specifically, when the memory occupancy rate is greater than the preset occupancy threshold, the second cache capacity may be obtained by direct calculation on the basis of the first cache capacity: since the first cache capacity is the write block size BLK multiplied by the first coefficient Nu1, and since the write block size BLK is tied to the memory, a smaller second coefficient Nu2 can be determined according to the first coefficient Nu1, and the second cache capacity is then obtained by multiplying the write block size BLK by the second coefficient Nu2; for example, if BLK is 2MB, Nu1 is 32 and the first cache capacity is 64MB, Nu2 may be determined to be 30 according to Nu1, giving a second cache capacity of 60MB (60 = 2 × 30). Alternatively, the second cache capacity can be obtained from the relation among the data capacity, the write count and the cache capacity: in M1 × time1 = BLK × Nu1 × P1, BLK × Nu1 is the first cache capacity, and with the data capacity M1 × time1 unchanged, the cache capacity decreases when the number of writes increases; therefore, a second write count P2 (the number of writes to the memory in the second period) may be obtained according to the first write count P1, and the second cache capacity may then be obtained according to the data capacity M1 × time1 and the second write count P2. For example, if the data capacity is 128MB, the first cache capacity BLK × Nu1 is 64MB and the first write count P1 is 2, the second write count P2 may be determined to be 4 according to P1, giving a second cache capacity of 32MB (of course, when obtaining the second cache capacity, it is necessary to ensure that it is a positive integer multiple of the write block size).
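The two reduction routes just described can be sketched as follows; the concrete Nu2 and P2 choices are assumptions taken from the worked example, and the function names are illustrative only.

```python
# Illustrative sketch: shrink the cache capacity when memory occupancy is high,
# either via a smaller coefficient Nu2 or via a larger write count P2.

def shrink_by_coefficient(blk_mb, nu1, nu2):
    assert 0 < nu2 < nu1, "Nu2 must be a smaller positive coefficient"
    return blk_mb * nu2


def shrink_by_write_count(data_capacity_mb, blk_mb, p2):
    raw = data_capacity_mb / p2
    # Keep the capacity a positive integer multiple of the write block size.
    return max(blk_mb, (int(raw) // blk_mb) * blk_mb)


print(shrink_by_coefficient(2, 32, 30))  # 60 MB, matching the text
print(shrink_by_write_count(128, 2, 4))  # 32 MB, matching the text
```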
Through the mode, when the utilization rate of the memory is high, the cache capacity of the cache region can be reduced, so that the memory load is reduced, and the data processing capacity is improved.
Based on the above sixth embodiment of the data buffering method, an eighth embodiment of the data buffering method of the present invention is provided.
In this embodiment, the target load information includes a processor utilization rate; correspondingly, in step a211, the step of obtaining the second cache capacity according to the target load information further includes:
step A2112, when the utilization rate of the processor is greater than a preset utilization threshold, obtaining a second cache capacity, where the second cache capacity is greater than the first cache capacity.
In this embodiment, the target load information includes the processor utilization rate; that is, during the process of writing data to the memory through the first cache region, the processor utilization rate, which may represent the data processing capability of the device (or system), is obtained. The larger the utilization rate of the processor, the higher the processor load, which negatively affects the data processing capability. When the utilization rate of the processor is greater than the preset utilization threshold, it indicates that the processor load is too high and the data processing capability is relatively poor; at this time, in order to improve the data processing capability, a second cache capacity may be obtained, where the second cache capacity is greater than the first cache capacity, and the second cache region is then adjusted so that its cache capacity is set to the second cache capacity, thereby reducing the number of writes to the memory in each period, lowering the processor's task load and improving the data processing capability.
Specifically, when the processor utilization rate is greater than the preset utilization threshold, the second cache capacity may be obtained by direct calculation on the basis of the first cache capacity: since the first cache capacity is the write block size BLK multiplied by the first coefficient Nu1, and since the write block size BLK is tied to the memory, a larger second coefficient Nu2 can be determined according to the first coefficient Nu1, and the second cache capacity is then obtained by multiplying the write block size BLK by the second coefficient Nu2; for example, if BLK is 2MB, Nu1 is 32 and the first cache capacity is 64MB, Nu2 may be determined to be 64 according to Nu1, giving a second cache capacity of 128MB (128 = 2 × 64). Alternatively, the second cache capacity can be obtained from the relation among the data capacity, the write count and the cache capacity: in M1 × time1 = BLK × Nu1 × P1, BLK × Nu1 is the first cache capacity, and with the data capacity M1 × time1 unchanged, the cache capacity increases when the number of writes decreases; therefore, a second write count P2 (the number of writes to the memory in the second period) may be obtained according to the first write count P1, and the second cache capacity may then be obtained according to the data capacity M1 × time1 and the second write count P2. For example, if the data capacity is 128MB, the first cache capacity BLK × Nu1 is 64MB and the first write count P1 is 2, the second write count P2 may be determined to be 1 according to P1, giving a second cache capacity of 128MB (of course, when obtaining the second cache capacity, it is necessary to ensure that it is a positive integer multiple of the write block size).
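The mirror-image enlargement routes can be sketched in the same way; again, the Nu2 and P2 choices are assumptions taken from the worked example, not values prescribed by the text.

```python
# Illustrative sketch: enlarge the cache capacity when processor utilization is
# high, either via a larger coefficient Nu2 or via a smaller write count P2.

def grow_by_coefficient(blk_mb, nu1, nu2):
    assert nu2 > nu1, "Nu2 must be a larger positive coefficient"
    return blk_mb * nu2


def grow_by_write_count(data_capacity_mb, blk_mb, p2):
    raw = data_capacity_mb / p2
    # Round up to a positive integer multiple of the write block size.
    return ((int(raw) + blk_mb - 1) // blk_mb) * blk_mb


print(grow_by_coefficient(2, 32, 64))  # 128 MB, matching the text
print(grow_by_write_count(128, 2, 1))  # 128 MB, matching the text
```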
Through the mode, when the utilization rate of the processor is high, the cache capacity of the cache region can be increased, so that the load of the processor is reduced, and the data processing capacity is improved.
It should be noted that, when both the memory occupancy rate and the processor utilization rate are high, the adjustment may be performed based on the priorities of the memory and the processor: if the memory has the higher priority, the second cache capacity may be obtained according to the memory occupancy rate, as in step A2111 above; if the processor has the higher priority, the second cache capacity may be obtained according to the processor utilization rate, as in step A2112 above. Of course, if the load is too high and the data processing capability is insufficient, the user may also be prompted to take timely action, such as reducing the input rate or replacing the memory with a high-speed one.
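One way to express this priority-based arbitration is sketched below; the thresholds, the memory_first flag and the returned action labels are assumptions for illustration only.

```python
# Illustrative sketch: arbitrate between the two load signals when both the
# memory occupancy and the processor utilization exceed their thresholds.

def choose_adjustment(load, memory_first=True,
                      mem_threshold=80.0, cpu_threshold=80.0):
    mem_high = load["memory_occupancy_percent"] > mem_threshold
    cpu_high = load["cpu_utilization_percent"] > cpu_threshold
    if mem_high and cpu_high:
        return "shrink_buffer" if memory_first else "grow_buffer"
    if mem_high:
        return "shrink_buffer"
    if cpu_high:
        return "grow_buffer"
    return "keep_buffer"


print(choose_adjustment({"memory_occupancy_percent": 90,
                         "cpu_utilization_percent": 95},
                        memory_first=True))  # -> "shrink_buffer"
```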
In addition, the embodiment of the invention also provides a readable storage medium.
The present invention readable storage medium has stored thereon a computer program, which when executed by a processor, performs the steps of the data buffering method as described above.
The method implemented when the computer program is executed may refer to various embodiments of the data buffering method of the present invention, and details thereof are not repeated herein.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all modifications and equivalents of the present invention, which are made by the contents of the present specification and the accompanying drawings, or directly/indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (14)

1. A data buffering method, characterized in that the data buffering method comprises the steps of:
when data to be written is written into a memory through a cache region, target load information is obtained, wherein the target load information is used for representing data processing performance, and the cache capacity of the cache region is determined according to the size of a write-in block when the write-in speed of the memory meets a preset condition;
and adjusting the cache region according to the target load information so as to write data into the memory through the adjusted cache region.
2. The data buffering method of claim 1, wherein the buffer includes a first buffer and a second buffer,
correspondingly, the step of obtaining the target load information when writing the data to be written into the memory through the cache region includes:
when data to be written is written into a memory through a first cache region, target load information is obtained;
correspondingly, the step of adjusting the cache region according to the target load information to obtain an adjusted cache region, and writing data into the memory through the adjusted cache region includes:
and adjusting the second cache region according to the target load information so as to write data into the memory through the adjusted second cache region.
3. The data buffering method according to claim 2, wherein, correspondingly, before the step of obtaining the target load information when writing the data to be written into the memory through the first buffer area, the method further comprises:
acquiring the data capacity of data to be written;
and when the data capacity is larger than or equal to the first cache capacity of the first cache region, writing data to be written into a memory through the first cache region.
4. The data buffering method of claim 3, wherein the step of obtaining the data capacity of the data to be written comprises:
acquiring an input rate of data to be written in a first period, and determining the data capacity of the data to be written in the first period according to the input rate;
correspondingly, when the data capacity is greater than or equal to the first cache capacity of the first cache region, the step of writing the data to be written into the memory through the first cache region includes:
when the data capacity of the data to be written in the first period is larger than or equal to the first cache capacity, writing the data to be written in a memory through the first cache region in the first period;
correspondingly, the step of obtaining a second cache capacity according to the target load information, and adjusting the second cache area according to the second cache capacity, so as to write data into the memory through the adjusted second cache area includes:
and acquiring second cache capacity according to the target load information, and adjusting the second cache area according to the second cache capacity so as to write data into the memory through the adjusted second cache area in a second period.
5. The data buffering method of claim 4, wherein when the data capacity of the data to be written in the first cycle is greater than or equal to the first buffer capacity, the step of writing the data to be written in the first cycle to the memory through the first buffer area comprises:
when the data capacity of the data to be written in the first period is larger than or equal to the first cache capacity, determining a first writing frequency according to the data capacity and the first cache capacity, wherein the first writing frequency is the writing frequency of a memory in the first period;
and writing data to be written into a memory through the first cache region in the first period according to the first writing times.
6. The data buffering method of claim 5, wherein the first buffer capacity is determined based on the write block size and a first coefficient, the first coefficient being a positive integer;
correspondingly, the step of determining the first writing times according to the data capacity and the first cache capacity includes:
and determining a first writing time according to the data capacity, the writing block size and a first coefficient.
7. The data buffering method of claim 6, wherein the step of determining the first number of writes based on the data capacity, the write block size, and a first coefficient comprises:
determining a first writing frequency according to the data capacity, the writing block size, the first coefficient and a preset formula, wherein the preset formula is as follows:
M1 × time1 = BLK × Nu1 × P1
wherein M1 × time1 is the data capacity;
BLK is the write block size;
Nu1 is the first coefficient;
P1 is the first write count.
8. The data buffering method of claim 6, wherein the target load information includes memory occupancy,
correspondingly, the step of obtaining the second cache capacity according to the target load information includes:
and when the memory occupancy rate is greater than a preset occupancy threshold value, acquiring a second cache capacity, wherein the second cache capacity is smaller than the first cache capacity.
9. The data buffering method of claim 8, wherein the step of obtaining the second buffer capacity when the memory occupancy is greater than a preset occupancy threshold comprises:
when the memory occupancy rate is greater than a preset occupancy threshold value, determining a second coefficient according to the first coefficient, and acquiring a second cache capacity according to the size of the write block and the second coefficient, wherein the second coefficient is smaller than the first coefficient, and the second coefficient is a positive integer; or, alternatively,
when the memory occupancy rate is greater than a preset occupancy threshold value, determining a second writing frequency according to the first writing frequency, and acquiring a second cache capacity according to the second writing frequency, the data capacity and the write block size, wherein the second writing frequency is the writing frequency of the memory in the second period, and the second writing frequency is greater than the first writing frequency.
10. The data buffering method of claim 6, wherein the target load information includes processor utilization,
correspondingly, the step of obtaining the second cache capacity according to the target load information includes:
and when the utilization rate of the processor is greater than a preset utilization threshold value, acquiring a second cache capacity, wherein the second cache capacity is greater than the first cache capacity.
11. The data buffering method of claim 10, wherein the step of acquiring the second cache capacity when the processor utilization rate is greater than the preset utilization threshold comprises:
when the processor utilization rate is greater than the preset utilization threshold, determining a second coefficient according to the first coefficient, and acquiring the second cache capacity according to the write block size and the second coefficient, wherein the second coefficient is greater than the first coefficient and the second coefficient is a positive integer; or
when the processor utilization rate is greater than the preset utilization threshold, determining a second write count according to the first write count, and acquiring the second cache capacity according to the second write count, the data capacity and the write block size, wherein the second write count is the number of writes to the memory in the second period, and the second write count is smaller than the first write count.
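The processor-utilization branch in claims 10 and 11 mirrors claim 9 with the inequalities reversed: a busier processor gets a larger cache region, so the memory is flushed fewer times per period. A minimal sketch, with the step size of one again an assumption:

```python
def grow_coefficient(cpu_utilization: float, utilization_threshold: float, nu1: int) -> int:
    """Return the second coefficient when processor utilization is high."""
    if cpu_utilization > utilization_threshold:
        return nu1 + 1    # second coefficient: positive integer greater than Nu1
    return nu1            # otherwise keep the first coefficient

# A larger coefficient enlarges the cache region (BLK * Nu2) and lowers the write count,
# which reduces how often the processor is interrupted to flush the cache to the memory.
```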
12. The data buffering method of claim 1, wherein, before the step of obtaining the target load information when writing the data to be written into the memory through the cache region, the method further comprises:
acquiring an input rate of the data to be written, and acquiring a highest write rate of the memory; and
when the input rate is less than or equal to the highest write rate, writing the data to be written into the memory through the cache region.
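Claim 12 adds a precondition: buffered writing is only engaged when the memory can drain data at least as fast as it arrives, otherwise the cache region would back up. A minimal sketch; the rate values and names are illustrative assumptions:

```python
def should_use_buffer(input_rate_bps: float, max_write_rate_bps: float) -> bool:
    """True when the memory's highest write rate can keep up with the incoming data."""
    return input_rate_bps <= max_write_rate_bps

# Example: a 40 Mbit/s stream written to a memory rated at 80 Mbit/s passes the check,
# so the data to be written is routed to the memory through the cache region.
if should_use_buffer(input_rate_bps=40e6, max_write_rate_bps=80e6):
    pass  # write through the cache region and start monitoring the target load information
```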
13. A data buffering device, characterized in that the data buffering device comprises a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the data buffering method according to any one of claims 1 to 12.
14. A computer-readable storage medium, characterized in that a computer program is stored thereon, wherein the computer program, when executed by a processor, implements the steps of the data buffering method according to any one of claims 1 to 12.
CN202010314782.1A 2020-04-20 2020-04-20 Data buffering method, device and computer readable storage medium Pending CN111538678A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010314782.1A CN111538678A (en) 2020-04-20 2020-04-20 Data buffering method, device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010314782.1A CN111538678A (en) 2020-04-20 2020-04-20 Data buffering method, device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN111538678A true CN111538678A (en) 2020-08-14

Family

ID=71978857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010314782.1A Pending CN111538678A (en) 2020-04-20 2020-04-20 Data buffering method, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111538678A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112711387A (en) * 2021-01-21 2021-04-27 维沃移动通信有限公司 Method and device for adjusting capacity of buffer area, electronic equipment and readable storage medium
CN112711387B (en) * 2021-01-21 2023-06-09 维沃移动通信有限公司 Buffer capacity adjustment method and device, electronic equipment and readable storage medium
CN113630657A (en) * 2021-08-03 2021-11-09 广东九联科技股份有限公司 Video playing optimization method and system based on hls protocol
CN113805814A (en) * 2021-09-22 2021-12-17 深圳宏芯宇电子股份有限公司 Cache management method and device, storage equipment and readable storage medium
CN113805812A (en) * 2021-09-22 2021-12-17 深圳宏芯宇电子股份有限公司 Cache management method, device, equipment and storage medium
CN113805814B (en) * 2021-09-22 2023-08-15 深圳宏芯宇电子股份有限公司 Cache management method, device, storage equipment and readable storage medium
CN113805812B (en) * 2021-09-22 2024-03-05 深圳宏芯宇电子股份有限公司 Cache management method, device, equipment and storage medium
CN117573043A (en) * 2024-01-17 2024-02-20 济南浪潮数据技术有限公司 Transmission method, device, system, equipment and medium for distributed storage data

Similar Documents

Publication Publication Date Title
CN111538678A (en) Data buffering method, device and computer readable storage medium
CN110187753B (en) Application program control method, device, terminal and computer readable storage medium
CN105100876A (en) Streaming media playing method and device
EP3137965A1 (en) Cpu/gpu dcvs co-optimization for reducing power consumption in graphics frame processing
KR20070086545A (en) Method and apparatus for adjusting a duty cycle to save power in a computing system
CN110703944B (en) Touch data processing method and device, terminal and storage medium
CN112711387B (en) Buffer capacity adjustment method and device, electronic equipment and readable storage medium
WO2024060682A9 (en) Memory management method and apparatus, memory manager, device and storage medium
US20170212581A1 (en) Systems and methods for providing power efficiency via memory latency control
CN112083988A (en) Screen refresh rate control method, mobile terminal and computer readable storage medium
WO2021077375A1 (en) Communication frequency adjustment method and apparatus, and electronic device and storage medium
CN111491169A (en) Digital image compression method, device, equipment and medium
CN110998524B (en) Method for processing configuration file, processing unit, touch chip, device and medium
CN110795323A (en) Load statistical method, device, storage medium and electronic equipment
CN111767136B (en) Process management method, terminal and device with storage function
CN112715040B (en) Method for reducing power consumption, terminal equipment and storage medium
WO2020119029A1 (en) Distributed task scheduling method and system, and storage medium
CN114116231A (en) Data loading method and device, computer equipment and storage medium
CN116955271A (en) Method and device for storing data copy, electronic equipment and storage medium
CN112328351A (en) Animation display method, animation display device and terminal equipment
CN108736082B (en) Method, device, equipment and storage medium for improving endurance capacity of terminal battery
CN115712337A (en) Scheduling method and device of processor, electronic equipment and storage medium
CN113521753A (en) System resource adjusting method, device, server and storage medium
JP2018505489A (en) Dynamic memory utilization in system on chip
WO2020103027A1 (en) Network power consumption adjustment method, network power consumption adjustment device, and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination