CN114048152A - Data caching method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114048152A
CN114048152A
Authority
CN
China
Prior art keywords
data
memory
space
cache space
annular
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111400582.9A
Other languages
Chinese (zh)
Inventor
庄少华
陈文明
江常杯
庄白云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHENZHEN HUABAO ELECTRONIC TECHNOLOGY CO LTD
Original Assignee
SHENZHEN HUABAO ELECTRONIC TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHENZHEN HUABAO ELECTRONIC TECHNOLOGY CO LTD filed Critical SHENZHEN HUABAO ELECTRONIC TECHNOLOGY CO LTD
Priority to CN202111400582.9A
Publication of CN114048152A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0871Allocation or management of cache space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/957Browsing optimisation, e.g. caching or content distillation
    • G06F16/9574Browsing optimisation, e.g. caching or content distillation of access to content, e.g. by caching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10Providing a specific technical effect
    • G06F2212/1016Performance improvement
    • G06F2212/1024Latency reduction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10Providing a specific technical effect
    • G06F2212/1041Resource optimization
    • G06F2212/1044Space efficiency improvement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/15Use in a specific computing environment
    • G06F2212/154Networked environment

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a data caching method, a data caching device, electronic equipment and a storage medium, wherein the method comprises the following steps: initializing an annular cache space according to a first memory threshold; and dynamically adjusting the memory space of the annular cache space according to the data volume of the stored data and/or the read data. By adjusting the occupied memory space according to the data volume during storage and reading, the embodiment of the invention can improve the utilization efficiency of the system memory, realize fast caching of data, and reduce data transmission delay.

Description

Data caching method and device, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of computer application, in particular to a data caching method and device, electronic equipment and a storage medium.
Background
With the development of internet technology, terminal devices such as automobiles and televisions are becoming more intelligent year by year. The automobile has also become an important carrier of multimedia services, and media content such as video and audio enriches the experience of using it. However, as media content increases, so does the network burden on the automobile. In particular, when multiple pieces of media content are transmitted at once, the data volume is so large that it easily causes delays or congestion in the automobile's network.
To solve the above problems, the automobile needs to cache media content in advance to prevent network congestion and data loss. In the prior art, media content caching is mainly implemented with threads: separate threads are set up to receive and send data for each piece of media content. However, this approach is only suitable for communication scenarios with small data traffic, and the cache space applied for during data receiving and sending is relatively fixed, which greatly wastes memory resources.
Disclosure of Invention
The invention provides a data caching method, a data caching device, electronic equipment and a storage medium, which are used for dynamically caching data, improving the utilization efficiency of memory resources and reducing transmission delay caused by data caching.
In a first aspect, an embodiment of the present invention provides a data caching method, where the method includes:
initializing an annular cache space according to a first memory threshold;
and dynamically adjusting the memory space of the annular cache space according to the data volume of the stored data and/or the read data.
In a second aspect, an embodiment of the present invention further provides a data caching apparatus, where the apparatus includes:
the initialization module is used for initializing the annular cache space according to a first memory threshold value;
and the dynamic adjustment module is used for dynamically adjusting the memory space of the annular cache space according to the data volume of the stored data and/or the read data.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the data caching method as in any one of the embodiments of the invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the data caching method according to any one of the embodiments of the present invention.
According to the embodiment of the invention, the annular cache space with the space size corresponding to the first memory threshold value is generated, and the memory space of the annular cache space is dynamically adjusted according to the data volume in the data storage and data reading processes, so that the dynamic use of the cache space is realized, the utilization efficiency of memory resources can be improved, the processing time of data cache is reduced, and the data transmission delay can be reduced.
Drawings
Fig. 1 is a flowchart of a data caching method according to an embodiment of the present invention;
fig. 2 is a flowchart of another data caching method according to a second embodiment of the present invention;
FIG. 3 is a diagram illustrating an exemplary ring buffer space according to a second embodiment of the present invention;
FIG. 4 is a diagram illustrating an exemplary adjustment of a ring buffer space according to a second embodiment of the present invention;
fig. 5 is a flowchart of another data caching method according to a third embodiment of the present invention;
fig. 6 is a schematic structural diagram of a data caching apparatus according to a fourth embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be noted that, for convenience of description, only a part of the structures related to the present invention, not all of the structures, are shown in the drawings, and furthermore, embodiments of the present invention and features of the embodiments may be combined with each other without conflict.
Example one
Fig. 1 is a flowchart of a data caching method according to an embodiment of the present invention. The method is applicable to caching service data in an automobile and may be executed by a data caching device, which may be implemented in hardware and/or software. Referring to fig. 1, the method according to the embodiment of the present invention specifically includes the following steps:
step 110, initializing a ring cache space according to a first memory threshold.
The first memory threshold may be a preset memory-space occupation value, which may be determined by the average length of the service data. The annular cache space may be a cache space whose head and tail are joined end to end in its logical structure, and the service data may be cached in the annular cache space.
In the embodiment of the present invention, when caching the service data, a memory space may be applied for as an annular cache space, where the size of the memory space occupied by the annular cache space may correspond to the first memory threshold. It is understood that the process of initializing the circular buffer space may include applying for a buffer address for the circular buffer space and generating its read and write pointers.
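The initialization process described above can be sketched in Python. This is an illustrative sketch only, not the patent's implementation; the class and attribute names (`RingCache`, `remaining`, and so on) are hypothetical:

```python
class RingCache:
    """Illustrative sketch of initializing an annular (ring) cache space."""

    def __init__(self, first_memory_threshold: int):
        # Apply for a memory space sized by the first memory threshold.
        self.buf = bytearray(first_memory_threshold)
        # The read and write pointers both start at the start address.
        self.read_ptr = 0
        self.write_ptr = 0
        # The remaining memory size is initialized to the first memory
        # threshold, indicating that no data has been written yet.
        self.remaining = first_memory_threshold


rc = RingCache(1024)
```

After initialization, both pointers coincide at offset 0 and the whole space is counted as remaining memory.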
And step 120, dynamically adjusting the memory space of the annular cache space according to the data quantity of the stored data and/or the read data.
Storing data may be the process of caching data into the annular buffer space, and the stored data may include video data, audio data, text data, and the like. Reading data may be the process of reading cached data out of the annular buffer space. The data amount may be information reflecting the data size of the stored or read data; that is, it may refer to the size of the annular buffer space that the stored or read data needs to occupy.
Specifically, when the annular cache space is used for reading data or storing data, the data volume of the read data or the stored data can be extracted, and the memory space of the annular cache space is increased or the memory space occupied by the annular cache space is reduced according to the data volume. It is understood that the ring buffer space may be a continuous memory space or a discontinuous memory space node, and the process of dynamically adjusting the memory space may be to lengthen or shorten the continuous memory space or may be to increase or decrease the discontinuous memory space node.
According to the embodiment of the invention, by generating an annular cache space whose size corresponds to the first memory threshold and dynamically adjusting the memory space of the annular cache space according to the data volume during data storage and data reading, the dynamic use of the cache space is realized, which improves the utilization efficiency of memory resources, reduces the processing time of data caching, and reduces the data transmission delay.
Example two
Fig. 2 is a flowchart of another data caching method according to a second embodiment of the present invention, which is further embodied on the basis of the above embodiment of the present invention. Referring to fig. 2, the method according to the embodiment of the present invention specifically includes the following steps:
step 210, applying for a memory space as an annular cache space according to a first memory threshold.
Specifically, a memory space may be applied for as a ring buffer space whose size is the first memory threshold. It is understood that the memory space may be a continuous memory space or a plurality of discontinuous memory space nodes.
Step 220, initializing the read pointer and the write pointer according to the start address of the memory space.
The start address may be a logical address or a physical address at a start position of the memory space, the read pointer may be identification information for marking a start position of data reading in the memory space, and the write pointer may be identification information for marking a start position of data writing in the memory space.
In the embodiment of the present invention, a start address of the memory space may be obtained, and a write pointer and a read pointer may be set for the circular cache space, where the write pointer and the read pointer may be set to point to the start address.
Step 230, initialize the remaining memory size of the ring cache space to the first memory threshold.
The remaining memory size may be the size of the unoccupied memory space of the ring cache space.
Specifically, the remaining memory size of the ring cache space may be set to a first memory threshold, which may indicate that no data has been written into the ring cache space.
In an exemplary implementation, fig. 3 is an exemplary diagram of a circular buffer space according to the second embodiment of the present invention. Referring to fig. 3, the circular buffer space may be a memory space connected end to end, in which a read pointer and a write pointer are disposed; service data may be stored into the circular buffer space in the write direction according to the write pointer, and service data already stored there may be read out of the circular buffer space according to the read pointer. The black part between the read pointer and the write pointer in the annular buffer space represents the area into which service data has been written, and the white part between the write pointer and the read pointer represents the blank area, whose size is the remaining memory size. Referring to fig. 3, the ring buffer area can be in one of two states, a normal state and a rollback state. In the normal state, the write pointer is writing data and the read pointer is behind the write pointer, indicating that there is still vacant memory space after the write pointer. In the rollback state, the write pointer has wrapped around and precedes the read pointer, indicating that the free space lies between the write pointer and the read pointer. The normal state may refer to the write pointer moving forward in the write direction, and the rollback state may refer to the write pointer having rolled back (wrapped) to the front of the memory space.
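The two pointer states above can be illustrated with small helpers. This is a hypothetical sketch; the function names and the `empty` flag are assumptions, not from the patent:

```python
def ring_state(read_ptr: int, write_ptr: int) -> str:
    # Normal state: the read pointer is at or behind the write pointer,
    # so the free space lies after the write pointer. Rollback state:
    # the write pointer has wrapped to the front and precedes the read
    # pointer, so the free space lies between the two pointers.
    return "normal" if read_ptr <= write_ptr else "rollback"


def free_space(read_ptr: int, write_ptr: int, capacity: int, empty: bool) -> int:
    # Remaining memory size of the ring, valid in both states. The
    # `empty` flag disambiguates equal pointers (empty vs. full ring).
    if empty:
        return capacity
    used = (write_ptr - read_ptr) % capacity
    return capacity - used if used else 0
```

For example, in the rollback state with capacity 10, read pointer at 8 and write pointer at 2, four bytes are in use and six bytes remain.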
And step 240, creating a data receiving thread to acquire the service data, and determining the data volume of the service data.
The data receiving thread may be the smallest execution unit for receiving data, and may have separate system resources. The number of the data receiving threads can be one or more, each data receiving thread can own respective system resources, and different data receiving threads can execute in parallel to receive different service data.
In the embodiment of the present invention, a data receiving thread may be created to receive the service data, and after the service data is obtained by using the data receiving thread, the data volume of the received service data may be determined to determine the memory space occupied by the received service data in the ring cache space.
Step 250, determining whether the difference between the data size and the remaining memory size of the memory space satisfies a storage increase threshold.
The memory increase threshold may be a critical value that the annular cache space needs to be increased, and when the memory increase threshold is satisfied, the memory space of the annular cache space needs to be increased.
In the embodiment of the present invention, the data amount may be compared with the remaining memory size of the memory space to determine the difference between the remaining memory size and the data amount. If the difference is greater than or equal to the storage increase threshold, the remaining memory of the annular cache space can store the service data without increasing the memory space, and the storage increase threshold is satisfied; if the difference is smaller than the storage increase threshold, the memory space of the annular cache space cannot safely accommodate the service data, and the storage increase threshold is not satisfied.
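One reading of this check can be sketched as follows; the function name and the exact inequality are assumptions drawn from the description above (the write fits without growing only when the remaining memory exceeds the data amount by at least the rollback space):

```python
def needs_grow(data_len: int, remaining: int, storage_increase_threshold: int) -> bool:
    """Return True when the annular cache space must grow before the write.

    Hypothetical check: after writing `data_len` bytes, at least
    `storage_increase_threshold` bytes (the rollback space) must remain.
    """
    return remaining - data_len < storage_increase_threshold


# 120 bytes remain and 100 are to be written: only 20 would be left,
# less than a 64-byte rollback space, so the space must grow first.
assert needs_grow(100, 120, 64)
assert not needs_grow(100, 200, 64)
```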
Further, on the basis of the above embodiment of the present invention, the storage increase threshold is determined according to the rollback space of the rollback operation of the circular cache space.
The rollback operation may be an operation of rolling the data storage state of the circular cache space back to a state from an earlier point in time, and may affect the data storage state of the circular cache space. The rollback space may be the minimum memory space required to perform the rollback operation safely.
In the embodiment of the present invention, the storage increase threshold may be set to a space size of a rollback space required by the circular cache space when performing a rollback operation.
And step 260, if yes, storing the service data into the memory space according to the write-in pointer of the annular cache space.
Specifically, when the memory space of the annular cache space satisfies the storage increase threshold, the service data may be written into the memory space in sequence, using the position indicated by the write pointer of the annular cache space as the starting position, so as to cache the service data.
And 270, if not, increasing the memory space of the annular cache space and writing the service data into the annular cache space.
In the embodiment of the present invention, when the memory space of the annular cache space does not satisfy the storage increase threshold, the memory space of the annular cache space may be increased, and the service data may be written into the annular cache space after the memory space is increased. It will be appreciated that the addresses of the increased memory space may be contiguous or non-contiguous with the addresses of the circular cache space.
According to the embodiment of the invention, a memory space is applied for as an annular cache space according to a first memory threshold; a read pointer and a write pointer are initialized using the start address of the annular cache space, and the remaining memory size is set to the first memory threshold. A data receiving thread is used to receive service data and obtain its data amount, and it is determined whether the difference between the data amount and the remaining memory size satisfies the storage increase threshold: if so, the service data is stored directly into the annular cache space; if not, the service data is stored into the annular cache space after its memory space has been increased. This realizes dynamic use of the cache space, which can improve the utilization efficiency of memory resources, shorten the processing time of data caching, and reduce the data transmission delay.
Further, on the basis of the above embodiment of the present invention, writing the service data into the annular cache space after increasing the memory space of the annular cache space includes:
applying for a memory space as a new annular cache space according to the sum of the first memory threshold and the storage increase threshold; migrating the stored data in the original annular cache space to the new annular cache space; and writing the service data into the new annular cache space and updating the remaining memory size.
In the embodiment of the present invention, a sum of the first memory threshold and the storage increase threshold may be determined, a memory space may be reapplied as a new annular cache space according to the sum, the stored data in the original annular cache space may be copied to the new annular cache space, the service data may be written into the new annular cache space, and the size of the remaining memory may be updated to the size of the remaining memory space of the new annular cache space. It can be understood that after the storage data is copied to the new circular buffer space, the original circular buffer space can be released.
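The reallocation step can be sketched as follows; this is an illustrative Python version with hypothetical names, not the patent's implementation:

```python
def grow_ring(old_buf: bytearray, read_ptr: int, stored: int,
              first_threshold: int, increase_threshold: int):
    """Reapply for a larger annular cache space and migrate the stored data.

    The new space is sized by the sum of the first memory threshold and
    the storage increase threshold; the stored bytes are copied in order
    starting from the read pointer (unwrapping any wrap-around), and the
    pointers are reset with the read pointer at the start.
    """
    capacity = len(old_buf)
    new_buf = bytearray(first_threshold + increase_threshold)
    for i in range(stored):
        new_buf[i] = old_buf[(read_ptr + i) % capacity]
    remaining = len(new_buf) - stored
    # Dropping the last reference to old_buf corresponds to releasing
    # the original annular cache space.
    return new_buf, 0, stored, remaining  # buf, read_ptr, write_ptr, remaining
```

Note that stored data which wrapped around the end of the old space becomes contiguous at the start of the new space.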
In an exemplary implementation, fig. 4 is an exemplary diagram of adjusting a circular buffer space according to the second embodiment of the present invention. Referring to fig. 4, when writing service data into the circular buffer space or rolling back data would cause the write pointer and the read pointer to intersect, that is, when the data amount of the service data and the remaining memory size of the memory space do not satisfy the storage increase threshold, the memory space of the circular buffer space needs to be adjusted. A circular buffer space with a larger memory space can be newly created, for example, the right circular buffer space in fig. 4; the stored data in the original circular buffer space can be written into the black portion of the new circular buffer space, and the service data to be written can be written into the slanted portion of the new circular buffer space.
EXAMPLE III
Fig. 5 is a flowchart of another data caching method provided in the third embodiment of the present invention, which is further embodied on the basis of the above embodiments of the present invention. Referring to fig. 5, the method provided in the embodiment of the present invention specifically includes the following steps:
step 310, initializing a ring cache space according to a first memory threshold.
And step 320, creating a data reading thread to read the service data in the annular cache space, and determining the data volume of the service data.
The data reading threads can be the minimum execution unit for reading data in the annular cache space, and can have independent system resources, the number of the data reading threads can be one or more, the system resources among the data reading threads can be mutually independent, and different data reading threads can be executed in parallel.
In the embodiment of the present invention, a data reading thread may be created, and the created thread is used to read the service data stored in the ring cache space, for example, the service data may be sequentially read according to a read pointer of the ring cache space. It can be understood that the data amount of the read service data can be counted while the service data is read.
And step 330, reducing the memory space of the annular cache space when the data size is determined to meet the storage reduction threshold.
Specifically, the data amount of the read service data may be compared with a storage reduction threshold, and when the storage reduction threshold is met, for example, when the data amount is greater than or equal to the storage reduction threshold, the memory space of the ring cache space may be reduced.
In the embodiment of the present invention, the manner of reducing the cache space may include releasing a part of the memory space of the annular cache space, or newly applying for an annular cache space with a smaller memory space, and copying the service data in the original annular cache space into the newly applied annular cache space to release the original annular cache space.
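The "new smaller space" shrink strategy can be sketched similarly; the names are hypothetical and the function is illustrative only:

```python
def shrink_ring(old_buf: bytearray, read_ptr: int, stored: int,
                new_capacity: int):
    """Apply for a smaller annular cache space, copy the still-buffered
    service data into it in order, and release the original space."""
    assert stored <= new_capacity, "shrunken space must still hold the data"
    capacity = len(old_buf)
    new_buf = bytearray(new_capacity)
    for i in range(stored):
        new_buf[i] = old_buf[(read_ptr + i) % capacity]
    # In Python the old space is reclaimed by garbage collection; in C
    # this is where the original space would be freed.
    return new_buf, 0, stored  # buf, read_ptr, write_ptr
```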
According to the embodiment of the invention, by generating the annular cache space with the space size corresponding to the first memory threshold, the service data stored in the annular cache space is read by using the data reading thread, the data volume of the service data is counted, and the memory space occupied by the annular cache space is reduced when the data volume meets the storage reduction threshold, so that the dynamic use of the cache space is realized, the utilization efficiency of memory resources can be improved, and the data transmission delay is reduced.
Further, on the basis of the above embodiment of the present invention, the size of the remaining memory in the annular cache space is updated after the annular cache space is increased or decreased.
In the embodiment of the present invention, after the service data is written in or read from the annular cache space, the size of the remaining memory in the annular cache space changes, and the value of the size of the remaining memory may be increased or decreased according to the data amount of the written or read service data.
In an exemplary embodiment, a data caching method incorporating a window-sliding ring buffer mechanism may include the following steps:
1. Initialize the data structures the window-sliding ring buffer needs, which may include the size of the ring buffer, the read and write pointers, and so on.
2. Start two threads, one of which is responsible for receiving data and writes the received data into the window-sliding ring buffer according to the received data amount, the remaining system memory, and the data transceiving speed.
3. In the other thread, read data out of the window-sliding ring buffer.
The working principle of the window-sliding ring buffer is as follows. In the normal state, the write pointer is writing data and the read pointer is behind the write pointer, indicating that there is free space at the back end of the buffer. In the pointer rollback state, the write pointer precedes the read pointer, indicating that the front of the buffer has free space. The method also includes determining an error state: when the write pointer rolls back and finds there is not enough space, it would intersect the read pointer (the dotted part in the figure), which is clearly unreasonable, because a portion of the unprocessed data would be corrupted by the overwrite; the entire buffer must then be readjusted. Reallocation adjustment: when the write pointer cannot roll back because of insufficient space, a somewhat larger buffer is newly created, the unprocessed data and the data to be written are copied into the new buffer in order, the positions of the write pointer and the read pointer are adjusted accordingly, and the original buffer is finally released.
It can be understood that, through continued writing, reading and expansion, the memory space of the window-sliding ring buffer gradually grows. When it has expanded to a certain extent, however, a balance is reached: since the amount of buffered information cannot increase indefinitely, once the amount of information to be processed reaches its maximum and the read pointer keeps consuming data, the size of the buffer also stabilizes.
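The write / read / expand cycle above can be demonstrated end to end. The following single-threaded sketch (the patent uses two threads) uses hypothetical names and a fixed grow step; it shows the capacity growing under load and then stabilizing:

```python
class SlidingRingCache:
    """Illustrative window-sliding ring buffer: writes grow the space on
    demand, reads free it, and the capacity stabilizes once the peak
    amount of in-flight data has been reached."""

    GROW_STEP = 64  # hypothetical storage increase threshold

    def __init__(self, capacity: int):
        self.buf = bytearray(capacity)
        self.read_ptr = 0
        self.write_ptr = 0
        self.stored = 0

    def write(self, data: bytes) -> None:
        # Reallocate until the remaining memory can hold the data.
        while len(self.buf) - self.stored < len(data):
            self._grow()
        cap = len(self.buf)
        for b in data:
            self.buf[self.write_ptr] = b
            self.write_ptr = (self.write_ptr + 1) % cap
        self.stored += len(data)

    def read(self, n: int) -> bytes:
        n = min(n, self.stored)
        cap = len(self.buf)
        out = bytes(self.buf[(self.read_ptr + i) % cap] for i in range(n))
        self.read_ptr = (self.read_ptr + n) % cap
        self.stored -= n
        return out

    def _grow(self) -> None:
        # Reapply for a larger space and migrate unread data in order.
        old, cap = self.buf, len(self.buf)
        self.buf = bytearray(cap + self.GROW_STEP)
        for i in range(self.stored):
            self.buf[i] = old[(self.read_ptr + i) % cap]
        self.read_ptr, self.write_ptr = 0, self.stored
```

Starting from a 4-byte space, one 11-byte write forces a single reallocation; afterwards, a steady stream of 20-byte writes matched by 20-byte reads never grows the buffer again, illustrating the balance described above.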
Example four
Fig. 6 is a schematic structural diagram of a data caching apparatus according to a fourth embodiment of the present invention. The apparatus can execute the data caching method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the executed method. The apparatus can be implemented by software and/or hardware, and specifically includes: an initialization module 401 and a dynamic adjustment module 402.
The initialization module 401 is configured to initialize the ring cache space according to a first memory threshold.
A dynamic adjustment module 402, configured to dynamically adjust a memory space of the circular cache space according to a data amount of the storage data and/or the read data.
According to the embodiment of the invention, the initialization module generates an annular cache space whose size corresponds to the first memory threshold, and the dynamic adjustment module dynamically adjusts the memory space of the annular cache space according to the data volume during data storage and data reading, so that dynamic use of the cache space is realized, which can improve the utilization efficiency of memory resources, reduce the processing time of data caching, and reduce the data transmission delay.
Further, on the basis of the above embodiment of the present invention, the initialization module 401 includes:
and the space application unit is used for applying a memory space as the annular cache space according to the first memory threshold value.
And the pointer setting unit is used for initializing a read pointer and a write pointer according to the initial address of the memory space.
And the memory recording unit is used for initializing the remaining memory size of the annular cache space to the first memory threshold.
Further, on the basis of the above embodiment of the present invention, the dynamic adjustment module 402 includes:
and the data receiving unit is used for creating a data receiving thread to acquire the service data and determining the data volume of the service data.
And the memory judgment unit is used for determining whether the difference value between the data size and the residual memory size of the memory space meets a storage increase threshold value.
And the first storage processing unit is used for storing the service data into the memory space according to the write pointer of the annular cache space if the storage increase threshold is satisfied.
And the second storage processing unit is used for increasing the memory space of the annular cache space and then writing the service data into the annular cache space if the storage increase threshold is not satisfied.
Further, on the basis of the foregoing embodiment of the present invention, the second storage processing unit is specifically configured to: apply for a memory space as a new annular cache space according to the sum of the first memory threshold and the storage increase threshold; migrate the stored data in the original annular cache space to the new annular cache space; and write the service data into the new annular cache space and update the remaining memory size.
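A hedged sketch of this growth path follows. The function name and dict keys are hypothetical; it assumes the new ring is sized as the sum of the first memory threshold and the storage increase threshold, that stored data is migrated contiguously to the base of the new ring (unwrapping any wrap-around), and that the remaining memory record is updated after the write:

```python
def write_with_growth(cache: dict, data: bytes, increase_threshold: int) -> None:
    """Sketch of the second storage processing unit (hypothetical names).

    `cache` is a dict with keys: buf, read, write, remaining, threshold.
    """
    if cache["remaining"] - len(data) < increase_threshold:
        # Apply for a new ring sized as the sum of the first memory
        # threshold and the storage increase threshold (an assumption).
        new_size = cache["threshold"] + increase_threshold
        new_buf = bytearray(new_size)
        old = cache["buf"]
        stored = len(old) - cache["remaining"]
        # Migrate stored data contiguously to the base of the new ring,
        # unwrapping any wrap-around in the old ring.
        for i in range(stored):
            new_buf[i] = old[(cache["read"] + i) % len(old)]
        cache.update(buf=new_buf, read=0, write=stored, threshold=new_size)
        cache["remaining"] = new_size - stored
    # Store the service data at the write pointer and update remaining memory.
    buf = cache["buf"]
    for b in data:
        buf[cache["write"]] = b
        cache["write"] = (cache["write"] + 1) % len(buf)
    cache["remaining"] -= len(data)

ring = {"buf": bytearray(8), "read": 0, "write": 0, "remaining": 8, "threshold": 8}
write_with_growth(ring, b"abcdef", 4)  # 8 - 6 < 4, so the ring grows to 12 bytes
```

Unwrapping the stored data during migration is a deliberate choice in this sketch: it lets the read pointer reset to the base address of the new ring, so no separate wrap bookkeeping survives the move.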
Further, on the basis of the above embodiment of the present invention, the storage increase threshold in the apparatus is determined according to a rollback space of a rollback operation of the circular cache space.
Further, on the basis of the above embodiment of the present invention, the dynamic adjustment module 402 further includes:
The data reading unit is used for creating a data reading thread to read the service data in the annular cache space and determining the data volume of the service data.
The cache adjustment unit is used for reducing the memory space of the annular cache space when the data volume is determined to satisfy a storage reduction threshold.
Further, on the basis of the above embodiment of the invention, the apparatus further includes:
The record updating unit is used for updating the remaining memory size of the annular cache space after the annular cache space is increased or decreased.
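The read path with shrinkage might be sketched as follows, again with hypothetical names. It assumes the ring shrinks back to the first memory threshold once the stored data volume falls to or below the storage reduction threshold, and that the record updating step refreshes the remaining memory size after the adjustment:

```python
def read_and_maybe_shrink(cache: dict, amount: int, reduce_threshold: int) -> bytes:
    """Sketch of the data reading and cache adjustment units (hypothetical names).

    `cache` is a dict with keys: buf, read, write, remaining, threshold.
    """
    buf = cache["buf"]
    out = bytearray()
    # Data reading unit: consume `amount` bytes at the read pointer.
    for _ in range(amount):
        out.append(buf[cache["read"]])
        cache["read"] = (cache["read"] + 1) % len(buf)
    cache["remaining"] += amount
    stored = len(buf) - cache["remaining"]
    # Cache adjustment unit: shrink back to the first memory threshold once
    # the stored volume satisfies the storage reduction threshold (assumed
    # here to mean "falls to or below it").
    if stored <= reduce_threshold and len(buf) > cache["threshold"]:
        new_buf = bytearray(cache["threshold"])
        for i in range(stored):
            new_buf[i] = buf[(cache["read"] + i) % len(buf)]
        cache.update(buf=new_buf, read=0, write=stored)
        # Record updating unit: refresh the remaining memory size.
        cache["remaining"] = cache["threshold"] - stored
    return bytes(out)

ring = {"buf": bytearray(b"abcdef") + bytearray(6), "read": 0, "write": 6,
        "remaining": 6, "threshold": 8}
data = read_and_maybe_shrink(ring, 5, 2)  # 1 byte remains stored, so the ring shrinks
```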
EXAMPLE five
Fig. 7 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention; it shows a block diagram of a computer device 312 suitable for implementing an embodiment of the present invention. The computer device 312 shown in Fig. 7 is only an example and should not impose any limitation on the functionality or scope of use of embodiments of the present invention. Device 312 is typically a computing device that implements the data caching method.
As shown in FIG. 7, computer device 312 is in the form of a general purpose computing device. The components of computer device 312 may include, but are not limited to: one or more processors 316, a storage device 328, and a bus 318 that couples the various system components including the storage device 328 and the processors 316.
Bus 318 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an enhanced ISA bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
Computer device 312 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer device 312 and includes both volatile and nonvolatile media, removable and non-removable media.
Storage 328 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 330 and/or cache memory 332. The computer device 312 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 334 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 7, and commonly referred to as a "hard drive"). Although not shown in FIG. 7, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a Compact Disc Read-Only Memory (CD-ROM), a Digital Video Disc (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 318 by one or more data media interfaces. Storage 328 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
Program 336 having a set (at least one) of program modules 326 may be stored, for example, in storage 328, such program modules 326 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which may comprise an implementation of a network environment, or some combination thereof. Program modules 326 generally carry out the functions and/or methodologies of embodiments of the invention as described herein.
The computer device 312 may also communicate with one or more external devices 314 (e.g., keyboard, pointing device, camera, display 324, etc.), with one or more devices that enable a user to interact with the computer device 312, and/or with any devices (e.g., network card, modem, etc.) that enable the computer device 312 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 322. Also, computer device 312 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via network adapter 320. As shown, network adapter 320 communicates with the other modules of computer device 312 via bus 318. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the computer device 312, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, Redundant Array of Independent Disks (RAID) systems, tape drives, and data backup storage systems, to name a few.
The processor 316 executes programs stored in the storage 328 to perform various functional applications and data processing, such as implementing the data caching methods provided by the above-described embodiments of the present invention.
EXAMPLE six
Embodiments of the present invention provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processing apparatus, implements a data caching method as in the embodiments of the present invention. The computer readable medium of the present invention described above may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. 
A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the computer device; or may exist separately and not be incorporated into the computer device.
The computer readable medium carries one or more programs which, when executed by the computing device, cause the computing device to: initializing an annular cache space according to a first memory threshold; and dynamically adjusting the memory space of the annular cache space according to the data volume of the stored data and/or the read data.
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A method for caching data, the method comprising:
initializing an annular cache space according to a first memory threshold;
and dynamically adjusting the memory space of the annular cache space according to the data volume of the stored data and/or the read data.
2. The method of claim 1, wherein initializing the ring cache space according to the first memory threshold comprises:
applying for a memory space as the annular cache space according to the first memory threshold;
initializing a reading pointer and a writing pointer according to the initial address of the memory space;
initializing the remaining memory size of the annular cache space to the first memory threshold.
3. The method according to claim 1, wherein the dynamically adjusting the memory space of the ring buffer space according to the data amount of the stored data and/or the read data comprises:
creating a data receiving thread to acquire service data and determining the data volume of the service data;
determining whether a difference between the data size and a remaining memory size of the memory space satisfies a storage increase threshold;
if yes, storing the service data into the memory space according to a write-in pointer of the annular cache space;
and if not, increasing the memory space of the annular cache space and then writing the service data into the annular cache space.
4. The method according to claim 3, wherein the writing the service data into the circular cache space after increasing the memory space of the circular cache space comprises:
applying for a memory space as a new annular cache space according to the sum of the first memory threshold and the storage increase threshold;
migrating the stored data in the original annular cache space to a new annular cache space;
and writing the service data into the new annular cache space, and updating the size of the residual memory.
5. The method of claim 3, wherein the storage increase threshold is determined according to a rollback space of a rollback operation of the circular cache space.
6. The method according to claim 1, wherein the dynamically adjusting the memory space of the ring buffer space according to the data amount of the stored data and/or the read data comprises:
creating a data reading thread to read the service data in the annular cache space and determining the data volume of the service data;
reducing the memory space of the annular cache space upon determining that the data size satisfies a storage reduction threshold.
7. The method of claim 3 or 6, further comprising: and updating the size of the residual memory of the annular cache space after increasing or decreasing the annular cache space.
8. A data caching apparatus, comprising:
the initialization module is used for initializing the annular cache space according to a first memory threshold value;
and the dynamic adjustment module is used for dynamically adjusting the memory space of the annular cache space according to the data volume of the stored data and/or the read data.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a memory for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the data caching method as recited in any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the data caching method as claimed in any one of claims 1 to 7.
CN202111400582.9A 2021-11-24 2021-11-24 Data caching method and device, electronic equipment and storage medium Pending CN114048152A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111400582.9A CN114048152A (en) 2021-11-24 2021-11-24 Data caching method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN114048152A true CN114048152A (en) 2022-02-15

Family

ID=80211479

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111400582.9A Pending CN114048152A (en) 2021-11-24 2021-11-24 Data caching method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114048152A (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114629748A (en) * 2022-04-01 2022-06-14 日立楼宇技术(广州)有限公司 Building data processing method, edge gateway of building and storage medium
CN114629748B (en) * 2022-04-01 2023-08-15 日立楼宇技术(广州)有限公司 Building data processing method, building edge gateway and storage medium
CN114565503A (en) * 2022-05-03 2022-05-31 沐曦科技(北京)有限公司 GPU instruction data management method, device, equipment and storage medium
CN114565503B (en) * 2022-05-03 2022-07-12 沐曦科技(北京)有限公司 GPU instruction data management method, device, equipment and storage medium
CN117407148A (en) * 2022-07-08 2024-01-16 华为技术有限公司 Data writing method, data reading device, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination