CN115357183A - Cache data loss prevention method, FIFO memory, computer readable storage medium and computer device - Google Patents

Cache data loss prevention method, FIFO memory, computer readable storage medium and computer device

Info

Publication number
CN115357183A
Authority
CN
China
Prior art keywords
cache data
data
storage space
fifo memory
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210738992.2A
Other languages
Chinese (zh)
Inventor
章林
宋鹏程
方志宏
邓威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Lavichip Technology Co ltd
Original Assignee
Shenzhen Lavichip Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Lavichip Technology Co ltd filed Critical Shenzhen Lavichip Technology Co ltd
Priority to CN202210738992.2A priority Critical patent/CN115357183A/en
Publication of CN115357183A publication Critical patent/CN115357183A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0614Improving the reliability of storage systems
    • G06F3/0619Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F5/00Methods or arrangements for data conversion without changing the order or content of the data handled
    • G06F5/06Methods or arrangements for data conversion without changing the order or content of the data handled for changing the speed of data flow, i.e. speed regularising or timing, e.g. delay lines, FIFO buffers; over- or underrun control therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Human Computer Interaction (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Information Transfer Systems (AREA)

Abstract

Embodiments of the invention disclose a cache data loss prevention method, a FIFO memory, a computer-readable storage medium, and a computer device. The cache data loss prevention method includes the following steps: sending the cache data and storing it in a storage space; reading the storage space and feeding back the remaining space value of the storage space; and judging, based on the remaining space value, whether to continue receiving the cache data. The cache data loss prevention method provided by the embodiments of the invention prevents the input data from exceeding the capacity of the storage space and thereby solves the problem of data loss.

Description

Cache data loss prevention method, FIFO memory, computer-readable storage medium and computer device
Technical Field
The invention relates to the technical field of data transmission, and in particular to a cache data loss prevention method, a FIFO memory, a computer-readable storage medium, and a computer device.
Background
A FIFO memory is a device used in computer systems to buffer and convert data; it helps prevent data loss, avoids frequent bus operations, and reduces CPU load.
In the prior art, the internal storage space of a FIFO memory is limited. When the amount of input data exceeds this storage space and the data cannot be output in time, the overflowing data is lost, which compromises data integrity.
Disclosure of Invention
Based on this, it is necessary to provide a cache data loss prevention method, a FIFO memory, a computer-readable storage medium, and a computer device capable of solving the above problem.
A cache data loss prevention method based on a FIFO memory comprises the following steps:
sending the cache data to a FIFO memory and storing the cache data in a storage space;
reading the storage space and feeding back a remaining space value of the storage space;
and controlling the size of the cache data to be sent next based on the remaining space value.
Further, the controlling of the size of the cache data to be sent next based on the remaining space value includes:
setting a first threshold value;
determining whether the remaining space value is less than or equal to the first threshold;
and determining whether to reduce the size of the cache data based on the determination result.
Further, the determining whether to reduce the size of the cache data based on the determination result is specifically:
determining whether to stop sending the cache data based on the determination result.
Further, the determining whether to stop sending the cache data based on the determination result includes:
setting a pause time;
and resuming sending the buffered data after the pause time.
Further, the controlling of the size of the cache data to be sent next based on the remaining space value further includes:
setting a second threshold value;
determining whether the remaining space value is greater than or equal to the second threshold;
and determining whether to increase the size of the cache data based on the determination result.
Further, after receiving the cache data and storing the cache data in the storage space, the method further includes:
and controlling the FIFO memory to output the cache data.
A FIFO memory, comprising:
a receiving unit, configured to receive the cache data and store the cache data in the storage space;
a feedback unit, configured to read the storage space and feed back the remaining space value of the storage space;
and a judging unit, configured to control the size of the cache data to be sent next according to the remaining space value.
A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the cache data loss prevention method according to any of the embodiments described above.
A computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the cache data loss prevention method according to any of the embodiments described above.
The embodiment of the invention has the following beneficial effects:
according to the cache data loss prevention method provided by the embodiment of the invention, the residual space value of the storage space is fed back in real time, and then the value is compared with the set first threshold value, so that the situation that the input data exceeds the capacity of the storage space is avoided, and the problem of data loss is solved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can derive other drawings from them without creative effort.
Wherein:
FIG. 1 is a diagram of an application environment of a cache data loss prevention method in one embodiment;
FIG. 2 is a flow diagram of a method for cache data loss prevention in one embodiment;
FIG. 3 is a flow diagram of a method for preventing cache data loss in one embodiment;
FIG. 4 is a flow diagram of a method for cache data loss prevention in one embodiment;
FIG. 5 is a block diagram of a FIFO memory in one embodiment;
FIG. 6 is a block diagram of a computer device in one embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
FIG. 1 is a diagram of an application environment of the cache data loss prevention method in an embodiment. Referring to FIG. 1, the cache data loss prevention method is applied to a cache data loss prevention system. The system includes a terminal 110 and a server 120 connected through a network. The terminal 110 may specifically be a desktop terminal or a mobile terminal, and the mobile terminal may specifically be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. The server 120 may be implemented as a stand-alone server or as a server cluster comprising a plurality of servers.
As shown in FIG. 2, in one embodiment, a cache data loss prevention method is provided. The method can be applied to either the terminal or the server; this embodiment is described as applied to the terminal. The cache data loss prevention method specifically includes the following steps.
Step S100: sending the cache data to the FIFO memory and storing the cache data in the storage space.
It should be noted that the cache data is mainly sent to the FIFO memory by a DSP. The FIFO memory serves as a data conversion tool: after it receives the cache data, the data can be output from it, and the storage space for the data is located inside the FIFO memory. Under normal conditions, the DSP sends 1 frame of cache data to the FIFO memory every millisecond and the FIFO memory outputs cache data at 1 frame per millisecond, so the input and output of the FIFO memory remain balanced. However, in some special situations the DSP writes into the FIFO storage space at 10 frames per millisecond, or even tens of frames, while the FIFO memory still outputs cache data at 1 frame per millisecond. The cache data then piles up in the FIFO storage space, and there is a risk of data loss once the cache data exceeds the capacity of the storage space. The method provided by the embodiments of the invention is applied to solve this technical problem.
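To make the overflow scenario concrete, the following minimal C sketch (illustrative only; the capacity, rates, and all identifiers are assumptions built around the numbers in this description, not part of the disclosure) models a 2000-frame FIFO written at 10 frames per millisecond and read at 1 frame per millisecond, and counts the frames that would be lost without flow control:

    #include <stdio.h>

    #define CAPACITY 2000               /* storage space of the FIFO, in frames */

    /* Push one frame; returns 1 on success, 0 if the frame is lost
     * because the storage space is already full. */
    static int fifo_push(int *count)
    {
        if (*count >= CAPACITY)
            return 0;                   /* overflow: this frame is lost */
        (*count)++;
        return 1;
    }

    int main(void)
    {
        int count = 0, lost = 0;

        /* DSP input at 10 frames/ms, FIFO output at 1 frame/ms, for 300 ms. */
        for (int ms = 0; ms < 300; ms++) {
            for (int i = 0; i < 10; i++)
                if (!fifo_push(&count))
                    lost++;
            if (count > 0)
                count--;                /* one frame read out per millisecond */
        }
        printf("frames lost without flow control: %d\n", lost);
        return 0;
    }

Under these assumed rates the storage space fills after roughly 222 ms, after which every excess input frame is dropped, which is exactly the failure the method below is meant to prevent.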
Step S200: reading the storage space and feeding back the remaining space value of the storage space.
For example, if the total storage space is 2000 frames and 1900 frames of cache data have been received, the remaining space value is 100 frames.
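In code form, the remaining-space feedback is simply the total capacity minus the frames currently stored; this tiny C sketch (the function name is illustrative) reproduces the 2000/1900 example:

    #include <stdio.h>

    /* Remaining space = total storage space minus frames currently buffered. */
    static int remaining_space(int capacity, int stored)
    {
        return capacity - stored;
    }

    int main(void)
    {
        /* A 2000-frame storage space holding 1900 frames leaves 100 frames. */
        printf("remaining space: %d frames\n", remaining_space(2000, 1900));
        return 0;
    }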
Step S300: controlling the size of the cache data to be sent next based on the remaining space value.
Specifically, S300 includes the following steps.
Step S310: a first threshold is set.
The first threshold is between 25 frames and 100 frames.
Step S320: it is determined whether the remaining space value is less than or equal to a first threshold.
Step S330: it is determined whether to reduce the size of the transmission buffer data based on the determination result.
For example, when the first threshold is 50 frames and the remaining space value equals the first threshold, the DSP is controlled to suspend sending the cache data. This serves as a "full" protection mechanism that prevents data loss.
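One possible C expression of this "full" protection check, using the 50-frame example above (the constant and function names are assumptions, not part of the disclosure):

    #include <stdbool.h>
    #include <stdio.h>

    #define FIRST_THRESHOLD 50          /* frames; example value from the description */

    /* The DSP should pause sending once the remaining space has fallen
     * to or below the first threshold. */
    static bool should_pause_sending(int remaining_space)
    {
        return remaining_space <= FIRST_THRESHOLD;
    }

    int main(void)
    {
        printf("remaining  50 -> pause: %d\n", should_pause_sending(50));   /* 1 */
        printf("remaining 500 -> pause: %d\n", should_pause_sending(500));  /* 0 */
        return 0;
    }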
In a specific embodiment, determining in step S330 whether to reduce the size of the cache data to be sent is, specifically, determining whether to stop sending the cache data based on the determination result.
Further, the step includes the following substeps.
Step S321: a pause time is set.
Specifically, the DSP stops inputting data into the FIFO for a period of time, but still queries the remaining-space value once every millisecond, and resumes the normal data transmission rate to the FIFO once the fill level of the storage space falls back below the threshold.
Step S322: resuming receiving the buffered data after the pause time.
By adopting the cache data loss prevention method provided by the embodiments of the invention, the remaining space value of the storage space is fed back in real time and compared with the set first threshold, so that the input data never exceeds the capacity of the storage space, which solves the problem of data loss.
Step S340: a second threshold is set.
Step S350: it is determined whether the remaining space value is greater than or equal to a second threshold.
Step S360: whether to increase the size of the transmission buffer data is determined based on the determination result.
For example, the storage space of the FIFO memory is 2000 frames and the second threshold is 1950 frames. When the remaining space value is 1950 frames, i.e., only 50 frames of cache data remain in the storage space, the DSP is controlled to send more data, for example 50 frames per millisecond. This serves as an "anti-empty" mechanism that keeps the storage space from running short of data.
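The complementary "anti-empty" check could be written as follows, using the 1950-frame threshold and the 50 frames-per-millisecond burst from the example (all names and the fallback rate are illustrative):

    #include <stdio.h>

    #define SECOND_THRESHOLD 1950       /* frames of remaining space */

    /* Choose the send rate (frames per millisecond) for the next interval:
     * when the storage space is nearly empty, send a larger burst to refill it. */
    static int next_send_rate(int remaining_space)
    {
        if (remaining_space >= SECOND_THRESHOLD)
            return 50;                  /* e.g. 50 frames/ms while nearly empty */
        return 1;                       /* normal balanced rate of 1 frame/ms   */
    }

    int main(void)
    {
        printf("remaining 1950 -> %d frames/ms\n", next_send_rate(1950)); /* 50 */
        printf("remaining 1000 -> %d frames/ms\n", next_send_rate(1000)); /*  1 */
        return 0;
    }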
It should be noted that, in the initial stage, if there is no cache data in the storage space, the DSP is controlled to send a certain number of data frames to the FIFO memory in advance, and only then is the output port of the memory opened so that the FIFO memory outputs the cache data. Preferably, the input and output of the FIFO memory remain balanced.
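The initial pre-fill could be sketched as below: the DSP loads a number of frames before the output port is enabled, so the FIFO never starts out empty. The pre-fill amount and all identifiers are assumptions; the disclosure only says "a certain number of data frames".

    #include <stdio.h>

    #define PREFILL_FRAMES 100          /* assumed pre-fill amount, not specified in the text */

    struct fifo_model {
        int stored;                     /* frames currently in the storage space */
        int output_enabled;             /* 0 until the output port is opened     */
    };

    /* Load frames into the FIFO first, then open its output port. */
    static void prefill_then_start(struct fifo_model *f)
    {
        while (f->stored < PREFILL_FRAMES)
            f->stored++;                /* DSP sends one frame in advance */
        f->output_enabled = 1;          /* only now is the output port opened */
    }

    int main(void)
    {
        struct fifo_model f = {0, 0};
        prefill_then_start(&f);
        printf("pre-filled %d frames, output enabled: %d\n",
               f.stored, f.output_enabled);
        return 0;
    }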
Referring to FIG. 5, an embodiment of the invention provides a FIFO memory, which specifically includes: a receiving unit, configured to receive the cache data and store the cache data in the storage space; a feedback unit, configured to read the storage space and feed back the remaining space value of the storage space; and a judging unit, configured to judge, according to the remaining space value, whether to continue receiving the cache data.
FIG. 6 is a diagram of the internal structure of the computer device in one embodiment. The computer device may specifically be a terminal, or it may be a server. As shown in FIG. 6, the computer device includes a processor, a memory, and a network interface connected by a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program which, when executed by the processor, causes the processor to implement the cache data loss prevention method. The internal memory may also store a computer program which, when executed by the processor, causes the processor to perform the cache data loss prevention method. It will be appreciated by those skilled in the art that the configuration shown in FIG. 6 is a block diagram of only part of the configuration related to the present application and does not limit the computer device to which the present application may be applied; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In an embodiment, a computer-readable storage medium is provided, in which a computer program is stored; when executed by a processor, the computer program causes the processor to carry out the steps of the cache data loss prevention method provided in any of the above embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the method embodiments described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application, and although they are described specifically and in detail, they are not to be construed as limiting the scope of the application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within the protection scope of the application. Therefore, the protection scope of this patent application shall be subject to the appended claims.

Claims (9)

1. A cache data loss prevention method based on a FIFO memory is characterized by comprising the following steps:
sending the cache data to a FIFO memory and storing the cache data in a storage space;
reading the storage space and feeding back a remaining space value of the storage space;
and controlling the size of the cache data to be sent next based on the remaining space value.
2. The cache data loss prevention method according to claim 1, wherein the controlling of the size of the cache data to be sent next based on the remaining space value comprises:
setting a first threshold value;
determining whether the remaining space value is less than or equal to the first threshold;
and determining whether to reduce the size of the cache data based on the determination result.
3. The method according to claim 2, wherein the determining whether to reduce the size of the cache data to be sent based on the determination result is specifically:
and determining whether to stop sending the cache data based on the determination result.
4. The method for preventing cache data from being lost according to claim 3, wherein the determining whether to stop sending the cache data based on the determination result comprises:
setting a pause time;
resuming sending the buffered data after the pause time.
5. The cache data loss prevention method according to any one of claims 1 to 4, wherein the controlling of the size of the cache data to be sent next based on the remaining space value further comprises:
setting a second threshold value;
determining whether the remaining space value is greater than or equal to the second threshold;
and determining whether to increase the size of the cache data based on the determination result.
6. The method for preventing cache data from being lost according to any one of claims 1 to 4, wherein after receiving the cache data and storing the cache data in the storage space, the method further comprises:
and controlling the FIFO memory to output the cache data.
7. A FIFO memory, comprising:
a receiving unit, configured to receive the cache data and store the cache data in the storage space;
a feedback unit, configured to read the storage space and feed back the remaining space value of the storage space;
and a judging unit, configured to control the size of the cache data to be sent next according to the remaining space value.
8. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the cache data loss prevention method according to any one of claims 1-6.
9. A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the cache data loss prevention method according to any one of claims 1 to 6.
CN202210738992.2A 2022-06-27 2022-06-27 Cache data loss prevention method, FIFO memory, computer readable storage medium and computer device Pending CN115357183A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210738992.2A CN115357183A (en) 2022-06-27 2022-06-27 Cache data loss prevention method, FIFO memory, computer readable storage medium and computer device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210738992.2A CN115357183A (en) 2022-06-27 2022-06-27 Cache data loss prevention method, FIFO memory, computer readable storage medium and computer device

Publications (1)

Publication Number Publication Date
CN115357183A true CN115357183A (en) 2022-11-18

Family

ID=84030256

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210738992.2A Pending CN115357183A (en) 2022-06-27 2022-06-27 Cache data loss prevention method, FIFO memory, computer readable storage medium and computer device

Country Status (1)

Country Link
CN (1) CN115357183A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117348909A (en) * 2023-12-04 2024-01-05 成都佰维存储科技有限公司 eMMC upgrading method, chip, equipment and storage medium
CN117348909B (en) * 2023-12-04 2024-02-27 成都佰维存储科技有限公司 eMMC upgrading method, chip, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination