CN111209228B - Method for accelerating storage of multi-path on-board load file - Google Patents


Info

Publication number
CN111209228B
CN111209228B
Authority
CN
China
Prior art keywords
load data
data
storage
load
thread
Prior art date
Legal status
Active
Application number
CN202010008656.3A
Other languages
Chinese (zh)
Other versions
CN111209228A (en)
Inventor
韦杰
刘伟亮
白亮
田文波
滕树鹏
胡浩
双小川
Current Assignee
Shanghai aerospace computer technology research institute
Original Assignee
Shanghai aerospace computer technology research institute
Priority date
Filing date
Publication date
Application filed by Shanghai Aerospace Computer Technology Research Institute
Priority to CN202010008656.3A
Publication of CN111209228A
Application granted
Publication of CN111209228B
Legal status: Active
Anticipated expiration


Classifications

    • G: Physics
    • G06: Computing; calculating or counting
    • G06F: Electric digital data processing
    • G06F12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02: Addressing or allocation; relocation
    • G06F12/08: Addressing or allocation; relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0844: Multiple simultaneous or quasi-simultaneous cache accessing
    • G06F12/0853: Cache with multiport tag or data arrays
    • G06F12/0893: Caches characterised by their organisation or structure
    • G06F12/0895: Caches characterised by their organisation or structure of parts of caches, e.g. directory or tag array
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02D: Climate change mitigation technologies in information and communication technologies (ICT)
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Radio Relay Systems (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention provides a method for accelerating the storage of multiple on-board satellite load files. In the load data receiving thread, the first-level cache pairs a circular queue with counting semaphores and, through control of the read and write pointers, receives every path of load data packets from the external interface without distinction. In the load data processing thread, the second-level cache gives each path of load data a double buffer whose two halves are read and written alternately, while a state machine controls each buffer's empty, receiving and storing states. In the load data storage thread, load data held in a buffer in the storing state is written to a file in memory-page-sized units. The first-level cache rapidly receives the incoming multi-path load data, the ping-pong operation of the second-level double buffers accelerates the storage of each path of load file data, and processor resources are fully utilized, thereby accelerating the storage of multiple on-board load files.

Description

Method for accelerating storage of multi-path on-board load file
Technical Field
The invention relates to a method for accelerating the storage of a plurality of paths of on-board load files.
Background
With the rapid development of aerospace technology in China, satellite applications keep broadening, the demands on on-board processing capability keep rising, and the variety and quantity of load data, the object of on-board processing, are growing sharply; large volumes of load data must be received, processed and stored on board in a timely way. The traditional satellite data receiving and storage approach writes the received data into memory in sequence and later reads it out in sequence for transmission to the ground for analysis. Staying with this traditional approach multiplies development difficulty, sharply increases development cost, and leaves the data impossible to process flexibly on the satellite.
To support on-board data processing and storage, an embedded file system must be adopted for management; a file system is in any case an indispensable component for supporting the operation of an operating system. Adopting a file system, however, brings a drop in data read-write speed: the data is no longer read and written sequentially, as with a traditionally managed Flash memory, but through algorithms that maintain wear leveling across the memory.
To improve the embedded file system's read-write speed, buffering mechanisms are added at the driver layer so that the memory is read and written in page-sized units, which raises read-write efficiency. Even so, substantial read-write speed is still lost once a file system manages the data.
Disclosure of Invention
The invention aims to provide a method for accelerating the storage of a plurality of on-board load files.
In order to solve the above problems, the present invention provides a method for accelerating storage of multiple on-board load files, comprising:
The storage acceleration of the load data is carried out with a processing method combining a two-level cache with multithreaded pipeline operation, wherein the first-level cache pairs a circular buffer queue with counting semaphores and receives each path of load data without distinction; the second-level cache provides a double buffer for each path of load data, which performs ping-pong operation under the control of a buffer state machine, its two halves being read and written alternately: one buffer receives the load data parsed out of the first-level cache while the data in the other buffer is written into a file for storage.
Further, in the above method, the multithreaded pipeline operation includes:
the load data receiving thread is responsible for receiving all paths of load data from an external interface without distinction;
the load data processing thread is responsible for analyzing the received load data of each path on the satellite;
and the load data storage thread is responsible for writing the processed data into files for storage, forming files that meet the specified requirements.
Further, in the above method, the circular buffer queue has read and write pointers, and the counting semaphores comprise corresponding read and write count semaphores; the read and write pointers are initialized to the start position of the circular buffer queue, the read count semaphore is initialized to zero, and the write count semaphore is initialized to the number of load data packets the circular buffer queue can hold.
Further, in the above method, the way in which the circular buffer queue and the count semaphore cooperate further includes:
and between the load data receiving thread and the load data processing thread, the read count semaphore is used to maintain the used space of the circular buffer queue, the write count semaphore is used to maintain its unused space, the read pointer is used to maintain the address of the load data awaiting processing in the circular buffer, and the write pointer is used to maintain the address at which external data is written into the circular buffer queue.
Further, in the above method, the way in which the circular buffer queue and the count semaphore cooperate further includes:
in the load data receiving thread, each time a write count semaphore is acquired a read count semaphore is correspondingly released, and each time a packet of load data is successfully written into the circular buffer queue the write pointer is correspondingly updated once;
likewise, in the load data processing thread, each time a read count semaphore is acquired a write count semaphore is correspondingly released, and each time a packet of load data is successfully read out of the circular buffer queue the read pointer is correspondingly updated.
Further, in the above method, each path of load data is provided with a double buffer, and further includes:
the states of both buffers in the double buffer are initialized to empty, and when data begins to be stored into a buffer its state is updated to receiving;
when the data written into the buffer reaches the buffer's upper limit, the buffer's state is updated to storing;
when all the data in the buffer has been written into a file for storage, the state is updated back to empty.
Further, in the above method, each path of load data is provided with a double buffer, and further includes:
each load data type is provided with a corresponding double buffer, controlled in combination with a state machine, wherein,
in the load data processing thread, valid data of the corresponding type is stored into the buffer of that load data whose state is empty or receiving;
in the load data storage thread, the data in a buffer in the storing state is written into the specified file for storage.
Further, in the above method, each path of load data is provided with a double buffer, and further includes:
in the load data processing thread, after a load data packet is obtained from the circular buffer queue, the buffer in the receiving state is, according to the states of the corresponding double buffer, preferentially selected to store the extracted valid data;
if no buffer is in the receiving state, it is judged whether buffer 1 of the double buffer is in the empty state; if so, buffer 1 is selected to store the valid data; otherwise it is judged whether buffer 2 is in the empty state, and if so buffer 2 is selected to store the valid load data;
and if both buffers currently corresponding to the load data are in the storing state, the load data processing thread is blocked and the return value is set to an information code indicating that the second-level cache is busy.
Further, in the above method, carrying out the storage acceleration of the load data with the two-level cache and multithreaded pipeline processing method includes:
Step 1: in the load data receiving thread, when load data arrives, a write count semaphore is acquired, the data packet is stored into the memory pointed to by the write pointer of the circular buffer queue, and once the data is written the write pointer is updated and a read count semaphore is released;
Step 2: in the load data processing thread, when a read count semaphore is acquired, a data packet is taken out of the memory address pointed to by the read pointer of the circular buffer queue, and when the load data is processed successfully the read pointer is updated and a write count semaphore is released;
Step 3: in the load data processing thread, after the load data packet is taken out of the first-level cache's circular buffer queue, the data characteristics are parsed to extract the valid data, which is stored into whichever buffer of the double buffer corresponding to that load data is in the receiving or empty state;
Step 4: in the load data processing thread, when the load data stored in a buffer in the receiving state reaches the buffer's upper limit, the buffer's state is changed to storing, and the load data type and the number of the full buffer are sent to the load data storage thread through Unix-domain socket communication;
Step 5: in the load data storage thread, upon receiving the message sent over the Unix-domain socket by the load data processing thread, the storage thread is awakened and parses the received message to extract the load data type and the corresponding buffer address;
Step 6: in the load data storage thread, it is judged whether a file is already open for the corresponding load data; if not, a new file is created according to that load data's attributes; the data obtained from the buffer is written into the file in memory-page-sized units for storage, and after the write completes the buffer's state is changed to empty;
Step 7: in the load data storage thread, after the data in the corresponding load data's double buffer has been written into the file, it is judged whether the file's current attributes satisfy the closing condition, and if so the open file is closed, forming a file that meets the specified requirements.
Compared with the prior art, the invention accelerates the storage of multiple satellite load files with a processing method of two-level caching and multithreaded pipeline operation. In the load data receiving thread, the first-level cache pairs a circular queue with counting semaphores and, through control of the read and write pointers, receives every path of load data packets from the external interface without distinction. In the load data processing thread, the second-level cache gives each path of load data a double buffer whose two halves are read and written alternately, while a state machine controls each buffer's empty, receiving and storing states. In the load data storage thread, load data held in a buffer in the storing state is written to a file in memory-page-sized units. The first-level cache rapidly receives the incoming multi-path load data, the ping-pong operation of the second-level double buffers accelerates the storage of each path of load file data, and processor resources are fully utilized, thereby accelerating the storage of multiple on-board load files.
Drawings
FIG. 1 is a flow chart of a method for accelerating the storage of multiple on-board payload files according to an embodiment of the invention;
FIG. 2 is a flow chart of obtaining the data to be written according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a double cache state transition according to an embodiment of the present invention.
Detailed Description
In order that the above objects, features and advantages of the present invention may become more readily apparent, the invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1 to 3, the present invention provides a method for accelerating storage of multiple on-board load files, comprising:
The storage acceleration of the load data is carried out with a processing method combining a two-level cache with multithreaded pipeline operation, wherein the first-level cache pairs a circular buffer queue with counting semaphores and receives each path of load data without distinction; the second-level cache provides a double buffer for each path of load data, which performs ping-pong operation under the control of a buffer state machine, its two halves being read and written alternately: one buffer receives the load data parsed out of the first-level cache while the data in the other buffer is written into a file for storage.
The invention addresses the problem that, once an on-board platform adopts an embedded file system, the load file storage rate no longer meets the demands of on-board processing tasks, and accelerates the storage of multiple on-board load files at the application layer through a processing method of two-level caching and multithreaded pipeline operation.
The method for accelerating the storage of the multi-path satellite load files can solve the problem that the storage rate of the load files cannot meet the processing requirement of the satellite tasks after the satellite platform adopts an embedded file system. The invention further improves the use efficiency of the CPU and the memory of the on-board platform and improves the capacity of on-board task processing.
In an embodiment of the method for accelerating multi-path on-board load file storage of the present invention, the multithreaded pipeline operation includes:
the load data receiving thread, which receives each path of load data from the external interface without distinction;
the load data processing thread, which parses each path of load data received on the satellite;
and the load data storage thread, which writes the processed data into files for storage, forming files that meet the specified requirements.
The three threads form a pipeline, accelerating the storage flow of the satellite's multi-path load files.
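As a rough illustration of this three-thread pipeline, the sketch below starts one POSIX thread per stage. The thread bodies are empty placeholders, and all names (`start_pipeline`, `recv_thread`, and so on) are invented for illustration rather than taken from the patent.

```c
/* Sketch: one POSIX thread per pipeline stage (receive, process, store).
 * The thread bodies are placeholders; a real implementation would pass
 * the shared circular queue and double buffers through the arg pointer. */
#include <pthread.h>

void *recv_thread(void *arg)    { (void)arg; return 0; } /* receive packets      */
void *process_thread(void *arg) { (void)arg; return 0; } /* parse per load path  */
void *store_thread(void *arg)   { (void)arg; return 0; } /* write files to disk  */

/* Start all three stages; returns 0 on success, -1 on failure. */
int start_pipeline(pthread_t t[3])
{
    if (pthread_create(&t[0], 0, recv_thread, 0))    return -1;
    if (pthread_create(&t[1], 0, process_thread, 0)) return -1;
    if (pthread_create(&t[2], 0, store_thread, 0))   return -1;
    return 0;
}
```

The stages communicate only through the level-1 queue and the level-2 double buffers, so each thread can run on its own core and the stages overlap in time, which is where the pipeline speed-up comes from.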
In one embodiment of the method for accelerating multi-path satellite load file storage, the first-level cache uses a circular buffer queue in combination with counting semaphores. The circular buffer queue has read and write pointers, and corresponding read and write count semaphores are provided; the read and write pointers are initialized to the start position of the circular buffer queue, the read count semaphore is initialized to zero, and the write count semaphore is initialized to the number of load data packets the circular buffer queue can hold.
In an embodiment of the method for accelerating storage of multiple on-satellite load files of the present invention, the way in which the circular buffer queue and the count semaphore cooperate further includes:
and between the load data receiving thread and the load data processing thread, the read count semaphore is used to maintain the used space of the circular buffer queue, the write count semaphore is used to maintain its unused space, the read pointer is used to maintain the address of the load data awaiting processing in the circular buffer, and the write pointer is used to maintain the address at which external data is written into the circular buffer queue.
In an embodiment of the method for accelerating storage of multiple on-satellite load files of the present invention, the way in which the circular buffer queue and the count semaphore cooperate further includes:
in the load data receiving thread, each time a write count semaphore is acquired a read count semaphore is correspondingly released, and each time a packet of load data is successfully written into the circular buffer queue the write pointer is correspondingly updated once;
likewise, in the load data processing thread, each time a read count semaphore is acquired a write count semaphore is correspondingly released, and each time a packet of load data is successfully read out of the circular buffer queue the read pointer is correspondingly updated.
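The acquire/release protocol above is the classic counting-semaphore producer-consumer pattern. A minimal C sketch of the level-1 cache, with an assumed packet size, queue depth and function names (`cq_put`, `cq_get`) that do not come from the patent, might look like:

```c
/* Level-1 cache sketch: a circular queue of fixed-size packet slots
 * guarded by two counting semaphores.  rd_cnt counts used slots,
 * wr_cnt counts free slots, matching the text above. */
#include <semaphore.h>
#include <string.h>

#define PKT_SIZE    512  /* assumed load data packet size       */
#define QUEUE_DEPTH 64   /* assumed number of packets the queue holds */

typedef struct {
    unsigned char slot[QUEUE_DEPTH][PKT_SIZE];
    int   rd, wr;        /* read / write pointers (slot indices) */
    sem_t rd_cnt;        /* used slots, initialized to 0         */
    sem_t wr_cnt;        /* free slots, initialized to DEPTH     */
} circ_queue;

void cq_init(circ_queue *q)
{
    q->rd = q->wr = 0;                    /* both pointers at the start */
    sem_init(&q->rd_cnt, 0, 0);           /* nothing to read yet        */
    sem_init(&q->wr_cnt, 0, QUEUE_DEPTH); /* whole queue is writable    */
}

/* Receiving thread: wait for a free slot, copy the packet in,
 * advance the write pointer, then signal the processing thread. */
void cq_put(circ_queue *q, const unsigned char *pkt)
{
    sem_wait(&q->wr_cnt);
    memcpy(q->slot[q->wr], pkt, PKT_SIZE);
    q->wr = (q->wr + 1) % QUEUE_DEPTH;
    sem_post(&q->rd_cnt);
}

/* Processing thread: the mirror image of cq_put. */
void cq_get(circ_queue *q, unsigned char *pkt)
{
    sem_wait(&q->rd_cnt);
    memcpy(pkt, q->slot[q->rd], PKT_SIZE);
    q->rd = (q->rd + 1) % QUEUE_DEPTH;
    sem_post(&q->wr_cnt);
}
```

`cq_put` runs in the receiving thread and `cq_get` in the processing thread; every semaphore wait on one side is matched by a post on the other, so the queue can neither overflow nor underflow.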
In an embodiment of the method for accelerating storage of multiple paths of on-board load files, each path of load data is provided with a double cache, and the method further includes:
the states of both buffers in the double buffer are initialized to empty, and when data begins to be stored into a buffer its state is updated to receiving;
when the data written into the buffer reaches the buffer's upper limit, the buffer's state is updated to storing;
when all the data in the buffer has been written into a file for storage, the state is updated back to empty.
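These three transitions (empty to receiving on the first write, receiving to storing at the fill limit, storing back to empty after draining) can be sketched as a tiny state machine; the type and function names below are illustrative assumptions, not the patent's identifiers:

```c
/* Three-state half-buffer of the level-2 double buffer:
 * EMPTY -> RECEIVING on first write, RECEIVING -> STORING at the
 * capacity limit, STORING -> EMPTY once the file write completes. */
typedef enum { BUF_EMPTY, BUF_RECEIVING, BUF_STORING } buf_state;

typedef struct {
    buf_state state;
    unsigned  fill;     /* bytes currently held      */
    unsigned  capacity; /* upper limit of the buffer */
} half_buf;

/* Processing thread side: account for nbytes of newly stored data
 * and advance the state machine accordingly. */
void buf_write(half_buf *b, unsigned nbytes)
{
    if (b->state == BUF_EMPTY)
        b->state = BUF_RECEIVING;
    b->fill += nbytes;
    if (b->fill >= b->capacity)
        b->state = BUF_STORING; /* hand off to the storage thread */
}

/* Storage thread side: called after everything was written to file. */
void buf_drained(half_buf *b)
{
    b->fill  = 0;
    b->state = BUF_EMPTY;
}
```

Because each half-buffer is written only by the processing thread while receiving and read only by the storage thread while storing, the state field is the sole hand-off point between the two threads.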
In an embodiment of the method for accelerating storage of multiple paths of on-board load files, each path of load data is provided with a double cache, and the method further includes:
each load data type is provided with a corresponding double buffer, controlled in combination with a state machine, wherein,
in the load data processing thread, valid data of the corresponding type is stored into the buffer of that load data whose state is empty or receiving;
in the load data storage thread, the data in a buffer in the storing state is written into the specified file for storage.
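When the storage thread drains a buffer in the storing state, the data is written in memory-page-sized units, as the abstract and step six describe. A hedged sketch, assuming a 4 KiB page and standard C file I/O in place of the patent's actual driver interface:

```c
/* Sketch: drain a STORING buffer to file one memory page at a time.
 * PAGE_SIZE is an assumed constant; a real driver would query the
 * flash / file-system page size instead of hard-coding it. */
#include <stdio.h>

#define PAGE_SIZE 4096

/* Write len bytes from buf to fp in page-sized chunks.
 * Returns 0 on success, -1 on a short write. */
int write_paged(FILE *fp, const unsigned char *buf, size_t len)
{
    size_t off = 0;
    while (off < len) {
        size_t chunk = len - off < PAGE_SIZE ? len - off : PAGE_SIZE;
        if (fwrite(buf + off, 1, chunk, fp) != chunk)
            return -1;
        off += chunk;
    }
    return 0;
}
```

Writing in whole pages lets the driver-layer buffering mentioned in the background section commit each chunk with a single page program, which is where the read-write efficiency gain comes from.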
In an embodiment of the method for accelerating storage of multiple paths of on-board load files, each path of load data is provided with a double cache, and the method further includes:
each path of load data is provided with a double buffer having three states, empty, receiving and storing, both halves being initialized to the empty state, wherein,
in the load data processing thread, after a load data packet is obtained from the circular buffer queue, the buffer in the receiving state is, according to the states of the corresponding double buffer, preferentially selected to store the extracted valid data;
if no buffer is in the receiving state, it is judged whether buffer 1 of the double buffer is in the empty state; if so, buffer 1 is selected to store the valid data; otherwise it is judged whether buffer 2 is in the empty state, and if so buffer 2 is selected to store the valid load data;
and if both buffers currently corresponding to the load data are in the storing state, the load data processing thread is blocked and the return value is set to an information code indicating that the second-level cache is busy.
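The selection rule above (prefer a buffer already receiving, then buffer 1 if empty, then buffer 2, otherwise report the second-level cache busy) reduces to a small pure function; the names and the `-1` busy code below are illustrative assumptions:

```c
/* Sketch of the half-buffer selection rule described above.
 * E_CACHE_BUSY stands in for the patent's "second-level cache busy"
 * information code; the actual value is not given in the text. */
typedef enum { ST_EMPTY, ST_RECEIVING, ST_STORING } st;

#define E_CACHE_BUSY (-1)

/* Return the index (0 or 1) of the half-buffer that should receive
 * the next payload, or E_CACHE_BUSY if both are still being stored. */
int pick_buffer(const st state[2])
{
    if (state[0] == ST_RECEIVING) return 0; /* prefer the buffer in use */
    if (state[1] == ST_RECEIVING) return 1;
    if (state[0] == ST_EMPTY)     return 0; /* then buffer 1 if empty   */
    if (state[1] == ST_EMPTY)     return 1; /* then buffer 2 if empty   */
    return E_CACHE_BUSY;                    /* both halves busy storing */
}
```

Preferring the receiving buffer keeps packets of one file contiguous; the busy code only occurs when the storage thread has fallen behind on both halves, which is the back-pressure condition that blocks the processing thread.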
In an embodiment of the method for accelerating multi-path satellite load file storage of the present invention, carrying out the storage acceleration of the load data with the two-level cache and multithreaded pipeline processing method includes the following steps:
Step 1: in the load data receiving thread, when load data arrives, a write count semaphore is acquired, the data packet is stored into the memory pointed to by the write pointer of the circular buffer queue, and once the data is written the write pointer is updated and a read count semaphore is released;
Step 2: in the load data processing thread, when a read count semaphore is acquired, a data packet is taken out of the memory address pointed to by the read pointer of the circular buffer queue, and when the load data is processed successfully the read pointer is updated and a write count semaphore is released;
Step 3: in the load data processing thread, after the load data packet is taken out of the first-level cache's circular buffer queue, the data characteristics are parsed to extract the valid data, which is stored into whichever buffer of the double buffer corresponding to that load data is in the receiving or empty state;
Step 4: in the load data processing thread, when the load data stored in a buffer in the receiving state reaches the buffer's upper limit, the buffer's state is changed to storing, and the load data type and the number of the full buffer are sent to the load data storage thread through Unix-domain socket communication;
Step 5: in the load data storage thread, upon receiving the message sent over the Unix-domain socket by the load data processing thread, the storage thread is awakened and parses the received message to extract the load data type and the corresponding buffer address;
Step 6: in the load data storage thread, it is judged whether a file is already open for the corresponding load data; if not, a new file is created according to that load data's attributes; the data obtained from the buffer is written into the file in memory-page-sized units for storage, and after the write completes the buffer's state is changed to empty;
Step 7: in the load data storage thread, after the data in the corresponding load data's double buffer has been written into the file, it is judged whether the file's current attributes satisfy the closing condition, and if so the open file is closed, forming a file that meets the specified requirements.
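Steps four and five have the processing thread notifying the storage thread over a Unix-domain socket. The sketch below uses `socketpair` with a two-field message; the message layout and all names are assumptions for illustration, since the patent does not give the exact format:

```c
/* Sketch of the Unix-domain notification between the processing and
 * storage threads.  The two-field message (type + buffer number) is an
 * assumed layout; the patent only says both values are sent. */
#include <sys/socket.h>

typedef struct {
    int load_type; /* which load data path the buffer belongs to */
    int buf_index; /* which half of the double buffer is full    */
} store_msg;

/* Create the connected pair once at start-up: fd[0] for the
 * processing thread, fd[1] for the storage thread. */
int make_notify_pair(int fd[2])
{
    return socketpair(AF_UNIX, SOCK_DGRAM, 0, fd);
}

/* Processing thread side: announce a full buffer. */
int notify_full(int fd, int type, int index)
{
    store_msg m = { type, index };
    return send(fd, &m, sizeof m, 0) == sizeof m ? 0 : -1;
}

/* Storage thread side: block until a buffer is announced. */
int wait_full(int fd, store_msg *m)
{
    return recv(fd, m, sizeof *m, 0) == sizeof *m ? 0 : -1;
}
```

Because `recv` blocks until a datagram arrives, the storage thread naturally sleeps until a buffer fills, which is the wake-up behaviour described in step five.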
The design principle and the design thought of the invention mainly comprise the following three parts:
(1) Multithreaded pipeline operation: the load data receiving thread receives each path of on-board load data from the external interface without distinction, the load data processing thread parses each path of received load data, and the load data storage thread writes the processed data into files for storage. The three threads operate as a pipeline, accelerating the storage flow of on-board load files.
(2) A circular buffer queue used in combination with counting semaphores: the read count semaphore maintains the used space of the circular buffer queue, and the write count semaphore maintains its unused space. The read pointer maintains the address of the load data awaiting processing in the circular buffer, and the write pointer maintains the address at which external data is written into the circular buffer queue.
(3) Each path of load data is equipped with a double buffer operating in ping-pong fashion: in the load data processing thread, valid data of the corresponding type is stored into the buffer of that load data whose state is empty or receiving; in the load data storage thread, the data in a buffer in the storing state is written to the memory.
In summary, the invention addresses the fact that, after an on-board platform adopts a file system, the load file storage rate cannot support the real-time processing demands of on-board tasks, and accelerates the storage of multiple on-board load files through two-level caching and multithreaded pipelining. Compared with the traditional data storage approach, this scheme makes full use of processor resources, raises the storage rate of satellite load files, and has strong practical engineering value.
In this specification the embodiments are described in a progressive manner; each embodiment focuses on its differences from the others, and the identical or similar parts of the embodiments may be understood by reference to one another.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative elements and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (8)

1. A method for accelerating the storage of multiple on-board payload files, comprising:
carrying out the storage acceleration of the load data by a processing method combining a two-level cache with multi-thread pipeline operation, wherein the first-level cache uses a circular buffer queue working together with counting semaphores and receives every path of load data without distinction; the second-level cache provides a double buffer for each path of load data, operated in ping-pong fashion under the control of a buffer state machine so that reading and writing alternate: one buffer receives the load data parsed out of the first-level cache while the other buffer's contents are written into a file for storage;
the method of accelerating load data storage with the two-level cache and multi-thread pipeline operation comprises the following steps:
step one, in the load data receiving thread, when load data arrives, after the write count semaphore is acquired, the data packet is stored into the memory pointed to by the write pointer of the circular buffer queue; once the data is written, the write pointer is updated and the read count semaphore is released;
step two, in the load data processing thread, after the read count semaphore is acquired, a data packet is taken out of the memory address pointed to by the read pointer of the circular buffer queue; when the load data is processed successfully, the read pointer is updated and the write count semaphore is released;
step three, in the load data processing thread, after the load data packet is taken out of the first-level cache's circular buffer queue, the data features are parsed to extract the valid data, and the valid data is stored into whichever buffer of the double buffer corresponding to that load data is in the receiving or empty state;
step four, in the load data processing thread, when the load data stored in a buffer in the receiving state reaches the buffer's upper limit, the state of that buffer is switched to the storage state, and the corresponding load data type and the number of the full buffer are sent to the load data storage thread through Unix domain socket communication;
step five, in the load data storage thread, upon receiving the message sent over the Unix domain socket by the load data processing thread, the storage thread is woken up and parses the received message to extract the load data type and the corresponding buffer address;
step six, in the load data storage thread, whether the corresponding load data already has an open file is checked; if not, a new file is created according to the attributes of that load data; the data in the buffer is then written into the file in units of the memory page size, and the buffer's state is switched to empty once the write completes;
and step seven, in the load data storage thread, after the data in the corresponding load double buffer has been written into the file, whether the file's current attributes satisfy the closing condition is checked; if so, the open file is closed, forming a file that meets the specified requirements.
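Steps four and five describe the hand-off between the processing and storage threads over a Unix domain socket. A minimal Python sketch of that notification channel follows; the function names and the two-field wire layout (`NOTIFY_FMT`) are assumptions for illustration, not part of the claimed method:

```python
import socket
import struct

# One datagram = (load data type, index of the full buffer), as in steps four/five.
NOTIFY_FMT = "<II"  # assumed wire layout: two unsigned 32-bit little-endian fields

# A connected Unix-domain datagram pair: one end for the processing thread,
# the other for the storage thread.
proc_end, store_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)

def notify_storage(load_type: int, buf_index: int) -> None:
    # Processing-thread side: tell the storage thread which buffer is full.
    proc_end.send(struct.pack(NOTIFY_FMT, load_type, buf_index))

def storage_wait() -> tuple:
    # Storage-thread side: block until a notification arrives, then decode it
    # into the load data type and the buffer to drain.
    msg = store_end.recv(struct.calcsize(NOTIFY_FMT))
    return struct.unpack(NOTIFY_FMT, msg)
```

In the real system the storage thread would use the decoded pair to locate the corresponding double-buffer half and write it out in page-sized units (step six).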
2. The method of accelerating multi-path on-board payload file storage of claim 1, wherein the multi-thread pipeline operation comprises:
the load data receiving thread is responsible for receiving all paths of load data from an external interface without distinction;
the load data processing thread is responsible for analyzing the received load data of each path on the satellite;
and the load data storage thread is responsible for writing the processed data into files for storage, producing files that meet the specified requirements.
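The three-thread division of labor above can be sketched as follows. This is a minimal Python illustration only: the thread function names are assumptions, `queue.Queue` stands in for the two cache levels of claim 1, and `str.strip` stands in for the real parsing step:

```python
import queue
import threading

raw_packets = queue.Queue()  # stand-in for the first-level circular buffer queue
parsed_data = queue.Queue()  # stand-in for the second-level double buffers
stored = []                  # stand-in for the output files

def receive_thread(source):
    # Receives every path of load data without distinction.
    for packet in source:
        raw_packets.put(packet)
    raw_packets.put(None)  # sentinel: no more data

def process_thread():
    # Parses each received packet and forwards the valid data.
    while True:
        packet = raw_packets.get()
        if packet is None:
            parsed_data.put(None)
            break
        parsed_data.put(packet.strip())  # placeholder for real feature parsing

def store_thread():
    # Writes processed data out; appending to a list stands in for file I/O.
    while True:
        data = parsed_data.get()
        if data is None:
            break
        stored.append(data)

def run_pipeline(source):
    threads = [threading.Thread(target=receive_thread, args=(source,)),
               threading.Thread(target=process_thread),
               threading.Thread(target=store_thread)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return stored
```

Because the three stages run concurrently, a slow file write does not stall packet reception; the queues absorb the rate mismatch, which is the point of the pipeline arrangement.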
3. The method for accelerating the storage of multiple on-satellite payload files according to claim 1, wherein the circular buffer queue has read and write pointers and corresponding read and write count semaphores; the read and write pointers are initialized to the start of the circular buffer queue, the read count semaphore is initialized to zero, and the write count semaphore is initialized to the number of load data packets the circular buffer queue can hold.
4. The method for accelerating the storage of multiple on-board payload files according to claim 3, wherein the way in which the circular buffer queue and the count semaphore cooperate further comprises:
and between the load data receiving thread and the load data processing thread, the read count semaphore tracks the used space of the circular buffer queue, the write count semaphore tracks its unused space, the read pointer tracks the address of the load data waiting to be processed in the circular buffer, and the write pointer tracks the address at which external data is written into the circular buffer queue.
5. The method for accelerating the storage of multiple on-board payload files according to claim 1, wherein the way in which the circular buffer queue and the count semaphore cooperate further comprises:
in the load data receiving thread, each write count semaphore acquired is matched by one read count semaphore released, and each packet of load data successfully written into the circular buffer queue is matched by one update of the write pointer;
correspondingly, in the load data processing thread, each read count semaphore acquired is matched by one write count semaphore released, and each packet of load data successfully read out of the circular buffer queue is matched by one update of the read pointer.
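The semaphore discipline of claims 3 to 5 can be sketched in Python as follows; the class and method names are assumptions, and `threading.Semaphore` plays the role of the counting semaphores (write semaphore initialized to the capacity, read semaphore to zero):

```python
import threading

class CircularQueue:
    """First-level cache: a ring of packet slots guarded by two counting semaphores."""

    def __init__(self, capacity: int):
        self.slots = [None] * capacity
        self.capacity = capacity
        self.read_ptr = 0                               # next slot to read
        self.write_ptr = 0                              # next slot to write
        self.read_sem = threading.Semaphore(0)          # used space, starts at zero
        self.write_sem = threading.Semaphore(capacity)  # unused space, starts full

    def put(self, packet):
        # Receiving thread: acquire free space, write at the write pointer,
        # advance it with wrap-around, then publish one readable slot.
        self.write_sem.acquire()
        self.slots[self.write_ptr] = packet
        self.write_ptr = (self.write_ptr + 1) % self.capacity
        self.read_sem.release()

    def get(self):
        # Processing thread: acquire a readable slot, read at the read pointer,
        # advance it with wrap-around, then free one slot for the writer.
        self.read_sem.acquire()
        packet = self.slots[self.read_ptr]
        self.read_ptr = (self.read_ptr + 1) % self.capacity
        self.write_sem.release()
        return packet
```

With this pairing the receiver blocks only when the ring is full and the processor only when it is empty, so no packet is overwritten before it is consumed.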
6. The method for accelerating the storage of multiple satellite payload files according to claim 1, wherein each path of payload data is provided with a double cache, further comprising:
the states of the two buffers in a double buffer are initialized to empty, and when data is being stored into a buffer, its state is updated to the receiving state;
when the data written into the buffer reaches the buffer's upper limit, its state is updated to the storage state;
when all the data in the buffer has been written into a file for storage, its state is updated back to the empty state.
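The three-state cycle of claim 6 (empty → receiving → storage → empty) can be sketched as a small state machine; this is an illustrative Python sketch, with the class and method names assumed:

```python
from enum import Enum

class BufState(Enum):
    EMPTY = 0      # initial state: no data in the buffer
    RECEIVING = 1  # parsed load data is being accumulated
    STORING = 2    # buffer is full and waiting to be written to file

class Buffer:
    def __init__(self, limit: int):
        self.limit = limit     # upper limit of the buffer, in packets
        self.data = []
        self.state = BufState.EMPTY

    def store(self, item) -> None:
        # Writing data moves an empty buffer into the receiving state;
        # reaching the upper limit moves it into the storage state.
        assert self.state in (BufState.EMPTY, BufState.RECEIVING)
        self.state = BufState.RECEIVING
        self.data.append(item)
        if len(self.data) == self.limit:
            self.state = BufState.STORING

    def drain(self) -> list:
        # The storage thread writes the contents out, after which the
        # buffer is empty again and can receive new data.
        assert self.state is BufState.STORING
        written, self.data = self.data, []
        self.state = BufState.EMPTY
        return written
```

Two such `Buffer` objects per load path give the ping-pong behavior: one can be in the receiving state while its partner is in the storage state.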
7. The method for accelerating the storage of multiple satellite payload files according to claim 1, wherein each path of payload data is provided with a double cache, further comprising:
each load data type is provided with a corresponding double buffer controlled by the state machine, wherein,
in the load data processing thread, valid data of the corresponding type is stored into a buffer whose state for that load data is empty or receiving;
and in the load data storage thread, data in a buffer in the storage state is written into the specified file for storage.
8. The method for accelerating the storage of multiple satellite payload files according to claim 1, wherein each path of payload data is provided with a double cache, further comprising:
in the load data processing thread, after a load data packet is obtained from the circular buffer queue, the buffer in the receiving state is preferentially selected, according to the state of each buffer in the corresponding double buffer, to store the extracted valid data;
if no buffer is in the receiving state, whether buffer 1 of the double buffer is empty is checked; if so, buffer 1 is selected to store the valid data; otherwise whether buffer 2 is empty is checked, and if so, buffer 2 is selected to store the valid data;
and if both buffers of the double buffer corresponding to the load data are currently in the storage state, the load data processing thread is blocked and the return value is set to an information code indicating that the second-level cache is busy.
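The selection order of claim 8 (receiving first, then buffer 1 if empty, then buffer 2, otherwise busy) can be sketched directly; the function name, the state strings, and the `CACHE_BUSY` code are assumptions for illustration:

```python
CACHE_BUSY = -1  # assumed return code meaning "second-level cache is busy"

def select_buffer(states):
    """Pick which of the two buffers receives the extracted valid data.

    `states` holds the state strings of buffer 1 and buffer 2 in order;
    returns the chosen buffer index (0 or 1), or CACHE_BUSY if both
    buffers are still being written to file.
    """
    # A buffer already in the receiving state is preferred, so that a
    # partially filled buffer keeps filling.
    for i, state in enumerate(states):
        if state == "receiving":
            return i
    # Otherwise buffer 1 is checked for the empty state first, then buffer 2.
    for i, state in enumerate(states):
        if state == "empty":
            return i
    # Both halves are in the storage state: the processing thread would
    # block here and report the busy information code.
    return CACHE_BUSY
```

Returning a distinct busy code lets the caller distinguish back-pressure from the second-level cache (storage falling behind) from ordinary parsing failures.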
CN202010008656.3A 2020-01-02 2020-01-02 Method for accelerating storage of multi-path on-board load file Active CN111209228B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010008656.3A CN111209228B (en) 2020-01-02 2020-01-02 Method for accelerating storage of multi-path on-board load file

Publications (2)

Publication Number Publication Date
CN111209228A CN111209228A (en) 2020-05-29
CN111209228B true CN111209228B (en) 2023-05-26

Family

ID=70788407

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010008656.3A Active CN111209228B (en) 2020-01-02 2020-01-02 Method for accelerating storage of multi-path on-board load file

Country Status (1)

Country Link
CN (1) CN111209228B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111859458A (en) * 2020-07-31 2020-10-30 北京无线电测量研究所 Data updating recording method, system, medium and equipment
CN112002115B (en) * 2020-08-05 2021-04-23 中车工业研究院有限公司 Data acquisition method and data processor
CN112231246A (en) * 2020-10-31 2021-01-15 王志平 Method for realizing processor cache structure
CN112702418A (en) * 2020-12-21 2021-04-23 潍柴动力股份有限公司 Double-cache data downloading control method and device and vehicle
CN113014308B (en) * 2021-02-23 2022-08-02 湖南斯北图科技有限公司 Satellite communication high-capacity channel parallel Internet of things data receiving method
CN113611102B (en) * 2021-07-30 2022-10-11 中国科学院空天信息创新研究院 Multi-channel radar echo signal transmission method and system based on FPGA
CN113885811B (en) * 2021-10-19 2023-09-19 展讯通信(天津)有限公司 Data receiving method and device, chip and electronic equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107172037A (en) * 2017-05-11 2017-09-15 华东师范大学 A kind of real-time subpackage analytic method of multichannel multi-channel high-speed data stream
CN110347369A (en) * 2019-06-05 2019-10-18 天津职业技术师范大学(中国职业培训指导教师进修中心) A kind of more caching Multithread Data methods

Non-Patent Citations (2)

Title
Yao Shun, Ma Xudong. Design and implementation of real-time multi-task software for an industrial robot controller. Industrial Control Computer. 2017, Vol. 30, No. 3, Section 3.2. *
Dong Zhenxing et al. Throughput bottleneck of spaceborne memory and a high-speed parallel caching mechanism. Journal of Harbin Institute of Technology. 2017, Vol. 49, No. 11, Sections 1.2 and 2. *

Similar Documents

Publication Publication Date Title
CN111209228B (en) Method for accelerating storage of multi-path on-board load file
US10693787B2 (en) Throttling for bandwidth imbalanced data transfers
CN1126028C (en) Computer processor with replay system
US7676646B2 (en) Packet processor with wide register set architecture
KR102011949B1 (en) System and method for providing and managing message queues for multinode applications in a middleware machine environment
US7650602B2 (en) Parallel processing computer
CN1294484C (en) Breaking replay dependency loops in processor using rescheduled replay queue
KR100932038B1 (en) Message Queuing System for Parallel Integrated Circuit Architecture and Its Operation Method
US7337275B2 (en) Free list and ring data structure management
US20090133023A1 (en) High Performance Queue Implementations in Multiprocessor Systems
CN111124641B (en) Data processing method and system using multithreading
US20080320478A1 (en) Age matrix for queue dispatch order
CN107153511B (en) Storage node, hybrid memory controller and method for controlling hybrid memory group
US20150040140A1 (en) Consuming Ordered Streams of Messages in a Message Oriented Middleware
CN103019810A (en) Scheduling and management of compute tasks with different execution priority levels
US10146468B2 (en) Addressless merge command with data item identifier
US20050204103A1 (en) Split queuing
US6324601B1 (en) Data structure and method for managing multiple ordered sets
US20020124157A1 (en) Method and apparatus for fast operand access stage in a CPU design using a cache-like structure
US9965321B2 (en) Error checking in out-of-order task scheduling
CN104769553A (en) System and method for supporting work sharing muxing in a cluster
US20230004346A1 (en) Element ordering handling in a ring buffer
US7035908B1 (en) Method for multiprocessor communication within a shared memory architecture
CN116719479B (en) Memory access circuit, memory access method, integrated circuit, and electronic device
US20240111575A1 (en) Synchronization Method for Low Latency Communication for Efficient Scheduling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant