CN115982068A - Data processing method and related device - Google Patents


Info

Publication number
CN115982068A
CN115982068A (application CN202211729886.4A)
Authority
CN
China
Prior art keywords
data
cache memory
processed
cpu
storage
Prior art date
Legal status
Pending
Application number
CN202211729886.4A
Other languages
Chinese (zh)
Inventor
贾复山
杨八双
孙文瀚
Current Assignee
Suzhou Centec Communications Co Ltd
Original Assignee
Suzhou Centec Communications Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Centec Communications Co Ltd
Priority to CN202211729886.4A
Publication of CN115982068A


Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiment of the application provides a data processing method and a related device, relating to the field of embedded systems. Whether a cache memory meets a preset data storage condition is determined according to the data size and data type of the data to be processed, where the data to be processed comprises external data and/or stored data held in a large-capacity dynamic random access memory. If the cache memory meets the preset data storage condition, the data to be processed is stored into the cache memory so that the CPU can acquire it from the cache memory and process it; if the cache memory does not meet the preset data storage condition, the data to be processed is stored into the large-capacity dynamic random access memory. The method improves the efficiency with which the CPU acquires data, thereby improving the performance of the embedded system.

Description

Data processing method and related device
Technical Field
The present application relates to the field of embedded systems, and in particular, to a data processing method and a related apparatus.
Background
In an embedded system, when a CPU (Central Processing Unit) exchanges data with an external device, it is often necessary to receive the external data through a DMA (Direct Memory Access) controller, store it at a designated address in a large-capacity dynamic random access memory, and then fetch it from that memory for processing.
However, as the amount of data exchanged between the CPU and external devices grows, this approach leaves the CPU fetching data inefficiently, which in turn degrades the performance of the embedded system.
Disclosure of Invention
In view of the above, an object of the present application is to provide a data processing method and a related apparatus that improve the efficiency with which a CPU acquires data, thereby improving the performance of an embedded system.
In order to achieve the above object, the embodiments of the present application adopt the following technical solutions:
in a first aspect, the present application provides a data processing method, which is applied to a DMA controller in an embedded system, where the embedded system further includes a CPU, a cache memory, and a large-capacity dynamic random access memory, and the DMA controller is connected to the CPU, the cache memory, and the large-capacity dynamic random access memory through buses, respectively, and the method includes:
determining whether the cache memory meets a preset data storage condition or not according to the data size and the data type of the data to be processed; the data to be processed comprises external data and/or storage data stored in a large-capacity dynamic random access memory;
if the cache memory meets the preset data storage condition, storing the data to be processed into the cache memory so that the CPU can acquire the data to be processed from the cache memory and process the data to be processed;
and if the cache memory does not meet the preset data storage condition, storing the data to be processed into the large-capacity dynamic random access memory.
In an optional embodiment, a plurality of data channels are arranged between the DMA controller and the high-capacity dynamic random access memory and the cache memory, wherein each data channel corresponds to one data type;
the high-capacity dynamic random access memory is provided with a storage space with continuous addresses according to a second preset capacity for each data channel;
a corresponding channel register is arranged in the DMA controller for each data channel, and the channel register is used for storing a read-write pointer; the read-write pointer is used for representing the storage condition of the storage space corresponding to the data to be processed.
In an optional embodiment, the embedded system further includes an input interface module, the DMA controller is further electrically connected to the input interface module, and the input interface module is configured to obtain the external data; the read-write pointer comprises a cache memory write pointer, a processing completion write pointer and a high-capacity dynamic random access memory write pointer, and the data storage condition comprises a first data storage condition;
the first data storage condition comprises that the residual capacity of the storage space corresponding to the external data in the cache memory is larger than or equal to the data size of the external data, and the large-capacity dynamic random access memory does not store the storage data corresponding to the data type of the external data;
determining the residual capacity of the storage space corresponding to the external data in the cache memory according to the total capacity of the storage space corresponding to the external data in the cache memory, a cache memory write pointer corresponding to the external data and a processing completion write pointer; whether the storage data corresponding to the data type of the external data is stored in the high-capacity dynamic random access memory or not is determined according to a cache memory write pointer corresponding to the external data and a high-capacity dynamic random access memory write pointer;
if the cache memory meets a preset data storage condition, storing the data to be processed into the cache memory, including:
if the cache memory meets the first data storage condition, storing the external data into a corresponding storage space in the cache memory through a data channel corresponding to the external data;
and updating a cache memory write pointer corresponding to the external data.
In an optional embodiment, if the cache memory does not satisfy a preset data storage condition, the storing the data to be processed into the large-capacity dynamic random access memory includes:
if the cache memory does not meet the first data storage condition, storing the external data into a corresponding storage space in the high-capacity dynamic random access memory through a data channel corresponding to the external data;
and updating a high-capacity dynamic random access memory write pointer corresponding to the external data.
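The first data storage condition described above can be illustrated with a minimal C sketch. This is a reconstruction under stated assumptions, not the patent's register map: the struct layout, field names, and the use of free-running (monotonically wrapping) pointers are all illustrative, and treating pointer equality as the "no DRAM backlog" test is just one plausible encoding of the comparison between the cache write pointer and the DRAM write pointer.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative channel-register image (field names are assumptions). */
typedef struct {
    uint32_t cache_wr;    /* cache memory write pointer            */
    uint32_t done_wr;     /* processing-completion write pointer   */
    uint32_t dram_wr;     /* large-capacity DRAM write pointer     */
    uint32_t cache_size;  /* first preset capacity, in bytes       */
} chan_regs_t;

/* Residual capacity of the channel's cache region: total capacity minus
 * the bytes written but not yet marked processing-complete.  Unsigned
 * subtraction handles wrap-around of free-running pointers. */
static uint32_t cache_free(const chan_regs_t *r)
{
    return r->cache_size - (r->cache_wr - r->done_wr);
}

/* First data storage condition: enough room in the cache region for the
 * incoming external data, and no backlog of the same data type waiting
 * in the large-capacity DRAM. */
bool first_condition(const chan_regs_t *r, uint32_t len)
{
    return cache_free(r) >= len && r->dram_wr == r->cache_wr;
}
```

If either clause fails, the external data goes to the DRAM region instead (and the DRAM write pointer is advanced), preserving arrival order within the data type.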
In an alternative embodiment, the data storage condition comprises a second data storage condition, and the read-write pointer comprises an append read pointer and a cache memory write pointer;
the second data storage condition comprises that the residual capacity of the storage space corresponding to the stored data in the cache memory is greater than or equal to the data size of the stored data; wherein the stored data is determined according to the append read pointer in each channel register;
if the cache memory meets a preset data storage condition, storing the data to be processed into the cache memory, including:
if the cache memory meets the second data storage condition, acquiring the storage data from the high-capacity dynamic random access memory, and storing the storage data into a corresponding storage space in the cache memory through a data channel corresponding to the storage data;
and updating an additional read pointer and a cache memory write pointer corresponding to the stored data.
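The second-condition path, moving backlog from the channel's DRAM region into its cache region and advancing both pointers, might look like the following sketch. All names are hypothetical, and a byte-copy loop stands in for the actual DMA burst.

```c
#include <stdint.h>

/* Illustrative channel state; base pointers address the per-channel
 * contiguous regions set up at initialization. */
typedef struct {
    uint8_t  *dram_base;   /* channel's region in large-capacity DRAM */
    uint8_t  *cache_base;  /* channel's region in the cache memory    */
    uint32_t  dram_cap;    /* second preset capacity                  */
    uint32_t  cache_cap;   /* first preset capacity                   */
    uint32_t  append_rd;   /* append read pointer (into DRAM region)  */
    uint32_t  cache_wr;    /* cache memory write pointer              */
} channel_t;

/* Move len bytes of stored data from the DRAM ring into the cache ring,
 * honouring wrap-around in both, then advance the pointers. */
void refill_cache(channel_t *c, uint32_t len)
{
    for (uint32_t i = 0; i < len; i++) {
        c->cache_base[(c->cache_wr + i) % c->cache_cap] =
            c->dram_base[(c->append_rd + i) % c->dram_cap];
    }
    c->append_rd += len;   /* backlog consumed from DRAM  */
    c->cache_wr  += len;   /* data now visible to the CPU */
}
```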
In an optional embodiment, the embedded system further includes an output interface module, the DMA controller is further electrically connected to the output interface module, and a buffer is disposed in the DMA controller; the read-write pointer comprises a CPU read pointer and a processing completion write pointer, and after the data to be processed is stored in the cache memory, the method further comprises the following steps:
acquiring processing completion information sent by the CPU, and determining whether the processed data is forwarding data according to the processing completion information; the processed data is the data obtained after the CPU performs data processing on the data to be processed;
if the processed data is forwarding data, updating a CPU read pointer corresponding to the processed data, and acquiring the processed data according to storage position information in the processed data;
storing the processing completion data into the buffer, and updating a processing completion write pointer corresponding to the processing completion data;
transmitting the processed data in the buffer to an external device through the output interface module;
and if the processed data is non-forwarded data, updating a CPU read pointer and a processed write pointer corresponding to the processed data.
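The pointer updates in this completion flow can be sketched as follows. This is an illustrative model, not the patent's implementation: the `staged` flag stands in for fetching the processed data via its storage location information and placing it in the DMA controller's internal buffer before it is sent out through the output interface module.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative completion-flow state (names are assumptions). */
typedef struct {
    uint32_t cpu_rd;     /* CPU read pointer                    */
    uint32_t done_wr;    /* processing-completion write pointer */
    bool     staged;     /* processed data copied into the DMA
                            controller's internal buffer?       */
} done_state_t;

/* Handle processing-completion information from the CPU for len bytes.
 * Forwarding data is staged in the buffer before the slot is released;
 * non-forwarded data only releases the slot. */
void on_processing_done(done_state_t *s, bool is_forwarding, uint32_t len)
{
    s->cpu_rd += len;        /* CPU has consumed the data           */
    if (is_forwarding)
        s->staged = true;    /* fetched via storage location info,
                                placed in the internal buffer       */
    s->done_wr += len;       /* slot may now hold new incoming data */
}
```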
In an alternative embodiment, the method further comprises:
acquiring spontaneous data information sent by the CPU, and acquiring spontaneous data according to storage position information in the spontaneous data information; the spontaneous data is data spontaneously generated by the CPU;
storing the spontaneous data in the buffer;
and sending the spontaneous data in the buffer to an external device through the output interface module.
In an optional embodiment, a monitoring module is disposed at an interface between the CPU and the bus, and the method further includes:
and receiving the operation information sent by the monitoring module, and determining the current operation state of the CPU according to the operation information.
In a second aspect, the present application provides a data processing apparatus, which is applied to a DMA controller in an embedded system, the embedded system further includes a CPU, a cache memory and a high-capacity dynamic random access memory, and the DMA controller is connected to the CPU, the cache memory and the high-capacity dynamic random access memory through buses, respectively, the apparatus includes:
the determining module is used for determining whether the cache memory meets a preset data storage condition according to the data size and the data type of the data to be processed; the data to be processed comprises external data and/or storage data stored in a large-capacity dynamic random access memory;
the storage module is used for storing the data to be processed into the cache memory if the cache memory meets a preset data storage condition so that the CPU can conveniently acquire the data to be processed from the cache memory and process the data to be processed;
the storage module is further configured to store the data to be processed into the large-capacity dynamic random access memory if the cache memory does not meet a preset data storage condition.
In a third aspect, the present application provides a DMA controller comprising a processor and a memory, wherein the memory stores a computer program executable by the processor, and the processor executes the computer program to implement the method of any one of the foregoing embodiments.
In a fourth aspect, the present application provides an embedded system, including the DMA controller described in the foregoing embodiments.
In a fifth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method according to any of the preceding embodiments.
According to the data processing method and the related device provided by the embodiment of the application, the data to be processed comprises external data and/or storage data stored in a high-capacity dynamic random access memory, the DMA controller can determine whether a cache memory meets a preset data storage condition or not according to the data size and the data type of the data to be processed, and if so, the data to be processed is stored in the cache memory so that a CPU (central processing unit) can obtain the data to be processed from the cache memory and process the data to be processed; if not, storing the data to be processed in a large-capacity dynamic random access memory. By the method, the DMA can store the external data and/or the storage data stored in the large-capacity dynamic random access memory in the cache memory under the condition that the cache memory meets the preset data storage condition, and the CPU can directly acquire the data to be processed from the cache memory for data processing without acquiring the external data from the large-capacity dynamic random access memory, so that the data acquisition efficiency of the CPU can be improved, and the performance of the embedded system is improved.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a block diagram illustrating an embedded system provided by an embodiment of the present application;
FIG. 2 is a block diagram of a DMA controller provided by an embodiment of the present application;
fig. 3 is a schematic flowchart illustrating a data processing method according to an embodiment of the present application;
FIG. 4 shows an initialization read-write pointer diagram;
FIG. 5 shows a diagram of read and write pointers after a DMA controller receives data;
FIG. 6 is a functional block diagram of a data processing apparatus according to an embodiment of the present application;
fig. 7 is a functional block diagram of a data processing apparatus according to an embodiment of the present application.
Reference numerals: 10-an embedded system; 20-a bus; 100-a DMA controller; 110-a memory; 120-a processor; 130-a communication module; 200-CPU; 300-cache memory; 400-large capacity dynamic random access memory; 500-an input interface module; 600-an output interface module; 700-a monitoring module; 800-a determination module; 810-a storage module; 820-sending module.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, as presented in the figures, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It is noted that relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
At present, the performance of the embedded system degrades to a certain extent as the amount of data exchanged between the CPU and external devices grows. In general, three factors affect the performance of an embedded system: first, the data processing capability of the CPU; second, the data processing capability of the DMA controller; third, the performance with which the CPU reads and writes memory.
Of these factors, the data processing capabilities of the CPU and of the DMA controller basically depend on the architecture and operating rate of the hardware itself, and the data processing capability of the DMA controller is generally much higher than that of the CPU, so the performance bottleneck of the embedded system mainly lies with the data processing capability of the CPU. Optimizing the CPU architecture would require redesigning the CPU, which consumes a great deal of time and labor; most existing integrated designs therefore adopt an off-the-shelf CPU whose architecture cannot be changed at will, and on this basis the CPU's data processing capability is difficult to improve. It can be understood that, in this situation, the performance of the embedded system can instead be optimized by improving the performance with which the CPU reads and writes memory, for example by improving the efficiency with which the CPU acquires data.
In the prior art, when a CPU performs data interaction with an external device, it is often necessary to receive external data through a DMA controller and write the received data into a designated address in a high-capacity dynamic random access memory, and then the CPU can read and process the data from the designated address in the high-capacity dynamic random access memory. Because the high-capacity dynamic random access memory has long response time and low throughput, the method can cause the efficiency of the CPU for acquiring data to be low, so that the read-write performance of the CPU is poor, and the performance of an embedded system is influenced.
Based on this, the embodiment of the present application provides a data processing method that improves the read-write performance of a CPU by improving the efficiency with which the CPU acquires data, thereby optimizing the performance of an embedded system. Specifically, fig. 1 is a block diagram of an embedded system 10 according to an embodiment of the present disclosure. Referring to fig. 1, the embedded system 10 includes a DMA controller 100, a CPU 200, a cache memory 300, and a large-capacity dynamic random access memory 400, and the DMA controller 100 is respectively connected to the CPU 200, the cache memory 300, and the large-capacity dynamic random access memory 400 through a bus 20.
Optionally, the CPU may be an ARM Cortex-A55, which has one or two bus access interfaces with Cache coherency and also has an ACP (Accelerator Coherency Port) interface that can partially implement the Cache write operation function. The CPU can control the operation of the DMA controller through the bus and check the status of the DMA controller, and the DMA controller can realize the Cache write operation function through the ACP interface. Meanwhile, the large-capacity dynamic random access memory is connected to the bus, allowing both the CPU and the DMA controller to perform read-write operations.
Optionally, the Cache Memory 300 may be a Cache or an on-chip SRAM (Static Random-Access Memory), and may be specifically selected according to a CPU function limiting factor. For example, if the CPU does not support an ACP-like interface, or Cache write operations, etc., on-chip SRAM may be selected as the Cache memory 300.
Alternatively, the large capacity dynamic random access memory 400 may be a DDR (Double Data Rate) memory.
Optionally, the embedded system may further include an input interface module 500, an output interface module 600, and a monitoring module 700, wherein the input interface module 500 and the output interface module 600 are respectively electrically connected to the DMA controller 100, and the monitoring module 700 is disposed at an interface between the CPU and the bus.
Optionally, fig. 2 is a block diagram of the DMA controller 100, and referring to fig. 2, the DMA controller 100 includes a memory 110, a processor 120 and a communication module 130. The memory 110, the processor 120, and the communication module 130 are electrically connected to each other directly or indirectly to enable data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines.
The memory 110 is used to store programs or data. The memory 110 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 120 is used to read/write data or programs stored in the memory and perform corresponding functions.
The communication module 130 is used for establishing a communication connection between the DMA controller 100 and other communication terminals through a network, and for transceiving data through the network.
It should be understood that the structure shown in fig. 2 is merely a schematic diagram of the structure of the DMA controller 100, and the DMA controller 100 may also include more or fewer components than shown in fig. 2, or have a different configuration than shown in fig. 2. The components shown in fig. 2 may be implemented in hardware, software, or a combination thereof.
Next, taking the DMA controller 100 in fig. 1 as an execution subject, a data processing method provided by the embodiment of the present application is exemplarily described with reference to a flowchart. Specifically, fig. 3 is a schematic flow chart of a data processing method according to an embodiment of the present application, please refer to fig. 3, where the method includes:
step S20, determining whether the cache memory meets the preset data storage condition according to the data size and the data type of the data to be processed;
the data to be processed comprises external data and/or storage data stored in a large-capacity dynamic random access memory;
optionally, the external data is data sent by an external device, and the external data may be sent to the DMA controller through the input interface module; the stored data is data that is stored in a large-capacity dynamic random access memory and needs to be subjected to data processing.
In this embodiment, the embedded system may have both received external data and stored data held in the large-capacity dynamic random access memory, may have only external data with no stored data in the large-capacity dynamic random access memory, or may have only stored data in the large-capacity dynamic random access memory with no external data.
Step S21, if the cache memory meets the preset data storage condition, storing the data to be processed into the cache memory so that the CPU can acquire the data to be processed from the cache memory and process the data to be processed;
and step S22, if the cache memory does not meet the preset data storage condition, storing the data to be processed into the high-capacity dynamic random access memory.
Alternatively, the data storage condition may be stored in the DMA controller in advance for determining whether to store the pending data into the cache memory at present.
Optionally, if the data to be processed is external data, storing the data to be processed into the large-capacity dynamic random access memory under the condition that the cache memory is determined not to meet the preset data storage condition; and in the case that the cache memory meets the preset data storage condition, storing the data to be processed into the cache memory.
Optionally, if the data to be processed is storage data, under the condition that the cache memory is determined not to meet the preset data storage condition, the storage location of the data to be processed is not updated, and the data to be processed is continuously stored in the large-capacity dynamic random access memory; and under the condition that the cache memory meets the preset data storage condition, taking the data to be processed out of the large-capacity dynamic random access memory and storing the data to be processed into the cache memory.
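The dispatch in steps S20 to S22 can be summarized in a short C sketch. This is an illustrative reconstruction, with the storage-condition check (elaborated in the later embodiments) reduced to an opaque predicate; the enum and function names are hypothetical.

```c
#include <stdbool.h>

typedef enum { DATA_EXTERNAL, DATA_STORED } data_origin_t;

/* Where the DMA controller decided the data should reside. */
typedef enum { PLACED_IN_CACHE, PLACED_IN_DRAM, LEFT_IN_DRAM } placement_t;

/* Steps S20-S22: if the cache meets the preset condition the data goes
 * into the cache (S21); otherwise external data is written out to the
 * large-capacity DRAM (S22), while stored data simply remains there. */
placement_t dispatch(data_origin_t origin, bool cache_condition_met)
{
    if (cache_condition_met)
        return PLACED_IN_CACHE;   /* CPU will hit the cache           */
    if (origin == DATA_EXTERNAL)
        return PLACED_IN_DRAM;    /* S22: store external data to DRAM */
    return LEFT_IN_DRAM;          /* storage location not updated     */
}
```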
In a possible implementation manner, a status register may be further provided in the DMA controller, and the DMA controller may update the status in the status register after storing the data to be processed into the cache memory, so that the CPU determines whether the data to be processed is currently stored in the cache memory by querying the status register, thereby processing the data to be processed.
In another possible implementation manner, after the data to be processed is stored in the cache memory, the DMA controller may generate an interrupt signal and send the interrupt signal to the CPU, so that the CPU determines whether the data to be processed is stored in the current cache memory according to the received interrupt signal, thereby processing the data to be processed.
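The two notification mechanisms above, a status register that the CPU polls and an interrupt signal, could be modelled as in the following sketch. The register layout (one bit per data channel) and all names are assumptions for illustration only.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative notification state shared by DMA controller and CPU. */
typedef struct {
    uint32_t status;       /* bit n set: channel n has data in cache */
    bool     irq_pending;  /* interrupt signalled to the CPU         */
} notify_t;

/* DMA side: mark a channel's data as ready, optionally raising an
 * interrupt instead of relying on the CPU to poll. */
void notify_data_ready(notify_t *n, unsigned channel, bool use_interrupt)
{
    n->status |= (1u << channel);
    if (use_interrupt)
        n->irq_pending = true;
}

/* CPU side: check-and-clear the ready bit for one channel. */
bool cpu_poll(notify_t *n, unsigned channel)
{
    bool ready = (n->status >> channel) & 1u;
    n->status &= ~(1u << channel);
    return ready;
}
```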
It can be understood that, in this embodiment, whether the data is external data or stored data, the DMA ultimately stores the data to be processed that requires CPU processing into the cache memory. This ensures that the CPU hits the cache memory every time it acquires the data to be processed, so the CPU is spared from fetching the data from the large-capacity dynamic random access memory, which improves the efficiency with which the CPU acquires data and improves the read-write performance of the CPU.
According to the data processing method provided by the embodiment of the application, the data to be processed comprises external data and/or storage data stored in a high-capacity dynamic random access memory, the DMA controller can determine whether a cache memory meets a preset data storage condition according to the data size and the data type of the data to be processed, if so, the data to be processed is stored in the cache memory, so that a CPU (central processing unit) can obtain the data to be processed from the cache memory and process the data to be processed; if not, storing the data to be processed in a large-capacity dynamic random access memory. By the method, the DMA can store the external data and/or the storage data stored in the large-capacity dynamic random access memory in the cache memory under the condition that the cache memory meets the preset data storage condition, and the CPU can directly acquire the data to be processed from the cache memory for data processing without acquiring the external data from the large-capacity dynamic random access memory, so that the data acquisition efficiency of the CPU can be improved, and the performance of the embedded system is improved.
Alternatively, in order to ensure the functions of receiving, storing, sending and the like of data by the DMA controller, the CPU may first perform an initialization operation for the DMA controller before the DMA controller enters the working state.
Specifically, the CPU may set a plurality of data channels between the DMA controller and the large capacity dram, cache memory, depending on the type of data during initialization. In a possible implementation manner, the CPU may set at least one data channel for each data type according to the data type of the data to be processed, and is used to receive and send the data to be processed corresponding to the data type.
Alternatively, the CPU may also set a storage space with continuous addresses in the cache memory and the large-capacity dynamic random access memory for each data channel, respectively, during initialization, to store the data to be processed sent through the channel in the cache memory and the large-capacity dynamic random access memory.
In one possible implementation manner, the size of the storage space of each data channel in the cache memory may be a first preset size, and the size of the storage space of each data channel in the large-capacity dynamic random access memory may be a second preset size. The first preset capacity and the second preset capacity can be preset according to actual application conditions.
Optionally, in order to facilitate the DMA controller to monitor the storage condition of each storage space, the CPU may further set a corresponding channel register in the DMA controller for each data channel during the initialization process, where each channel register may store a read-write pointer.
Optionally, the CPU may set the read-write pointer in each channel register to a default value during initialization, and in one possible implementation, the default value may be 0.
In addition, in order to facilitate the CPU in determining whether data to be processed is stored in the cache memory, and to facilitate the DMA controller in determining whether data to be sent exists, the CPU may also set a control register and a status register for each data channel in the DMA controller during initialization. The status register is used for storing status information indicating whether data to be processed is stored in the cache memory, and the control register is used for storing control information indicating whether data to be sent exists.
It can be understood that the DMA controller may determine a corresponding data channel according to a data type of the data to be processed, so as to store the data to be processed into a corresponding storage space in the cache memory through the data channel, modify the read-write pointer in the channel register corresponding to the data channel, and then modify the status register corresponding to the data channel, so that the CPU obtains the data to be processed from the corresponding storage space in the cache memory for processing. In a possible implementation manner, if the data type corresponds to multiple data channels, one data channel may be randomly selected for data storage, or one data channel may be determined from the multiple data channels corresponding to the data type for data storage according to other data information, which is not limited in this application.
It can be understood that, after initialization, a plurality of data channels are arranged between the DMA controller and the large-capacity dynamic random access memory and the cache memory, where each data channel corresponds to one data type; the cache memory is provided with a storage space with continuous addresses according to the first preset capacity for each data channel, and the large-capacity dynamic random access memory is provided with a storage space with continuous addresses according to the second preset capacity for each data channel; a corresponding channel register is arranged in the DMA controller for each data channel and is used for storing a read-write pointer; and the read-write pointer is used for representing the storage condition of the storage space corresponding to the data to be processed.
It is understood that after initialization, each data type may correspond to at least one data channel, each data channel may correspond to a storage space in a cache memory and a storage space in a large-capacity dynamic random access memory, and each data channel may also correspond to a channel register in a DMA, and each channel register is provided with a corresponding read/write pointer.
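As an illustration of the state left behind by this initialization, the per-channel bookkeeping might be sketched as follows. This is a hypothetical C model; the structure, field names, and the use of free-running byte counters are assumptions made for illustration, not details of the embodiment.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of one data channel's register state after
 * initialization. Each pointer is modeled as a free-running byte
 * counter rather than a hardware register. */
typedef struct {
    uint32_t cache_wr;  /* cache memory write pointer */
    uint32_t done_wr;   /* processing completion write pointer */
    uint32_t dram_wr;   /* large-capacity DRAM write pointer */
    uint32_t cpu_rd;    /* CPU read pointer */
    uint32_t cache_cap; /* first preset capacity (cache storage space) */
    uint32_t dram_cap;  /* second preset capacity (DRAM storage space) */
} channel_t;

/* During initialization the CPU sets every read-write pointer to the
 * default value 0 and records the two preset capacities. */
void channel_init(channel_t *ch, uint32_t cache_cap, uint32_t dram_cap) {
    ch->cache_wr = 0;
    ch->done_wr = 0;
    ch->dram_wr = 0;
    ch->cpu_rd = 0;
    ch->cache_cap = cache_cap;
    ch->dram_cap = dram_cap;
}
```

One such structure would exist per data channel, alongside the control and status registers described above.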
Alternatively, the CPU may control the receive data enable switch in the DMA controller to be turned on after performing the initialization operation so that the DMA controller receives external data.
Optionally, in order to facilitate the DMA controller in monitoring the state of data written into the cache memory, the state of data written into the large-capacity dynamic random access memory, and the state of data processed by the CPU, the read-write pointer may include a cache memory write pointer, a processing completion write pointer, and a large-capacity dynamic random access memory write pointer. The cache memory write pointer is used for recording the state of data written by the DMA controller into the cache memory, the processing completion write pointer is used for recording the state of data processed by the CPU, and the large-capacity dynamic random access memory write pointer is used for recording the state of data written by the DMA controller into the large-capacity dynamic random access memory.
Optionally, the data storage condition includes a first data storage condition, where the first data storage condition includes that the remaining capacity of the storage space corresponding to the external data in the cache memory is greater than or equal to the data size of the external data, and the large-capacity dynamic random access memory does not store the storage data corresponding to the data type of the external data.
It is understood that the remaining capacity of the storage space corresponding to the external data in the cache memory can be determined according to the total capacity of the storage space corresponding to the external data in the cache memory, the cache memory write pointer corresponding to the external data, and the processing completion write pointer; whether the storage data corresponding to the data type of the external data is stored in the large-capacity dynamic random access memory or not can be determined according to a cache memory write pointer corresponding to the external data and a large-capacity dynamic random access memory write pointer.
That is, if the cache memory write pointer corresponding to the external data is the same as the large-capacity dynamic random access memory write pointer, it indicates that the large-capacity dynamic random access memory does not store any stored data corresponding to the data type of the external data, and all data of that data type currently resides in the cache memory.
Alternatively, the remaining capacity of the storage space refers to a capacity of the storage space that can be used for storing the data to be processed, and it is understood that the used capacity of the storage space corresponding to the external data in the cache memory can be determined by the cache memory write pointer corresponding to the external data and the processing completion write pointer, and then the remaining capacity of the storage space corresponding to the external data in the cache memory can be determined according to the total capacity of the storage space corresponding to the external data in the cache memory and the used capacity.
In this embodiment, since the CPU needs to process the data to be processed in the order in which the DMA controller received it, if the large-capacity dynamic random access memory already stores data corresponding to the data type of the external data, this indicates that earlier data of that type is still waiting and has not yet been stored into the cache memory; to preserve the processing order, the external data should therefore first be stored into the large-capacity dynamic random access memory to wait for processing.
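The remaining-capacity computation and the first data storage condition described above can be sketched as follows. This is hypothetical C: the embodiment compares the cache memory write pointer with the large-capacity dynamic random access memory write pointer directly, whereas this simplified model tracks the DRAM backlog with a separate restore read pointer, so the equivalent "DRAM holds no pending data of this type" check is that the restore read pointer has caught up with the DRAM write pointer.

```c
#include <assert.h>
#include <stdint.h>

/* Simplified per-channel model; pointers are free-running byte counters.
 *   cache_wr   - bytes written into this channel's cache space
 *   done_wr    - bytes whose processing is complete (space reusable)
 *   dram_wr    - bytes written into this channel's DRAM space
 *   restore_rd - bytes already moved back from DRAM to the cache */
typedef struct {
    uint32_t cache_wr, done_wr, dram_wr, restore_rd;
    uint32_t cache_cap; /* first preset capacity */
} channel_t;

/* Remaining capacity = total capacity minus the used capacity, where the
 * used capacity is bounded by the cache memory write pointer and the
 * processing completion write pointer. */
uint32_t cache_remaining(const channel_t *ch) {
    return ch->cache_cap - (ch->cache_wr - ch->done_wr);
}

/* First data storage condition: enough room in the cache space, and no
 * earlier data of this type still waiting in the DRAM. */
int first_condition(const channel_t *ch, uint32_t data_size) {
    return cache_remaining(ch) >= data_size && ch->dram_wr == ch->restore_rd;
}
```

With this model, data that arrives while a backlog exists in the DRAM fails the condition even when the cache has room, which preserves the processing order discussed above.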
On this basis, the operation in step S21 of storing the data to be processed into the cache memory if the cache memory satisfies the preset data storage condition can be implemented by the following steps:
if the cache memory meets the first data storage condition, storing the external data into the corresponding storage space in the cache memory through the data channel corresponding to the external data; and updating the cache memory write pointer corresponding to the external data.
Optionally, if the cache memory meets the first data storage condition, the DMA controller may determine a corresponding data channel according to a data type of the external data, and store the external data into a corresponding storage space in the cache memory through the data channel.
It will be appreciated that at this point, the cache write pointer in the channel register corresponding to the external data also needs to be synchronously modified to point to the next available address.
Furthermore, the step S22 can be realized by the following steps: if the cache memory does not satisfy the first data storage condition, storing the external data into the corresponding storage space in the large-capacity dynamic random access memory through the data channel corresponding to the external data; and updating the large-capacity dynamic random access memory write pointer corresponding to the external data.
Optionally, if the cache memory does not satisfy the first data storage condition, the DMA controller may determine the corresponding data channel according to the data type of the external data, and store the external data into the corresponding storage space in the large-capacity dynamic random access memory through the data channel. It can be understood that, at this time, the large-capacity dynamic random access memory write pointer in the channel register corresponding to the external data also needs to be synchronously modified, so that it points to the next available address.
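Putting the two branches together, the routing decision for newly received external data might look like this. This is hypothetical C continuing the same simplified pointer model, in which DRAM occupancy is tracked as the gap between the DRAM write pointer and a restore read pointer.

```c
#include <assert.h>
#include <stdint.h>

typedef struct {
    uint32_t cache_wr, done_wr, dram_wr, restore_rd;
    uint32_t cache_cap;
} channel_t;

uint32_t cache_remaining(const channel_t *ch) {
    return ch->cache_cap - (ch->cache_wr - ch->done_wr);
}

int first_condition(const channel_t *ch, uint32_t size) {
    return cache_remaining(ch) >= size && ch->dram_wr == ch->restore_rd;
}

typedef enum { TO_CACHE, TO_DRAM } dest_t;

/* Route newly received external data: into the cache when the first data
 * storage condition holds, otherwise into the DRAM; in each case the
 * matching write pointer advances to the next available address. */
dest_t dma_store(channel_t *ch, uint32_t size) {
    if (first_condition(ch, size)) {
        ch->cache_wr += size;
        return TO_CACHE;
    }
    ch->dram_wr += size;
    return TO_DRAM;
}
```

Note how a DRAM backlog forces subsequent data of the same type into the DRAM as well, even when the cache has free capacity, so that the CPU always sees data in arrival order.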
In this embodiment, in order to ensure that the CPU can hit the cache memory each time it acquires data to be processed for processing, the DMA controller needs to restore the data to be processed stored in the large-capacity dynamic random access memory to the cache memory.
On this basis, the read-write pointer further includes a restore read pointer, and the data storage condition includes a second data storage condition, where the second data storage condition includes that the remaining capacity of the storage space corresponding to the stored data in the cache memory is greater than or equal to the data size of the stored data. The stored data is determined according to the restore read pointer in each channel register; optionally, the restore read pointer is used to characterize the stored data that currently needs to be restored to the cache memory.
In one possible implementation, the DMA controller may periodically determine, for stored data of different data types, whether the cache memory satisfies the second data storage condition; in another possible implementation, the DMA controller may make this determination in real time.
It can be understood that, if the second data storage condition is satisfied, the DMA controller may acquire the stored data from the large-capacity dynamic random access memory and restore it to the cache memory.
Based on this, the operation in step S21 of storing the data to be processed into the cache memory if the cache memory satisfies the preset data storage condition may also be implemented by the following steps:
if the cache memory meets the second data storage condition, acquiring the stored data from the large-capacity dynamic random access memory, and storing the stored data into the corresponding storage space in the cache memory through the data channel corresponding to the stored data; and updating the restore read pointer and the cache memory write pointer corresponding to the stored data.
Optionally, the DMA controller may acquire the stored data through the bus according to the restore read pointer, and then write the stored data into the corresponding storage space in the cache memory through the corresponding data channel.
In this embodiment, it can be understood that, after the DMA controller writes the stored data into the cache memory again, the DMA controller updates the restore read pointer and the cache memory write pointer corresponding to the stored data, so that the updated restore read pointer points to the next stored data that needs to be restored to the cache memory, and the updated cache memory write pointer points to the next available address in the storage space.
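The restore path can be sketched in the same simplified model. This is hypothetical C; the sizes and pointer names are illustrative.

```c
#include <assert.h>
#include <stdint.h>

typedef struct {
    uint32_t cache_wr, done_wr, dram_wr, restore_rd;
    uint32_t cache_cap;
} channel_t;

uint32_t cache_remaining(const channel_t *ch) {
    return ch->cache_cap - (ch->cache_wr - ch->done_wr);
}

/* Second data storage condition and restore step: if the cache space can
 * hold the next piece of stored data waiting in the DRAM, move it back
 * and advance both the restore read pointer (next stored data to move)
 * and the cache memory write pointer (next available cache address). */
int try_restore(channel_t *ch, uint32_t size) {
    if (ch->restore_rd == ch->dram_wr) return 0; /* nothing waiting in DRAM */
    if (cache_remaining(ch) < size) return 0;    /* condition not met yet */
    ch->restore_rd += size;
    ch->cache_wr += size;
    return 1;
}
```

In this model the restore naturally becomes possible once the CPU finishes processing older data and the processing completion write pointer frees cache capacity.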
Optionally, considering that the CPU generates processing completion data after processing the data to be processed, and that the processing completion data may or may not need to be sent to an external device, the DMA controller may send the processing completion data that needs to be sent to the external device. In order to improve the read bandwidth of the DMA, the read efficiency, and the sending performance, a buffer with a certain depth may be disposed in the DMA controller for storing the data that needs to be sent.
Alternatively, the CPU may perform parameter setting on the buffer during initialization.
It will be appreciated that, on this basis, the read and write pointers may also include a CPU read pointer for characterizing data stored in the cache memory that has not yet been processed by the CPU.
It is understood that after the step S21, the method further comprises:
acquiring processing completion information sent by the CPU, and determining whether the processing completion data is forwarding data according to the processing completion information;
the processed data is data obtained after the CPU performs data processing on the data to be processed;
optionally, after processing the data to be processed, the CPU may notify the DMA controller according to whether the processing completion data is forwarding data.
Alternatively, the processing completion information may be information in a control register or information directly sent to the DMA controller.
In a possible implementation manner, if the processing completion data is forwarding data, the CPU may set processing completion information in the control register corresponding to the processing completion data, so that the DMA controller determines that there is data to be sent when it queries the control register and finds the processing completion information. In another possible implementation manner, if the processing completion data is not forwarding data, the CPU may directly send processing completion information to the DMA controller after completing the processing, indicating that the processing of the data to be processed is complete and that no sending is currently needed, and the DMA controller may then modify the corresponding CPU read pointer and processing completion write pointer.
Optionally, if the processing completion data is forwarding data, the processing completion information further includes information such as a storage location and a data size of the processing completion data.
if the processing completion data is forwarding data, updating the CPU read pointer corresponding to the processing completion data, and acquiring the processing completion data according to the storage location information in the processing completion information; storing the processing completion data into the buffer, and updating the processing completion write pointer corresponding to the processing completion data; and sending the processing completion data in the buffer to the external device through the output interface module;
alternatively, since the CPU may need to perform multiple reads while continuing data processing, the DMA controller may modify the CPU read pointer after determining that the processing of the data to be processed by the CPU is complete.
Optionally, in order to improve the efficiency with which the DMA acquires the processing completion data, when determining that the processing completion data is forwarding data, the CPU may restore the processing completion data to its original address, that is, the address at which the corresponding data to be processed was stored.
Optionally, considering that the data size of the processing completion data may be larger than that of the data to be processed, when storing the data to be processed, the DMA controller may allocate a storage space larger than the actual data size of the data to be processed according to a preset expansion manner.
For example, if the actual data size of the data to be processed is 10, the DMA controller may generate a virtual data size, for example 12, for the data to be processed according to a preset expansion manner, and at this time, when it is determined whether the cache memory satisfies the preset data storage condition for the data to be processed, it may determine whether the cache memory satisfies the preset data storage condition based on the virtual data size, so as to store the data to be processed into the cache memory.
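The preset expansion manner is left open by the embodiment; one hypothetical choice consistent with the example above is a 20 percent enlargement rounded up:

```c
#include <assert.h>
#include <stdint.h>

/* One possible preset expansion manner (an assumption of this sketch):
 * enlarge the actual data size by 20 percent, rounded up, so an actual
 * size of 10 yields a virtual size of 12 as in the example above. */
uint32_t virtual_size(uint32_t actual) {
    return actual + (actual + 4) / 5; /* actual + ceil(actual / 5) */
}
```

The storage condition checks would then use `virtual_size(actual)` in place of the actual data size when deciding where the data to be processed fits.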
It is to be understood that, since the processing completion data is stored at the address of the corresponding data to be processed, the processing completion write pointer should not be modified at the moment processing completes; otherwise, processing completion data that has not yet been sent could be overwritten.
Optionally, when sending the processed data, the DMA controller may first store the processed data into the buffer according to the storage location information in the processing completion information, and then send the processed data in the buffer to the external device through the output interface module.
Optionally, after storing the processing completion data in the buffer, the DMA controller may update the processing completion write pointer corresponding to the processing completion data to point to the next data in the processing state.
It can be understood that, since the DMA controller can obtain the processing completion data from the cache memory for transmission, rather than obtaining the processing completion data from the large-capacity dynamic random access memory for transmission, the efficiency of the DMA controller in obtaining the processing completion data can be improved to some extent, and the transmission performance can be improved.
And if the processed data is non-forwarded data, updating a CPU read pointer and a processed write pointer corresponding to the processed data.
Optionally, if the processing completion data is non-forwarding data, the DMA controller may directly update the CPU read pointer and the processing completion write pointer corresponding to the processing completion data.
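The two completion paths might be modeled as follows. This is hypothetical C; the `buffered` counter stands in for the DMA send buffer and is an assumption of this sketch.

```c
#include <assert.h>
#include <stdint.h>

typedef struct {
    uint32_t cpu_rd;   /* data the CPU has finished processing */
    uint32_t done_wr;  /* cache space released for reuse */
    uint32_t buffered; /* bytes queued in the DMA send buffer */
} done_state_t;

/* Handle processing completion information for one piece of data.
 * For forwarding data, the completed bytes are copied into the send
 * buffer before the processing completion write pointer advances, so
 * unsent data is never overwritten; for non-forwarding data both
 * pointers advance immediately. */
void on_processing_done(done_state_t *s, uint32_t size, int is_forwarding) {
    s->cpu_rd += size;
    if (is_forwarding) {
        s->buffered += size; /* copy into the send buffer first ... */
        s->done_wr += size;  /* ... then release the cache space */
    } else {
        s->done_wr += size;  /* nothing to send; release directly */
    }
}
```

The output interface module would subsequently drain the buffered bytes toward the external device.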
In the present embodiment, it can be understood that the state of the data received by the DMA controller is collectively reflected by the CPU read pointer, the processing completion write pointer, the cache memory write pointer, and the large-capacity dynamic random access memory write pointer. Specifically, fig. 4 is a schematic diagram of the read-write pointers after initialization. Referring to fig. 4, during initialization the CPU may set the CPU read pointer, the processing completion write pointer, the cache memory write pointer, and the large-capacity dynamic random access memory write pointer all to 0; it can be seen that at this time no data to be processed is stored in either the cache memory or the large-capacity dynamic random access memory, the CPU has not read any data for processing, and there is no processing completion data.
Optionally, fig. 5 is a schematic diagram of the read-write pointers after the DMA controller receives data. Referring to fig. 5, the data between the processing completion write pointer and the CPU read pointer is the data to be processed that the CPU is currently processing in the cache memory; the data between the CPU read pointer and the cache memory write pointer is the data to be processed that the CPU has not yet read; the data between the cache memory write pointer and the large-capacity dynamic random access memory write pointer is the data to be processed stored in the large-capacity dynamic random access memory; and the data between the processing completion write pointer and the large-capacity dynamic random access memory write pointer is all the data that the DMA controller has written into the large-capacity dynamic random access memory or the cache memory but that the CPU has not yet finished processing.
From fig. 5 it can be seen that, if the cache memory write pointer and the large-capacity dynamic random access memory write pointer are at the same position, there is no data to be processed in the large-capacity dynamic random access memory that has not been stored into the cache memory.
Optionally, the CPU may also autonomously generate some data during operation, and this autonomous data may also need to be sent to the external device. In this case, when generating autonomous data that needs to be sent to the external device, the CPU may send autonomous data information to the DMA controller, so that the DMA controller determines, according to the autonomous data information, that there is currently data to be sent, and thereby acquires the corresponding autonomous data for sending.
Optionally, the autonomous data information may include storage location information, data size information, and the like, which characterize the attributes of the autonomous data.
It can be understood that the DMA controller may acquire the autonomous data information sent by the CPU, and acquire the autonomous data according to the storage location information in the autonomous data information; the autonomous data is data generated autonomously by the CPU; store the autonomous data into the buffer; and send the autonomous data in the buffer to the external device through the output interface module.
Optionally, the autonomous data may be stored in the large-capacity dynamic random access memory, or may be stored in the cache memory.
Optionally, since the latency of reading data from the large-capacity dynamic random access memory is long, in order to increase the bandwidth with which the DMA controller reads data, the read data may be stored in the buffer; on this basis, the DMA controller does not need to wait for a previous read operation to return a result, but can continuously initiate read operations.
In this embodiment, the DMA controller may store the acquired autonomous data in the buffer, and then send the autonomous data in the buffer to the external device through the corresponding output interface module.
Optionally, in order to improve the overall processing efficiency of the embedded system, the DMA controller may determine the operating state of the CPU in advance, and specifically, the DMA controller may receive the operation information sent by the monitoring module and determine the current operating state of the CPU according to the operation information.
Optionally, the CPU may further set processing parameters to the monitoring module during initialization, and after the embedded system starts to work, the monitoring module may monitor the operating state of the CPU in real time and send operation information to the DMA controller.
Alternatively, the operation information may be a key interface signal of the CPU connection bus, such as a read-write valid flag, a read-write address, a data length, and the like, and the DMA controller may analyze the operation information according to configuration information of each register inside the DMA controller, such as an address allocated to a channel, and thereby determine the current operation state of the CPU. For example, the information of which registers the CPU is currently reading and writing is determined by analyzing the operation information, and the like.
It can be understood that, by the method, the DMA controller does not need to wait for the CPU operation to obtain the operation information of the CPU, but can judge the operation state of the CPU in advance, and based on the operation, the whole data processing efficiency of the embedded system can be improved.
Optionally, the monitoring module may also obtain operational information of the DMA controller and send it to the CPU so that the CPU determines the operational status of the DMA controller.
In order to perform the corresponding steps in the above embodiments and the various possible manners, an implementation manner of the data processing apparatus is given below. Further, referring to fig. 6, fig. 6 is a functional block diagram of a data processing apparatus according to an embodiment of the present disclosure. It should be noted that the basic principle and the resulting technical effects of the data processing apparatus provided in the present embodiment are the same as those of the above embodiments; for the sake of brevity, for the parts not mentioned in this embodiment, reference may be made to the corresponding contents in the above embodiments. The data processing apparatus includes: a determination module 800 and a storage module 810.
The determining module 800 is configured to determine whether the cache memory meets a preset data storage condition according to the data size and the data type of the data to be processed; the data to be processed comprises external data and/or storage data stored in a large-capacity dynamic random access memory;
it is to be understood that the determining module 800 may also be configured to perform the step S20;
the storage module 810 is configured to store the data to be processed into the cache memory if the cache memory meets a preset data storage condition, so that the CPU obtains the data to be processed from the cache memory and performs data processing on the data to be processed;
it is understood that the storage module 810 can also be used for executing the step S21;
the storage module 810 is further configured to store the data to be processed into the large-capacity dynamic random access memory if the cache memory does not satisfy the preset data storage condition.
It is understood that the storage module 810 can also be used for executing the step S22.
Optionally, the storage module 810 is further configured to store the external data into the corresponding storage space in the cache memory through the data channel corresponding to the external data if the cache memory meets the first data storage condition; and to update the cache memory write pointer corresponding to the external data.
Optionally, the storage module 810 is further configured to store the external data into the corresponding storage space in the large-capacity dynamic random access memory through the data channel corresponding to the external data if the cache memory does not satisfy the first data storage condition; and to update the large-capacity dynamic random access memory write pointer corresponding to the external data.
Optionally, the storage module 810 is further configured to, if the cache memory meets the second data storage condition, acquire the stored data from the large-capacity dynamic random access memory, and store the stored data into the corresponding storage space in the cache memory through the data channel corresponding to the stored data; and to update the restore read pointer and the cache memory write pointer corresponding to the stored data.
Optionally, on the basis of fig. 6, fig. 7 is another functional block diagram of a data processing apparatus provided in the embodiment of the present application, where the data processing apparatus further includes a sending module 820.
The sending module 820 is configured to obtain processing completion information sent by the CPU, and determine whether the processing completion data is forwarding data according to the processing completion information; the processed data is data obtained after the CPU performs data processing on the data to be processed; if the processed data is forwarding data, updating a CPU read pointer corresponding to the processed data, and acquiring the processed data according to the storage position information in the processed information; storing the processed data into a buffer, and updating a processing completion write pointer corresponding to the processed data; transmitting the processed data in the buffer to the external equipment through the output interface module; and if the processed data is non-forwarded data, updating a CPU read pointer and a processed write pointer corresponding to the processed data.
Optionally, the sending module 820 is further configured to acquire the autonomous data information sent by the CPU, and acquire the autonomous data according to the storage location information in the autonomous data information; the autonomous data is data generated autonomously by the CPU; store the autonomous data into the buffer; and send the autonomous data in the buffer to the external device through the output interface module.
Optionally, the determining module 800 is further configured to receive operation information sent by the monitoring module, and determine the current operating state of the CPU according to the operation information.
According to the data processing device provided by the embodiment of the application, whether the cache memory meets the preset data storage condition is determined through the determining module according to the data size and the data type of the data to be processed; the data to be processed comprises external data and/or storage data stored in a large-capacity dynamic random access memory; under the condition that the cache memory meets preset data storage conditions, the storage module stores the data to be processed into the cache memory so that a CPU (central processing unit) can obtain the data to be processed from the cache memory and process the data to be processed; and storing the data to be processed into the large-capacity dynamic random access memory under the condition that the cache memory does not meet the preset data storage condition. By the device, DMA can store external data and/or stored data stored in a large-capacity dynamic random access memory in the cache memory under the condition that the cache memory meets the preset data storage condition, and a CPU can directly acquire data to be processed from the cache memory for data processing without acquiring the external data from the large-capacity dynamic random access memory, so that the efficiency of acquiring the data by the CPU can be improved, and the performance of an embedded system is improved.
Alternatively, the modules may be stored in the memory shown in fig. 2 in the form of software or Firmware (Firmware) or be fixed in an Operating System (OS) of the DMA controller, and may be executed by the processor in fig. 2. Meanwhile, data, codes of programs, and the like required to execute the above modules may be stored in the memory.
Embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, can implement the data processing method provided by the embodiments of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (12)

1. A data processing method, characterized in that the method is applied to a DMA controller in an embedded system, the embedded system further comprising a CPU, a cache memory and a large-capacity dynamic random access memory, and the DMA controller being connected to the CPU, the cache memory and the large-capacity dynamic random access memory via buses, respectively; the method comprises the following steps:
determining, according to the data size and the data type of data to be processed, whether the cache memory satisfies a preset data storage condition; the data to be processed comprises external data and/or storage data held in the large-capacity dynamic random access memory;
if the cache memory satisfies the preset data storage condition, storing the data to be processed into the cache memory, so that the CPU acquires the data to be processed from the cache memory and processes it; and
if the cache memory does not satisfy the preset data storage condition, storing the data to be processed into the large-capacity dynamic random access memory.
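The claim language above is authoritative; purely as an illustrative sketch, the cache-versus-DRAM admission decision it recites could be modeled as below. Every identifier (`channel_state_t`, `cache_free_bytes`, `dram_holds_type`) is a hypothetical name introduced here, not taken from the patent.

```c
#include <stdbool.h>
#include <stddef.h>

/* Minimal model of the per-channel admission check; the fields are
 * assumptions made for illustration only. */
typedef struct {
    size_t cache_free_bytes;  /* remaining capacity of the cache region for this type */
    bool   dram_holds_type;   /* DRAM already buffers earlier data of this type */
} channel_state_t;

/* Returns true when the data may be stored directly in the cache memory,
 * false when it must instead be staged in the large-capacity DRAM. */
bool store_to_cache(const channel_state_t *ch, size_t data_size)
{
    /* Admission requires enough room in the cache region AND no older
     * data of the same type still queued in DRAM (which would otherwise
     * be delivered to the CPU out of order). */
    return ch->cache_free_bytes >= data_size && !ch->dram_holds_type;
}
```

The second condition reflects claim 3's requirement that the DRAM hold no storage data of the same type before the cache path is taken.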
2. The method according to claim 1, wherein a plurality of data channels are provided between the DMA controller and each of the large-capacity dynamic random access memory and the cache memory, each data channel corresponding to one data type;
the large-capacity dynamic random access memory is provided, for each data channel, with a storage space of consecutive addresses of a second preset capacity; and
a corresponding channel register is provided in the DMA controller for each data channel, the channel register being used for storing read-write pointers, and the read-write pointers indicating the occupancy status of the storage space corresponding to the data to be processed.
3. The method according to claim 2, wherein the embedded system further comprises an input interface module, the DMA controller is further electrically connected to the input interface module, and the input interface module is configured to acquire the external data; the read-write pointers comprise a cache memory write pointer, a processing-completion write pointer and a large-capacity dynamic random access memory write pointer, and the data storage condition comprises a first data storage condition;
the first data storage condition comprises: the remaining capacity of the storage space corresponding to the external data in the cache memory is greater than or equal to the data size of the external data, and the large-capacity dynamic random access memory holds no storage data corresponding to the data type of the external data;
wherein the remaining capacity of the storage space corresponding to the external data in the cache memory is determined according to the total capacity of that storage space, the cache memory write pointer corresponding to the external data, and the processing-completion write pointer; and whether the large-capacity dynamic random access memory holds storage data corresponding to the data type of the external data is determined according to the cache memory write pointer and the large-capacity dynamic random access memory write pointer corresponding to the external data;
storing the data to be processed into the cache memory if the cache memory satisfies the preset data storage condition comprises:
if the cache memory satisfies the first data storage condition, storing the external data into the corresponding storage space in the cache memory through the data channel corresponding to the external data; and
updating the cache memory write pointer corresponding to the external data.
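The pointer arithmetic implied by claim 3 — deriving the remaining cache capacity from the total capacity, the cache memory write pointer, and the processing-completion write pointer — might look as follows with free-running 32-bit ring pointers. All structure and field names here are assumptions for illustration.

```c
#include <stdint.h>
#include <stdbool.h>

/* Free-running 32-bit ring pointers, as commonly held in per-channel
 * DMA registers; the names are illustrative, not from the patent. */
typedef struct {
    uint32_t cache_wr;  /* cache memory write pointer          */
    uint32_t done_wr;   /* processing-completion write pointer */
    uint32_t dram_wr;   /* large-capacity DRAM write pointer   */
} chan_regs_t;

/* Remaining capacity of the per-channel cache region: total capacity
 * minus the bytes written but not yet retired by the CPU. */
uint32_t cache_remaining(const chan_regs_t *r, uint32_t total)
{
    uint32_t in_flight = r->cache_wr - r->done_wr;  /* wraps safely in uint32_t */
    return total - in_flight;
}

/* DRAM holds unread data of this type when its write pointer has
 * diverged from the cache memory write pointer. */
bool dram_has_backlog(const chan_regs_t *r)
{
    return r->dram_wr != r->cache_wr;
}
```

Using free-running unsigned pointers lets the subtraction `cache_wr - done_wr` yield the in-flight byte count correctly even across wraparound, a common design choice for hardware ring buffers.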
4. The method according to claim 3, wherein storing the data to be processed into the large-capacity dynamic random access memory if the cache memory does not satisfy the preset data storage condition comprises:
if the cache memory does not satisfy the first data storage condition, storing the external data into the corresponding storage space in the large-capacity dynamic random access memory through the data channel corresponding to the external data; and
updating the large-capacity dynamic random access memory write pointer corresponding to the external data.
5. The method according to claim 2, wherein the data storage condition comprises a second data storage condition, and the read-write pointers comprise an append read pointer and a cache memory write pointer;
the second data storage condition comprises: the remaining capacity of the storage space corresponding to the storage data in the cache memory is greater than or equal to the data size of the storage data, the storage data being determined according to the append read pointer in each channel register;
storing the data to be processed into the cache memory if the cache memory satisfies the preset data storage condition comprises:
if the cache memory satisfies the second data storage condition, acquiring the storage data from the large-capacity dynamic random access memory and storing it into the corresponding storage space in the cache memory through the data channel corresponding to the storage data; and
updating the append read pointer and the cache memory write pointer corresponding to the storage data.
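As a hypothetical sketch of the DRAM-to-cache refill path of claim 5 — moving staged data back into the cache once room frees up, then advancing the append read pointer and cache write pointer — with all names invented for illustration:

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative pointers for the refill path; names are assumptions. */
typedef struct {
    uint32_t append_rd;  /* append read pointer into the DRAM region */
    uint32_t dram_wr;    /* large-capacity DRAM write pointer        */
    uint32_t cache_wr;   /* cache memory write pointer               */
} refill_regs_t;

/* If the cache region has room and DRAM holds a pending entry, move the
 * entry into the cache and advance both pointers; returns bytes moved. */
uint32_t refill_from_dram(refill_regs_t *r, uint32_t cache_free,
                          uint32_t entry_len)
{
    bool dram_pending = (r->append_rd != r->dram_wr);
    if (!dram_pending || cache_free < entry_len)
        return 0;                 /* second data storage condition not met */
    /* (the actual data copy over the data channel would happen here) */
    r->append_rd += entry_len;    /* consume from the DRAM region  */
    r->cache_wr  += entry_len;    /* publish into the cache region */
    return entry_len;
}
```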
6. The method according to claim 2, wherein the embedded system further comprises an output interface module, the DMA controller is further electrically connected to the output interface module, and a buffer is provided in the DMA controller; the read-write pointers comprise a CPU read pointer and a processing-completion write pointer; and after the data to be processed is stored in the cache memory, the method further comprises:
acquiring processing-completion information sent by the CPU, and determining, according to the processing-completion information, whether the processed data is forwarding data; the processed data is the data obtained after the CPU processes the data to be processed;
if the processed data is forwarding data, updating the CPU read pointer corresponding to the processed data, and acquiring the processed data according to storage location information in the processing-completion information;
storing the processed data into the buffer, and updating the processing-completion write pointer corresponding to the processed data;
sending the processed data in the buffer to an external device through the output interface module; and
if the processed data is non-forwarding data, updating the CPU read pointer and the processing-completion write pointer corresponding to the processed data.
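The completion handling of claim 6 can be sketched as a small state update: both the forwarding and non-forwarding paths retire the cache slot, and only the forwarding path additionally copies the entry toward the output interface. All identifiers below are hypothetical illustrations, not the patent's own names.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative pointer state for completion handling; names assumed. */
typedef struct {
    uint32_t cpu_rd;   /* CPU read pointer                    */
    uint32_t done_wr;  /* processing-completion write pointer */
} comp_regs_t;

/* Handle one processing-completion notification from the CPU.
 * Returns true when the entry is forwarding data and must therefore be
 * copied into the DMA-internal buffer for the output interface module. */
bool on_completion(comp_regs_t *r, bool is_forwarding, uint32_t entry_len)
{
    r->cpu_rd  += entry_len;   /* the CPU has consumed this entry        */
    r->done_wr += entry_len;   /* the cache slot may now be reclaimed    */
    if (is_forwarding)
        return true;           /* caller copies the entry to the buffer  */
    return false;              /* non-forwarding data: nothing is sent   */
}
```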
7. The method according to claim 6, further comprising:
acquiring spontaneous-data information sent by the CPU, and acquiring the spontaneous data according to storage location information in the spontaneous-data information; the spontaneous data is data generated by the CPU itself;
storing the spontaneous data into the buffer; and
sending the spontaneous data in the buffer to an external device through the output interface module.
8. The method according to claim 1, wherein a monitoring module is provided at the interface between the CPU and the bus, and the method further comprises:
receiving operation information sent by the monitoring module, and determining the current operating state of the CPU according to the operation information.
9. A data processing apparatus, characterized in that the apparatus is applied to a DMA controller in an embedded system, the embedded system further comprising a CPU, a cache memory and a large-capacity dynamic random access memory, and the DMA controller being connected to the CPU, the cache memory and the large-capacity dynamic random access memory via buses, respectively; the apparatus comprises:
a determining module, configured to determine, according to the data size and the data type of data to be processed, whether the cache memory satisfies a preset data storage condition; the data to be processed comprises external data and/or storage data held in the large-capacity dynamic random access memory; and
a storage module, configured to store the data to be processed into the cache memory if the cache memory satisfies the preset data storage condition, so that the CPU acquires the data to be processed from the cache memory and processes it;
the storage module being further configured to store the data to be processed into the large-capacity dynamic random access memory if the cache memory does not satisfy the preset data storage condition.
10. A DMA controller comprising a processor and a memory, the memory storing a computer program executable by the processor, the processor being operable to execute the computer program to implement the method of any of claims 1 to 8.
11. An embedded system comprising the DMA controller of claim 10.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-8.
CN202211729886.4A 2022-12-30 2022-12-30 Data processing method and related device Pending CN115982068A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211729886.4A CN115982068A (en) 2022-12-30 2022-12-30 Data processing method and related device

Publications (1)

Publication Number Publication Date
CN115982068A true CN115982068A (en) 2023-04-18

Family

ID=85973840


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination