CN110781104A - Data processing system, method and device - Google Patents

Data processing system, method and device

Info

Publication number
CN110781104A
Authority
CN
China
Prior art keywords
data
address
register
cache queue
valid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911048060.XA
Other languages
Chinese (zh)
Inventor
刘均 (Liu Jun)
刘权列 (Liu Quanlie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Launch Technology Co Ltd
Original Assignee
Shenzhen Launch Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Launch Technology Co Ltd filed Critical Shenzhen Launch Technology Co Ltd
Priority to CN201911048060.XA priority Critical patent/CN110781104A/en
Publication of CN110781104A publication Critical patent/CN110781104A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/10 Program control for peripheral devices
    • G06F 13/12 Program control for peripheral devices using hardware independent of the central processor, e.g. channel or peripheral processor
    • G06F 13/124 Program control for peripheral devices using hardware independent of the central processor, e.g. channel or peripheral processor where hardware is a sequential transfer control unit, e.g. microprocessor, peripheral processor or state-machine
    • G06F 13/126 Program control for peripheral devices using hardware independent of the central processor, e.g. channel or peripheral processor where hardware is a sequential transfer control unit, e.g. microprocessor, peripheral processor or state-machine and has means for transferring I/O instructions and statuses between control unit and main processor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 Information transfer, e.g. on bus
    • G06F 13/42 Bus transfer protocol, e.g. handshake; Synchronisation
    • G06F 13/4282 Bus transfer protocol, e.g. handshake; Synchronisation on a serial bus, e.g. I2C bus, SPI bus

Abstract

The embodiments of the present application disclose a data processing system, method and device. When a first register receives data, a DMA controller is triggered to transmit the data from the first register to a first cache queue, so that a processor can read the data from the first cache queue according to a valid data address. Because the triggered DMA controller moves the data from the register to the cache queue, the processor does not need to enter an interrupt routine, which effectively improves data reception efficiency. When the processor detects that the second cache queue receives data, the processor sends a valid data address and a valid length to the DMA controller; the DMA controller reads the target data from the second cache queue according to the valid data address and the valid length and transmits the target data to the second register. The cooperation between the DMA controller and the cache queues enables efficient transmit and receive processing, reduces data loss, and effectively improves the efficiency of data transmission and reception.

Description

Data processing system, method and device
Technical Field
The present application relates to the field of data transmission technologies, and in particular, to a data processing system, method, and apparatus.
Background
In existing data transmission schemes, data is commonly received by interrupt: an interrupt service routine places each received byte into a first-in first-out (FIFO) queue maintained in software, and the main program then reads the FIFO buffer to process the data. At a serial port baud rate of 1.5 Mbps, one byte (about 10 bit times including framing) takes roughly 10/(1.5×10^6) s ≈ 6.7 µs to arrive. Because the rate is so high, the interrupt is entered very frequently, which reduces CPU processing efficiency. In addition, when the serial receive interrupt has a lower priority, data is easily lost while the CPU waits for other interrupt handlers to finish.
For data transmission, either a polling (busy-wait) mode or an interrupt-driven mode is used. In polling mode, when several data bytes are to be sent, each byte is written into the transmit register and the CPU waits for the transmission to complete before sending the next byte, which wastes CPU resources and results in low data transmission efficiency. In interrupt-driven mode, software maintains a transmit FIFO queue: data to be sent is first staged in the FIFO buffer, the software then checks the FIFO for pending data, enables the transmit interrupt, sends the data inside the interrupt handler, and maintains the transmit FIFO. Because the serial port rate is high, the interrupt is again entered very frequently, so data transmission efficiency remains low.
Therefore, how to improve the transmission efficiency of data is a problem to be solved by those skilled in the art.
Disclosure of Invention
The embodiment of the application aims to provide a data processing system, a data processing method and a data processing device, which can improve the transmission efficiency of data.
In order to solve the foregoing technical problem, an embodiment of the present application provides a data processing system, including a processor, a DMA controller connected to the processor, and a first cache queue, a second cache queue, a first register, and a second register, which are respectively connected to the DMA controller;
when the first register receives data, the DMA controller is triggered to transmit the data from the first register to the first cache queue, so that the processor reads the data from the first cache queue according to a valid data address;
the processor is further configured to send a valid data address and a valid length to the DMA controller when detecting that the second cache queue receives data; correspondingly, the DMA controller is configured to read target data from the second cache queue according to the valid data address and the valid length, and transmit the target data to the second register.
Optionally, the first buffer queue and the second buffer queue are both FIFO queues.
Optionally, the valid data address includes a valid data head address and a valid data tail address;
the DMA controller is specifically configured to increment the valid data head address by 1 each time a data packet is written into the first cache queue, and to increment the valid data tail address by 1 each time a data packet is read from the second cache queue.
Optionally, the DMA controller is further configured to, when the valid data head address reaches a preset maximum address, wrap back to the initial head address of the first cache queue and record subsequent data packets from there.
Optionally, the DMA controller is further configured to, when the sum of the valid data tail address and the valid length is greater than the preset maximum address, read data packets from the second cache queue according to the preset maximum address and the valid data tail address.
The embodiment of the present application further provides a data processing method, which is applicable to a DMA controller, and the method includes:
when the first register receives data, the data is transmitted from the first register to a first cache queue, so that a processor can read the data from the first cache queue according to a valid data address;
and reading target data from the second buffer queue according to the valid data address and the valid length transmitted by the processor, and transmitting the target data to the second register.
Optionally, the first buffer queue and the second buffer queue are both FIFO queues.
Optionally, the valid data address includes a valid data head address and a valid data tail address;
correspondingly, transmitting the data from the first register to the first buffer queue comprises:
incrementing the valid data head address by 1 each time a data packet is written into the first buffer queue;
and reading the target data from the second buffer queue and transmitting the target data to the second register comprises:
incrementing the valid data tail address by 1 each time a data packet is read from the second buffer queue.
Optionally, incrementing the valid data head address by 1 each time a data packet is written into the first buffer queue further includes:
when the valid data head address reaches a preset maximum address, wrapping back to the initial head address of the first buffer queue and recording subsequent data packets from there.
Optionally, reading the target data from the second buffer queue and transmitting the target data to the second register according to the valid data address and the valid length transmitted by the processor includes:
when the sum of the valid data tail address and the valid length is greater than the preset maximum address, reading the data packets from the second buffer queue according to the preset maximum address and the valid data tail address.
The embodiment of the application also provides a data processing device which is suitable for the DMA controller and comprises a transmission unit and a reading unit;
the transmission unit is used for transmitting the data from the first register to the first cache queue when the first register receives the data, so that the processor can read the data from the first cache queue according to the valid data address;
and the reading unit is used for reading target data from the second cache queue according to the valid data address and the valid length transmitted by the processor and transmitting the target data to the second register.
Optionally, the first buffer queue and the second buffer queue are both FIFO queues.
Optionally, the valid data address includes a valid data head address and a valid data tail address;
correspondingly, the transmission unit is specifically configured to increment the valid data head address by 1 each time a data packet is written into the first cache queue;
the reading unit is specifically configured to increment the valid data tail address by 1 each time a data packet is read from the second cache queue.
Optionally, a return unit is further included;
and the return unit is used for wrapping back to the initial head address of the first cache queue and recording subsequent data packets from there when the valid data head address reaches the preset maximum address.
Optionally, the reading unit is specifically configured to, when the sum of the valid data tail address and the valid length is greater than a preset maximum address, read the data packets from the second cache queue according to the preset maximum address and the valid data tail address.
As can be seen from the above technical solution, the data processing system includes a processor, a DMA controller connected to the processor, and a first cache queue, a second cache queue, a first register and a second register that are respectively connected to the DMA controller. When the first register receives data, the DMA controller is triggered to transmit the data from the first register to the first cache queue, so that the processor can read the data from the first cache queue according to the valid data address. Because the triggered DMA controller moves the data from the register to the cache queue, the processor does not need to enter an interrupt routine, which effectively improves data reception efficiency. The processor is further configured to send the valid data address and the valid length to the DMA controller when it detects that the second cache queue has received data, so that the DMA controller reads the target data from the second cache queue according to the valid data address and the valid length and transmits the target data to the second register. The cooperation between the DMA controller and the cache queues enables efficient transmit and receive processing, reduces data loss, and effectively improves the efficiency of data transmission and reception.
Drawings
In order to more clearly illustrate the embodiments of the present application, the drawings needed for the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained by those skilled in the art without inventive effort.
Fig. 1 is a schematic structural diagram of a data processing system according to an embodiment of the present application;
fig. 2 is a flowchart of a data processing method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without any creative effort belong to the protection scope of the present application.
In order that those skilled in the art will better understand the disclosure, the following detailed description will be given with reference to the accompanying drawings.
Next, a data processing system provided in an embodiment of the present application is described in detail. Fig. 1 is a schematic structural diagram of a data processing system according to an embodiment of the present disclosure, which includes a processor 10, a Direct Memory Access (DMA) controller 11 connected to the processor 10, and a first buffer queue 12, a second buffer queue 13, a first register 14, and a second register 15 respectively connected to the DMA controller 11.
When the first register 14 receives the data, the DMA controller 11 is triggered to transfer the data from the first register 14 to the first buffer queue 12, so that the processor 10 reads the data from the first buffer queue 12 according to the valid data address.
The processor 10 may employ a Central Processing Unit (CPU).
The first register 14 is used for receiving data. When the first register 14 receives data, the DMA controller 11 is triggered, and the DMA controller 11 then transfers the data from the first register 14 to the first buffer queue 12 according to its configuration; the application program can read the data from the first buffer queue 12 by using a u32 usart_ReadData(u8 *data, u32 len) function. If there is no data, 0 is returned; if there is data, the length of the received data is returned.
The second register 15 is used to store data to be transmitted. The application program sends data by using a u32 usart_SendData(u8 *data, u32 len) function, which returns the length of the data sent. When the second buffer queue 13 receives data, the processor 10 calculates the current valid data address of the second buffer queue 13, and then sends the valid data address and the valid length of the data to be read to the DMA controller 11. Accordingly, the DMA controller 11 can read the target data from the second buffer queue 13 into the second register 15 according to the valid data address and the valid length.
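The patent does not give the bodies of these functions. Purely as a minimal sketch of the read side, the snippet below assumes the first buffer queue is a software-visible ring buffer whose head index is advanced by the DMA controller; the buffer depth, variable names and layout are illustrative assumptions, not details taken from the patent.

```c
#include <stdint.h>

typedef uint8_t  u8;
typedef uint32_t u32;

#define RX_FIFO_SIZE 256u                /* assumed buffer depth (preset maximum address) */

static u8  rx_buf[RX_FIFO_SIZE];         /* backing store of the first buffer queue */
static volatile u32 rx_head;             /* valid data head address, advanced by the DMA controller */
static u32 rx_tail;                      /* valid data tail address, advanced by the reader */

/* Copies up to len received bytes into data.
 * Returns 0 when the queue is empty, otherwise the number of bytes read. */
u32 usart_ReadData(u8 *data, u32 len)
{
    u32 count = 0;

    while (count < len && rx_tail != rx_head) {
        data[count++] = rx_buf[rx_tail];
        rx_tail = (rx_tail + 1u) % RX_FIFO_SIZE;   /* wrap at the end of the queue */
    }
    return count;
}
```

In such a layout only the DMA side advances rx_head and only the application advances rx_tail, so the receive path needs no interrupt locking.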
In the embodiment of the present application, the first buffer queue 12 and the second buffer queue 13 may both adopt FIFO queues.
In accordance with the first-in first-out behavior of the FIFO queue, the valid data address may include a valid data head address and a valid data tail address. In the initial state, that is, when the FIFO queue buffers no data, the valid data head address and the valid data tail address are equal and may both be set to 0.
For data reception, the DMA controller 11 may increment the valid data head address by 1 each time a data packet is written into the first buffer queue 12. For data transmission, the DMA controller 11 may increment the valid data tail address by 1 each time a data packet is read from the second buffer queue 13.
Because the amount of data the FIFO queue can buffer is limited, a maximum address can be set according to the buffer space of the FIFO queue, in keeping with the way the valid data head address accumulates.
During data reception, when the valid data head address reaches the preset maximum address, the first buffer queue 12 has been filled up to its last position. For newly received data packets, the DMA controller 11 may therefore wrap back to the initial head address of the first buffer queue 12 and record the data packets from there.
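As a sketch only, this wrap-around rule can be expressed as the following index arithmetic; on many microcontrollers the DMA peripheral performs it automatically in a circular mode, and the constant below is an assumed queue depth rather than a value from the patent.

```c
#include <stdint.h>

#define PRESET_MAX_ADDR 256u   /* assumed preset maximum address (queue depth) */

/* Conceptually executed once per data packet written into the first buffer
 * queue: advance the valid data head address by 1 and wrap back to the
 * initial head address when the preset maximum address is reached. */
uint32_t advance_valid_head(uint32_t head)
{
    head += 1u;
    if (head >= PRESET_MAX_ADDR) {
        head = 0u;             /* return to the initial head address of the queue */
    }
    return head;
}
```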
During data reading, the valid data tail address indicates the starting position of the data to be read, and the specific data to be read can be determined from this starting position and the valid length to be read.
In practice, however, the sum of the valid data tail address and the valid length may exceed the preset maximum address. In that case, the data packets can be read from the second buffer queue 13 according to the preset maximum address and the valid data tail address.
For example, suppose the maximum address is 10, the valid data tail address is 8, and the valid length is 4. The sum of the valid data tail address and the valid length is 12, which exceeds the maximum address 10, so the length of the first read is adjusted to 10 - 8 = 2; that is, 2 data packets are read starting from the position where the valid data tail address is 8. Then 2 more data packets are read starting from the initial head address of the second buffer queue 13, so that the 4 data packets are obtained in two reads.
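The split read in this example can be sketched as follows, assuming the second buffer queue is a plain byte array of depth 10 so the numbers match the example; the function and parameter names are illustrative, and a real DMA controller would be programmed to perform the two transfers itself.

```c
#include <stdint.h>
#include <string.h>

#define PRESET_MAX_ADDR 10u   /* matches the worked example above */

/* Reads valid_len bytes starting at the valid data tail address from a
 * circular queue, splitting the copy when it would run past the preset
 * maximum address. With tail = 8 and valid_len = 4 this copies positions
 * 8..9 first and then positions 0..1 from the start of the queue. */
uint32_t read_wrapped(const uint8_t *queue, uint32_t tail,
                      uint32_t valid_len, uint8_t *out)
{
    uint32_t first = valid_len;

    if (tail + valid_len > PRESET_MAX_ADDR) {
        first = PRESET_MAX_ADDR - tail;            /* 10 - 8 = 2 in the example */
    }
    memcpy(out, queue + tail, first);              /* chunk up to the maximum address */
    memcpy(out + first, queue, valid_len - first); /* remainder from the start of the queue */
    return valid_len;
}
```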
As can be seen from the above technical solution, the data processing system includes a processor, a DMA controller connected to the processor, and a first cache queue, a second cache queue, a first register and a second register that are respectively connected to the DMA controller. When the first register receives data, the DMA controller is triggered to transmit the data from the first register to the first cache queue, so that the processor can read the data from the first cache queue according to the valid data address. Because the triggered DMA controller moves the data from the register to the cache queue, the processor does not need to enter an interrupt routine, which effectively improves data reception efficiency. The processor is further configured to send the valid data address and the valid length to the DMA controller when it detects that the second cache queue has received data, so that the DMA controller reads the target data from the second cache queue according to the valid data address and the valid length and transmits the target data to the second register. The cooperation between the DMA controller and the cache queues enables efficient transmit and receive processing, reduces data loss, and effectively improves the efficiency of data transmission and reception.
Fig. 2 is a flowchart of a data processing method provided in an embodiment of the present application, which is applicable to a DMA controller, and the method includes:
S201: when the first register receives the data, the data is transmitted from the first register to the first buffer queue, so that the processor can read the data from the first buffer queue according to the valid data address.
In this embodiment, a register for receiving data may be referred to as a first register, and a buffer queue for buffering data in cooperation with the first register may be referred to as a first buffer queue. The register for sending out data is referred to as a second register, and the buffer queue for buffering data in cooperation with the second register is referred to as a second buffer queue.
When the first register receives data, the DMA controller is triggered to transfer the data from the first register to the first buffer queue, and the application program can read the data from the first buffer queue by using the u32 usart_ReadData(u8 *data, u32 len) function.
S202: reading target data from the second buffer queue according to the valid data address and the valid length transmitted by the processor, and transmitting the target data to the second register.
The valid data address indicates the starting position of the data to be read, and the valid length indicates the length of the data to be read. The DMA controller can read the target data from the second buffer queue according to the valid data address and the valid length and transmit the target data to the second register.
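For the transmit direction, a minimal sketch of the application-side usart_SendData function is given below, assuming a software ring buffer backs the second buffer queue; the buffer depth, variable names and the dma_start_tx() hook are illustrative assumptions rather than details from the patent.

```c
#include <stdint.h>

typedef uint8_t  u8;
typedef uint32_t u32;

#define TX_FIFO_SIZE 256u            /* assumed buffer depth of the second buffer queue */

static u8  tx_buf[TX_FIFO_SIZE];     /* backing store of the second buffer queue */
static u32 tx_head;                  /* next free slot, advanced by the application */
static volatile u32 tx_tail;         /* valid data tail address, advanced as the DMA drains data */

/* Hypothetical hook: hands the valid data address (tail index) and the valid
 * length to the DMA controller, which then moves the data from the transmit
 * FIFO to the transmit register. Programming the DMA is hardware-specific,
 * so this stub only swallows the request. */
static void dma_start_tx(u32 valid_tail, u32 valid_len)
{
    (void)valid_tail;
    (void)valid_len;
}

/* Stages up to len bytes in the transmit FIFO and notifies the DMA controller.
 * Returns the number of bytes actually queued (the length of the data sent). */
u32 usart_SendData(u8 *data, u32 len)
{
    u32 queued = 0;

    while (queued < len && (tx_head + 1u) % TX_FIFO_SIZE != tx_tail) {
        tx_buf[tx_head] = data[queued++];
        tx_head = (tx_head + 1u) % TX_FIFO_SIZE;
    }
    if (queued > 0u) {
        dma_start_tx(tx_tail, queued);   /* valid data address and valid length */
    }
    return queued;
}
```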
Optionally, the first buffer queue and the second buffer queue are both FIFO queues.
Optionally, the valid data address comprises a valid data head address and a valid data tail address;
correspondingly, transmitting the data from the first register to the first buffer queue comprises:
incrementing the valid data head address by 1 each time a data packet is written into the first buffer queue;
and reading the target data from the second buffer queue and transmitting the target data to the second register comprises:
incrementing the valid data tail address by 1 each time a data packet is read from the second buffer queue.
Optionally, incrementing the valid data head address by 1 each time a data packet is written into the first buffer queue further comprises:
when the valid data head address reaches the preset maximum address, wrapping back to the initial head address of the first buffer queue and recording subsequent data packets from there.
Optionally, reading the target data from the second buffer queue and transmitting the target data to the second register according to the valid data address and the valid length transmitted by the processor comprises:
when the sum of the valid data tail address and the valid length is greater than the preset maximum address, reading the data packets from the second buffer queue according to the preset maximum address and the valid data tail address.
The description of the features in the embodiment corresponding to fig. 2 may refer to the related description of the embodiment corresponding to fig. 1, and is not repeated here.
As can be seen from the above technical solution, when the first register receives data, the DMA controller transmits the data from the first register to the first cache queue, so that the processor reads the data from the first cache queue according to the valid data address. Because the triggered DMA controller moves the data from the register to the cache queue, the processor does not need to enter an interrupt routine, which effectively improves data reception efficiency. The DMA controller also reads the target data from the second cache queue according to the valid data address and the valid length transmitted by the processor and transmits the target data to the second register. The cooperation between the DMA controller and the cache queues enables efficient transmit and receive processing, reduces data loss, and effectively improves the efficiency of data transmission and reception.
Fig. 3 is a schematic structural diagram of a data processing apparatus suitable for a DMA controller according to an embodiment of the present application, where the apparatus includes a transmission unit 31 and a reading unit 32;
the transmission unit 31 is used for transmitting the data from the first register to the first buffer queue when the first register receives the data, so that the processor can read the data from the first buffer queue according to the valid data address;
and the reading unit 32 is configured to read the target data from the second buffer queue according to the valid data address and the valid length transmitted by the processor and transmit the target data to the second register.
Optionally, the first buffer queue and the second buffer queue are both FIFO queues.
Optionally, the valid data address comprises a valid data head address and a valid data tail address;
correspondingly, the transmission unit is specifically configured to increment the valid data head address by 1 each time a data packet is written into the first cache queue;
the reading unit is specifically configured to increment the valid data tail address by 1 each time a data packet is read from the second cache queue.
Optionally, a return unit is further included;
and the return unit is used for wrapping back to the initial head address of the first cache queue and recording subsequent data packets from there when the valid data head address reaches the preset maximum address.
Optionally, the reading unit is specifically configured to, when the sum of the valid data tail address and the valid length is greater than the preset maximum address, read the data packets from the second cache queue according to the preset maximum address and the valid data tail address.
The description of the features in the embodiment corresponding to fig. 3 may refer to the related description of the embodiment corresponding to fig. 1, and is not repeated here.
As can be seen from the above technical solution, the transmission unit of the DMA controller transmits the data from the first register to the first cache queue when the first register receives the data, so that the processor can read the data from the first cache queue according to the valid data address. Because the transmission unit of the DMA controller is triggered to move the data from the register to the cache queue, the processor does not need to enter an interrupt routine, which effectively improves data reception efficiency. The reading unit of the DMA controller reads the target data from the second cache queue according to the valid data address and the valid length transmitted by the processor and transmits the target data to the second register. The cooperation between the DMA controller and the cache queues enables efficient transmit and receive processing, reduces data loss, and effectively improves the efficiency of data transmission and reception.
A data processing system, a method and an apparatus provided in the embodiments of the present application are described in detail above. The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The method disclosed by the embodiment corresponds to the system disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the system part for description. It should be noted that, for those skilled in the art, it is possible to make several improvements and modifications to the present application without departing from the principle of the present application, and such improvements and modifications also fall within the scope of the claims of the present application.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.

Claims (10)

1. A data processing system is characterized by comprising a processor, a Direct Memory Access (DMA) controller connected with the processor, a first cache queue, a second cache queue, a first register and a second register, wherein the first cache queue, the second cache queue, the first register and the second register are respectively connected with the DMA controller;
when the first register receives data, the DMA controller is triggered to transmit the data from the first register to the first cache queue, so that the processor reads the data from the first cache queue according to a valid data address;
the processor is further configured to send a valid data address and a valid length to the DMA controller when detecting that the second cache queue receives data; correspondingly, the DMA controller is configured to read target data from the second cache queue according to the valid data address and the valid length, and transmit the target data to the second register.
2. The system of claim 1, wherein the first buffer queue and the second buffer queue are first-in-first-out (FIFO) queues.
3. The system of claim 2, wherein the valid data address comprises a valid data head address and a valid data tail address;
the DMA controller is specifically configured to increment the valid data head address by 1 each time a data packet is written into the first cache queue, and to increment the valid data tail address by 1 each time a data packet is read from the second cache queue.
4. The system of claim 3, wherein the DMA controller is further configured to, when the valid data head address reaches a preset maximum address, wrap back to the initial head address of the first cache queue and record subsequent data packets from there.
5. The system of claim 3, wherein the DMA controller is further configured to, when the sum of the valid data tail address and the valid length is greater than the preset maximum address, read data packets from the second cache queue according to the preset maximum address and the valid data tail address.
6. A data processing method adapted for use in a DMA controller, the method comprising:
when the first register receives data, the data is transmitted from the first register to a first cache queue, so that a processor can read the data from the first cache queue according to a valid data address;
and reading target data from the second cache queue according to the valid data address and the valid length transmitted by the processor, and transmitting the target data to the second register.
7. The method of claim 6, wherein the first buffer queue and the second buffer queue are both FIFO queues.
8. The method of claim 7, wherein the valid data address comprises a valid data head address and a valid data tail address;
correspondingly, transmitting the data from the first register to the first cache queue comprises:
incrementing the valid data head address by 1 each time a data packet is written into the first cache queue;
and reading the target data from the second cache queue and transmitting the target data to the second register comprises:
incrementing the valid data tail address by 1 each time a data packet is read from the second cache queue.
9. The method of claim 8, wherein incrementing the valid data head address by 1 each time a data packet is written into the first cache queue further comprises:
when the valid data head address reaches a preset maximum address, wrapping back to the initial head address of the first cache queue and recording subsequent data packets from there.
10. A data processing apparatus adapted for use in a DMA controller, the apparatus comprising a transfer unit and a read unit;
the transmission unit is used for transmitting the data from the first register to the first cache queue when the first register receives the data, so that the processor can read the data from the first cache queue according to the valid data address;
and the reading unit is used for reading target data from the second cache queue according to the valid data address and the valid length transmitted by the processor and transmitting the target data to the second register.
CN201911048060.XA 2019-10-30 2019-10-30 Data processing system, method and device Pending CN110781104A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911048060.XA CN110781104A (en) 2019-10-30 2019-10-30 Data processing system, method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911048060.XA CN110781104A (en) 2019-10-30 2019-10-30 Data processing system, method and device

Publications (1)

Publication Number Publication Date
CN110781104A (en) 2020-02-11

Family

ID=69387944

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911048060.XA Pending CN110781104A (en) 2019-10-30 2019-10-30 Data processing system, method and device

Country Status (1)

Country Link
CN (1) CN110781104A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040133710A1 (en) * 2003-01-06 2004-07-08 Lsi Logic Corporation Dynamic configuration of a time division multiplexing port and associated direct memory access controller
CN101158930A (en) * 2007-11-19 2008-04-09 中兴通讯股份有限公司 Method and device for external controlling DMA controller
CN101339541A (en) * 2008-08-11 2009-01-07 北京中星微电子有限公司 DMA data-transmission method and DMA controller
US9032112B1 (en) * 2009-05-22 2015-05-12 Marvell International Ltd. Automatic direct memory access (DMA)
CN104123250A (en) * 2013-04-25 2014-10-29 上海联影医疗科技有限公司 Data transmission method based on DMA
CN106776393A (en) * 2016-12-26 2017-05-31 北京旋极信息技术股份有限公司 A kind of serial data method of reseptance and device without interruption

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
何晓蒙 (He Xiaomeng): "Research on the G.729 Speech Codec Algorithm and Its DSP Implementation" (《G.729 语音编解码算法研究及 DSP 实现》), China Master's Theses Full-text Database (Information Science and Technology) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112380154A (en) * 2020-11-12 2021-02-19 海光信息技术股份有限公司 Data transmission method and data transmission device
CN115309676A (en) * 2022-10-12 2022-11-08 浪潮电子信息产业股份有限公司 Asynchronous FIFO read-write control method, system and electronic equipment
CN115309676B (en) * 2022-10-12 2023-02-28 浪潮电子信息产业股份有限公司 Asynchronous FIFO read-write control method, system and electronic equipment

Similar Documents

Publication Publication Date Title
EP3707882B1 (en) Multi-path rdma transmission
CN110278157B (en) Congestion control method and network equipment
US20180217952A1 (en) Communication interface for interfacing a transmission circuit with an interconnection network, and corresponding system and integrated circuit
CN107783727B (en) Access method, device and system of memory device
CN110781104A (en) Data processing system, method and device
CN103746938A (en) Method and device for transmitting data packet
US7616566B2 (en) Data flow control apparatus and method of mobile terminal for reverse communication from high speed communication device to wireless network
US7747796B1 (en) Control data transfer rates for a serial ATA device by throttling values to control insertion of align primitives in data stream over serial ATA connection
CN113572582B (en) Data transmission and retransmission control method and system, storage medium and electronic device
US9544401B2 (en) Device and method for data communication using a transmission ring buffer
CN111352888A (en) Interrupt signal generating method and device for asynchronous transceiver
CN104052676A (en) Transmitting channel and data processing method thereof
CN100512218C (en) Transmitting method for data message
CN104022961A (en) Data transmission method, apparatus and system
CN115550442A (en) Data packet transmission method and device, electronic equipment and storage medium
CN112351049B (en) Data transmission method, device, equipment and storage medium
CN113992608B (en) Network transceiver packet path optimization method, device and storage medium
CN101610477B (en) Multimedia messaging service processing system and method
US7855954B2 (en) Speculative credit data flow control
CN108595351B (en) DMA (direct memory access) sending control method oriented to network forwarding processing
CN115176453A (en) Message caching method, memory distributor and message forwarding system
CN114766090A (en) Message caching method, integrated circuit system and storage medium
US11558309B1 (en) Expandable queue
WO2022188807A1 (en) Data transmission system and related device
CN115776475A (en) Message processing method and device, electronic equipment and computer storage medium

Legal Events

Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200211)