CN116893993A - Method, system, chip and storage medium for accessing host by AXI bus - Google Patents


Info

Publication number
CN116893993A
CN116893993A (application No. CN202310907235.8A)
Authority
CN
China
Prior art keywords
host
command
data
write
read
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310907235.8A
Other languages
Chinese (zh)
Inventor
萧启阳
摆海龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yunbao Intelligent Co ltd
Original Assignee
Shenzhen Yunbao Intelligent Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yunbao Intelligent Co ltd filed Critical Shenzhen Yunbao Intelligent Co ltd
Priority to CN202310907235.8A priority Critical patent/CN116893993A/en
Publication of CN116893993A publication Critical patent/CN116893993A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • G06F13/36Handling requests for interconnection or transfer for access to common bus or bus system
    • G06F13/368Handling requests for interconnection or transfer for access to common bus or bus system with decentralised access control
    • G06F13/37Handling requests for interconnection or transfer for access to common bus or bus system with decentralised access control using a physical-position-dependent priority, e.g. daisy chain, round robin or token passing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/76Architectures of general purpose stored program computers
    • G06F15/78Architectures of general purpose stored program computers comprising a single central processing unit
    • G06F15/7807System on chip, i.e. computer system on a single chip; System in package, i.e. computer system on one or more chips in a single package
    • G06F15/781On-chip cache; Off-chip memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2213/00Indexing scheme relating to interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F2213/0038System on Chip


Abstract

The invention discloses a method for accessing hosts over an AXI bus, comprising the following steps: receiving, through an input interface, a write operation command directed at a host, and caching the write operation command in the command list corresponding to that host in a command virtual output queue; scheduling the write operation command in a round robin manner; caching the data of the write operation command in the data list corresponding to that host in a data virtual output queue; sending the write address information and control parameters of the write operation command to the corresponding host, using a combination of exclusive distribution and shared distribution, for write operation execution; and receiving the write operation execution response returned by the host and sending it to the output interface. The invention also discloses a corresponding system, chip and storage medium. By implementing the invention, AXI access efficiency and security can be improved.

Description

Method, system, chip and storage medium for accessing host by AXI bus
Technical Field
The invention relates to the technical field of data interaction between a System on Chip (SoC) and a plurality of hosts connected to it, and in particular to a method, system, chip and storage medium for accessing a host over an Advanced eXtensible Interface (AXI) bus.
Background
In SoC chips, access between devices is often implemented over an AXI bus; for example, a system may contain multiple external devices or peripherals that communicate with the SoC. Each device connected to the AXI bus can be considered a host; in general, "host" describes an external device or peripheral connected to the SoC, such as a processor, memory, sensor or display. In a multi-host architecture, the SoC typically acts as the central device, serving as scheduler and coordinator: it is responsible for managing and coordinating communication and data flow between the various hosts. When a host needs to send a read or write transaction, it sends the transaction to the SoC, and the SoC performs the corresponding operation and routing according to the transaction type and target device. Depending on the destination, the SoC may forward the transaction to the appropriate host, or pass data from one host to another.
Fig. 1 is a schematic block diagram of a write operation in a prior-art system that accesses hosts over an AXI bus. With reference to fig. 1, a write operation generally includes the following steps:
caching the write operation command received by the input interface in a command buffer (e.g., cmd_fifo), and rejecting further commands when cmd_fifo is full;
the write commands are read out of cmd_fifo one by one in order and sent to the write data channel (AXI_W) and the write address channel (AXI_AW) respectively, according to the AXI protocol; the host's response is received through the write operation execution response channel (AXI_B).
For each write transaction, an address write identification (AWID) is first allocated to the command read from cmd_fifo, according to the state of each bit of the address write identification bitmap (AWID_bitmap); after the AWID is obtained, the data of the write operation command is cached in a data buffer (data_fifo), and once data_fifo is full, cmd_fifo is no longer read; meanwhile, with the AWID as address, the ID carried by the input interface is stored in an outstanding transaction table (ost_table), which records, for each AWID, the transaction type, the completion state and the ID of the corresponding input interface;
after the AXI_B channel returns the host's response, the ost_table entry is read according to the write response identifier (bid) in the response, merged with the write response (brsp) data, and sent to the output interface; at the same time, the bit of the write transaction in the AWID_bitmap is released.
In fig. 1, the if_to_axi module performs interface conversion: it is responsible for converting other types of interfaces into AXI interfaces. For example, if other interface types exist in the system (e.g., AHB or APB), the if_to_axi module converts them to AXI so they can exchange data with components using the AXI bus.
Fig. 2 is a schematic block diagram of a read operation in a prior-art system that accesses hosts over an AXI bus. With reference to fig. 2, a read operation generally includes the following steps:
caching the read operation command received by the input interface in cmd_fifo, and rejecting further commands when cmd_fifo is full;
the command read from cmd_fifo is sent to the AXI_AR channel according to the AXI protocol, and the host's response data is received through the AXI_R channel; the AXI_AR channel carries the address and control information of the read operation.
According to the state of each bit in the address read identification bitmap (ARID_bitmap), an address read identification (ARID) is allocated to the command read from cmd_fifo, and the ID carried by the input interface is stored in the outstanding transaction table (ost_table) with the ARID as address;
when the AXI_R channel receives data returned by the host, the data is written into a data buffer (data_table) according to the identifier (rid) of the read response; when the preset length is reached, the entries in ost_table and data_table are read out by rid and aggregated, then sent to the output interface; at the same time, the bit of the read transaction in the ARID_bitmap is released.
The existing systems that access hosts through the AXI bus have the following defect:
when any one of the multiple hosts accessed over AXI hangs, back pressure propagates directly to the input interface, rendering it unusable; even if the other hosts are functioning normally at that point, AXI access cannot proceed, reducing the efficiency and security of AXI access.
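This defect can be made concrete with a small software sketch contrasting a single shared cmd_fifo with per-host queues (illustrative Python only, not the patent's hardware; the function names and the (host, command) representation are ours):

```python
from collections import deque

def drain_shared_fifo(cmds, alive):
    """Single shared cmd_fifo: a hung host at the head blocks every command behind it."""
    fifo, done = deque(cmds), []
    while fifo and alive[fifo[0][0]]:  # stop at the first command whose host hangs
        done.append(fifo.popleft())
    return done

def drain_per_host_queues(cmds, alive):
    """Per-host command lists: only the hung host's own commands stall."""
    return [c for c in cmds if alive[c[0]]]

# cmds are (host_id, command_name) pairs; host 1 has hung
cmds = [(0, "a"), (1, "b"), (0, "c"), (2, "d")]
alive = {0: True, 1: False, 2: True}
```

With the shared FIFO, only host 0's first command completes before host 1's stuck command back-pressures the whole path; with per-host queues, hosts 0 and 2 are still served.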
Disclosure of Invention
The technical problem to be solved by the invention is to provide a method, system, chip and storage medium for accessing hosts over an AXI bus that can improve AXI access efficiency and security.
To solve the above technical problem, as one aspect of the present invention, there is provided a method for accessing host using AXI bus, including the steps of:
receiving a write operation command aiming at a host through an input interface, and caching the write operation command in a command list corresponding to the host in a command virtual output queue; wherein each host corresponds to a command list;
scheduling the write operation command in a round robin scheduling manner;
caching the data in the write operation command in a data list corresponding to the host in a data virtual output queue, wherein each host corresponds to a data list;
The write address information and the control parameters of the write operation command are sent to the corresponding host to perform write operation execution processing;
and receiving the write operation execution response returned by the host, and sending the write operation execution response to the output interface.
Wherein the step of sending the write address information and control parameters of the write operation command to the corresponding host for write operation execution further includes:
when judging that the exclusive distribution times of the host corresponding to the current write command in the AXI_AW channel are not used up, sending write address information and control parameters of the write operation command to the corresponding host through the AXI_AW channel so as to execute write operation execution processing;
and when judging that the exclusive distribution times of the host corresponding to the current write command are used up, applying for shared distribution times in a round robin scheduling manner, and after the application succeeds, sending the write address information and control parameters of the write operation command to the corresponding host through the AXI_AW channel for write operation execution.
Wherein, further include:
after judging that one command list is full, carrying out back pressure processing on that command list;
after judging that one data list is full, carrying out back pressure processing on the corresponding command list in the command virtual output queue;
after judging that both the exclusive distribution and shared distribution times corresponding to a host are used up, carrying out back pressure processing on the command list corresponding to that host in the command virtual output queue;
and after judging that the exclusive distribution and shared distribution times of all hosts are used up, suspending the round robin scheduling function that outputs from the command virtual output queue.
Accordingly, another aspect of the present invention provides a method for accessing host using an AXI bus, comprising the steps of:
receiving a read operation command aiming at a host through an input interface, and caching the read operation command in a command list corresponding to the host in a command virtual output queue; wherein each host corresponds to a command list;
scheduling the read operation command in a round robin scheduling mode;
transmitting the read address information and the control parameters of the read operation command to a corresponding host so as to perform read operation execution processing;
storing the data returned by the host in the data list corresponding to that host in a data virtual output queue, wherein each host corresponds to a data list;
and carrying out round-robin scheduling on the data in the data virtual output queue, and sending the successfully scheduled data to an output interface.
Wherein the step of sending the read address information and control parameters of the read operation command to the corresponding host for read operation execution further includes:
when judging that the exclusive distribution times of the host corresponding to the current read command in the AXI_AR channel are not used up, sending the read address information and the control parameters of the read operation command to the corresponding host through the AXI_AR channel so as to perform read operation execution processing;
and when judging that the exclusive distribution times of the host corresponding to the current read command are used up, applying for shared distribution times in a round robin scheduling manner, and after the application succeeds, sending the read address information and control parameters of the read operation command to the corresponding host through the AXI_AR channel for read operation execution.
Wherein, further include:
after judging that one command list is full, carrying out back pressure processing on that command list;
after judging that both the exclusive distribution and shared distribution times corresponding to a host are used up, carrying out back pressure processing on the command list corresponding to that host in the command virtual output queue;
and after judging that the exclusive distribution and shared distribution times of all hosts are used up, suspending the round robin scheduling function that outputs from the command virtual output queue.
Accordingly, in yet another aspect of the present invention, there is also provided a system for accessing host using AXI bus, including:
a write command buffer unit, configured to receive a write operation command for a host through an input interface, and buffer the write operation command in a command list corresponding to the host in a command virtual output queue; wherein each host corresponds to a command list;
a write command scheduling processing unit for scheduling the write operation command in a round robin scheduling manner;
the write data caching unit is used for caching the data in the write operation command into a data list corresponding to the host in a data virtual output queue, wherein each host corresponds to a data list;
the write command distribution processing unit is used for sending the write address information and the control parameters of the write operation command to the corresponding host so as to perform write operation execution processing;
and a write response processing unit, configured to receive the write operation execution response returned by the host and send it to the output interface.
Wherein the write command distribution processing unit further includes:
the write command exclusive distribution unit is used for sending write address information and control parameters of the write operation command to the corresponding host through the AXI_AW channel to perform write operation execution processing when judging that the exclusive distribution times of the host corresponding to the current write command in the AXI_AW channel are not used up;
and a write command shared distribution unit, configured to, when judging that the exclusive distribution times of the host corresponding to the current write command are used up, apply for shared distribution times in a round robin scheduling manner, and after the application succeeds, send the write address information and control parameters of the write operation command to the corresponding host through the AXI_AW channel for write operation execution.
Wherein, further include:
a first back pressure processing unit, configured to: carry out back pressure processing on the corresponding command list in the command virtual output queue after judging that one data list is full; carry out back pressure processing on the command list corresponding to a host in the command virtual output queue after judging that both the exclusive and shared distribution times of that host are used up; and suspend the round robin scheduling function that outputs from the command virtual output queue after judging that the exclusive and shared distribution times of all hosts are used up.
Accordingly, in yet another aspect of the present invention, there is also provided a system for accessing host using AXI bus, including:
a read command buffer unit, configured to receive a read operation command for a host through an input interface, and buffer the read operation command in a command list corresponding to the host in a command virtual output queue; wherein each host corresponds to a command list;
A read command scheduling processing unit for scheduling the read operation command in a round-robin scheduling manner;
the read command distribution processing unit is used for sending the read address information and the control parameters of the read operation command to the corresponding host so as to perform read operation execution processing;
a read data caching unit, configured to receive the data returned by the host and store it in the data list corresponding to that host in a data virtual output queue, wherein each host corresponds to a data list;
and the read response processing unit is used for carrying out round-robin scheduling on the data in the data virtual output queue and sending the successfully scheduled data to the output interface.
Wherein the read command distribution processing unit further includes:
the read command exclusive distribution unit is used for sending the read address information and the control parameters of the read operation command to the corresponding host through the AXI_AR channel to perform read operation execution processing when judging that the exclusive distribution times of the host corresponding to the current read command in the AXI_AR channel are not used up;
and a read command shared distribution unit, configured to, when judging that the exclusive distribution times of the host corresponding to the read command are used up, apply for shared distribution times in a round robin scheduling manner, and after the application succeeds, send the read address information and control parameters of the read operation command to the corresponding host through the AXI_AR channel for read operation execution.
Wherein, further include:
a second back pressure processing unit, configured to: carry out back pressure processing on a command list after judging that it is full; carry out back pressure processing on the command list corresponding to a host in the command virtual output queue after judging that both the exclusive and shared distribution times of that host are used up; and suspend the round robin scheduling function that outputs from the command virtual output queue after judging that the exclusive and shared distribution times of all hosts are used up.
Accordingly, in a further aspect of the present invention, there is also provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method as described above.
Accordingly, in yet another aspect of the present invention, there is also provided a chip on which the system for accessing host using AXI bus as described above is disposed.
The implementation of the invention has the following beneficial effects:
the invention provides a method, a system, a storage medium and a chip for accessing host by adopting an AXI bus. Setting a plurality of lists corresponding to host in the command virtual output queue and the data virtual output queue; when an access operation command for a host is received through an input interface, the access operation command is cached in a list corresponding to a command virtual output queue, and data needing to be written into the host or data needing to be read out from the host are cached in a data list corresponding to a data virtual output queue; simultaneously, dispatching the access operation command in a round-robin dispatching mode, and sending address information and control parameters related to the access operation command to a corresponding host in a sharing and distributing mode combined with exclusive distribution so as to perform read-write operation execution processing; therefore, when multiple host is accessed through the AXI bus, even if one host is hung up, other hosts can be normally accessed, and the situation of back pressure on the whole access path is avoided. By implementing the invention, the AXI access efficiency and the security are improved.
Drawings
In order to illustrate the embodiments of the invention or the technical solutions of the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention; other drawings can be obtained from them by those skilled in the art without inventive effort, and such drawings remain within the scope of the invention.
FIG. 1 is a schematic block diagram of a write operation in a prior-art AXI bus access to a host;
FIG. 2 is a schematic block diagram of a read operation in a prior-art AXI bus access to a host;
FIG. 3 is a schematic diagram illustrating the main flow of an embodiment of a method for accessing host using an AXI bus according to the present invention;
FIG. 4 is a schematic diagram of the principle framework of the write operation referred to in FIG. 3;
FIG. 5 is a schematic diagram illustrating the main flow of another embodiment of a method for accessing host using an AXI bus according to the present invention;
FIG. 6 is a schematic diagram of the principle framework of the read operation referred to in FIG. 5;
FIG. 7 is a schematic diagram illustrating one embodiment of a system for accessing host using an AXI bus according to the present invention;
FIG. 8 is a schematic structural diagram of another embodiment of a system for accessing hosts using an AXI bus according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings, for the purpose of making the objects, technical solutions and advantages of the present invention more apparent.
Fig. 3 is a schematic diagram of the main flow of an embodiment of a method for accessing hosts using an AXI bus according to the present invention, described in conjunction with fig. 4. In this embodiment, the method includes at least the following steps:
step S10, receiving a write operation command for a host through an input interface, and caching the write operation command in the command list (list) corresponding to that host in a command virtual output queue (cmd_voq); each host corresponds to one command list. As shown in fig. 4, the write operation command corresponding to host 1 is cached in list 1, and that corresponding to host n in list n. When a command list is full, back pressure is generated at that list's input, and the back pressure signal (i.e., an enable signal) is used to place the corresponding list in a write-prohibited state so that it rejects commands; if all lists are full, back pressure is applied to the whole input port and command writing is prohibited;
step S11, scheduling the write operation command in a round-robin (RR) manner; after scheduling succeeds, allocating an address write identification (AWID) for the write operation command using the address write identification bitmap (AWID_bitmap), and storing the write transaction state information corresponding to the write operation command in the outstanding transaction table (ost_table);
in this embodiment, when commands are selected and allocated in a round robin manner, commands from the multiple command queues are selected and granted resources in turn; each command queue is thus served in order, no single queue is always served first, and fair distribution of commands is achieved;
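A round robin grant over per-host command lists can be sketched as follows (a minimal software model; the class name and queue representation are assumptions, not the patent's implementation):

```python
from collections import deque

class RoundRobinScheduler:
    """Cycle over per-host command lists, granting one command at a time."""
    def __init__(self, num_hosts):
        self.queues = [deque() for _ in range(num_hosts)]
        self.ptr = 0  # next host to consider

    def enqueue(self, host_id, cmd):
        self.queues[host_id].append(cmd)

    def grant(self):
        """Return (host_id, cmd) from the next non-empty list, or None."""
        n = len(self.queues)
        for i in range(n):
            host = (self.ptr + i) % n
            if self.queues[host]:
                self.ptr = (host + 1) % n  # advance past the served host
                return host, self.queues[host].popleft()
        return None
```

Empty lists are skipped, so a host with many pending commands cannot starve the others: each grant resumes the scan just after the last host served.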
The AWID_bitmap plays an important role in AXI communication: it is used to track and manage transactions in the AXI protocol and to determine which AWIDs are valid or allocated. An AWID is an identifier that uniquely identifies an AXI transaction.
For example, assuming that the range of AWID is 0-7, then AWID_bitmap will contain 8 bits. Each bit may represent an allocation status of an AWID, e.g., in one example, the value of awid_bitmap is:
AWID_bitmap:11100101
It can be seen that in this example the AWID_bitmap is an 8-bit bitmap; reading the value 11100101 with the leftmost bit as bit 7, bits 7, 6, 5, 2 and 0 are set to 1, indicating that the AWIDs corresponding to these bits have been allocated or are in a valid state, while the remaining bits are 0, indicating that those AWIDs are unallocated. When an AWID is allocated, the corresponding bit in the AWID_bitmap is set to 1; when the AWID is released, the bit is cleared to 0.
The AWID may be represented as a binary, decimal, or hexadecimal number, depending on the actual implementation and design requirements. In general, the number of bits of the AWID is generally fixed, such as 3 bits, 4 bits, 8 bits, or 16 bits, etc. In one example, assume that the AWID is a 3-bit binary number, in this example, a total of 8 different AWIDs are available.
AWID represents an example:
AWID=000
AWID=010
AWID=111。
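The bitmap bookkeeping described above can be sketched in a few lines (an illustration only; the class name and the default depth of 8 IDs are our assumptions):

```python
class AwidBitmap:
    """Track which AWIDs are allocated; bit i == 1 means AWID i is in use."""
    def __init__(self, num_ids=8):
        self.bits = 0
        self.num_ids = num_ids

    def allocate(self):
        """Return the lowest free AWID and mark it busy, or None if all are busy."""
        for awid in range(self.num_ids):
            if not (self.bits >> awid) & 1:
                self.bits |= 1 << awid  # set the bit to 1 on allocation
                return awid
        return None

    def release(self, awid):
        self.bits &= ~(1 << awid)  # clear the bit to 0 on release
```

Starting from the example value 11100101, the lowest free AWID is 1; allocating it sets that bit, and releasing it restores the original bitmap.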
An outstanding transaction table (ost_table) is a table or data structure for tracking and managing the transactions that are currently outstanding. In general, it contains a number of entries, each corresponding to one pending transaction and storing the information related to it, such as the transaction's AWID, its type, the address it operates on, its state, and the ID information of the input port.
Through the ost_table structure, a particular transaction can be quickly located by its AWID (or another identification associated with the AWID), and the information related to that transaction can be looked up or updated. In this way, the system can track and manage outstanding transactions and perform the necessary operations and processing on them.
step S12, caching the data of the write operation command in the data list corresponding to the host in a data virtual output queue (data_voq) connected to the AXI_W channel, wherein each host corresponds to one data list;
in this embodiment, further comprising:
after judging that one data list is full, back pressure processing is applied to the corresponding command list in the command virtual output queue;
step S13, adopting a mode of combining exclusive distribution and shared distribution to send write address information and control parameters of the write operation command to a corresponding host through an AXI_AW channel so as to perform write operation execution processing;
in a specific example, the step S13 further includes:
when judging that the exclusive distribution times of the host corresponding to the current write command in the AXI_AW channel are not used up, sending the write address information and control parameters of the write operation command to the corresponding host through the AXI_AW channel for write operation execution;
when judging that the exclusive distribution times of the host corresponding to the current write command are used up, applying for shared distribution times in a round robin scheduling manner; after the application succeeds, the write address information and control parameters of the write operation command are sent to the corresponding host through the AXI_AW channel for write operation execution, wherein the control parameters include AWLEN, AWSIZE, etc.
It will be appreciated that in this embodiment, exclusive distribution (i.e., host OST distribution in FIG. 4) refers to distributing the outstanding transactions of the AXI_AW channel to the different hosts, each of which monopolizes a portion of them. In a system where multiple hosts send write transactions over the AXI_AW channel simultaneously, each host may be assigned a specific number of outstanding transactions; these are counters used to track the state and progress of each transaction. Exclusive distribution means that each host monopolizes a portion of the outstanding transactions, so that when a particular host's exclusive outstanding transactions are exhausted, that host can no longer send new transactions.
When a host's exclusive outstanding transactions are exhausted, it applies for shared outstanding transactions (i.e., shared OST distribution in FIG. 4) through a round robin arbitration mechanism. In round robin scheduling, each host in turn applies for shared outstanding transactions, ensuring that every host has an opportunity to acquire additional outstanding transactions and thus send more transactions. A common implementation of round robin scheduling uses a mapping table or array to keep track of the number and status of outstanding transactions per host; the distribution and management of outstanding transactions can be implemented with corresponding algorithms and logic. For example, when a host's exclusive outstanding transactions are exhausted, the round robin scheduler can switch between hosts, allocating shared outstanding transactions in a certain order or priority, concretely by means of counters, pointers or other control logic.
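The counter-based accounting described above can be sketched as follows (an illustrative model only; the class name, pool sizes and release policy are our assumptions, not the patent's logic):

```python
class OstCredits:
    """Per-host exclusive OST credits plus a shared pool used once they run out."""
    def __init__(self, num_hosts, exclusive_per_host, shared_pool):
        self.exclusive = [exclusive_per_host] * num_hosts
        self.shared = shared_pool
        self.shared_held = [0] * num_hosts  # shared credits each host holds

    def acquire(self, host):
        """Take one credit for a new transaction; False means back pressure."""
        if self.exclusive[host] > 0:
            self.exclusive[host] -= 1
            return True
        if self.shared > 0:  # exclusive used up: apply to the shared pool
            self.shared -= 1
            self.shared_held[host] += 1
            return True
        return False

    def release(self, host):
        """Return a credit when the transaction completes (shared pool first)."""
        if self.shared_held[host] > 0:
            self.shared_held[host] -= 1
            self.shared += 1
        else:
            self.exclusive[host] += 1
```

A host that exhausts both its exclusive credits and the shared pool is back-pressured (acquire returns False), matching the back pressure conditions listed below, while other hosts keep their own exclusive credits.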
Wherein, further include:
after judging that both the exclusive distribution and shared distribution times corresponding to a host are used up, back pressure processing is applied to the command list corresponding to that host in the command virtual output queue;
after judging that the exclusive distribution and the shared distribution times of all hosts are used up, suspending the round-robin scheduling function that drives the output of the command virtual output queue; specifically, this can be implemented by applying back pressure to the round-robin (RR) scheduler enable.
Step S14, receiving a write operation execution response returned by the AXI_B channel, reading the data of the corresponding write transaction in the incomplete transaction table, performing data aggregation, sending the result to the output interface, and releasing the corresponding flag bit in the address write identification bitmap.
Specifically, when AXI_B returns, the data of the write transaction (type, state, corresponding input port ID, etc.) is retrieved from the ost_table using the write response identifier (bid), aggregated with the write operation execution response data (brsp), and sent to the output interface; the state of the identification bit corresponding to the AWID in the AWID_bitmap is then released (its value changed from 1 to 0). It will be appreciated that the write response identifier is pre-associated with the AWID or the input port ID.
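As a rough software sketch of this response path (the field names, table layout, and bitmap-as-list representation are all illustrative assumptions; the patent describes hardware):

```python
def aggregate_write_response(ost_table, awid_bitmap, bid, brsp):
    """Sketch of step S14: look up the write transaction by its write
    response identifier (bid), merge the stored transaction data with the
    response, and release the AWID flag bit (1 -> 0).

    `ost_table` maps bid -> dict of transaction data; `awid_bitmap` is a
    plain list of 0/1 flags indexed by AWID."""
    entry = ost_table.pop(bid)          # type, state, input port ID, ...
    awid_bitmap[entry["awid"]] = 0      # release the identification bit
    return {"port": entry["port"], "type": entry["type"], "resp": brsp}
```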
Fig. 5 is a schematic diagram of the main flow of another embodiment of a method for accessing host using AXI bus according to the present invention. In this embodiment, as shown in fig. 6, the method at least includes the following steps:
step S20, receiving a read operation command for a host through an input interface, and caching the read operation command in the command list corresponding to that host in a command virtual output queue (cmd_voq), wherein each host corresponds to one command list; as shown in fig. 4, the read operation command corresponding to host1 is cached in list1, and the read operation command corresponding to hostn is cached in listn; after judging that a command list (list) is full, carrying out back pressure processing on that command list, using a back pressure signal (enable signal) to control the corresponding list to enter a command-write-forbidden state and reject further commands;
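A minimal software sketch of the per-host command lists with full-list back pressure (the queue depth, host count, and method names are illustrative assumptions, not from the patent):

```python
from collections import deque

class CmdVoq:
    """Toy model of the command virtual output queue (cmd_voq): one bounded
    command list per host, with a back-pressure signal raised when a list
    fills up so that further commands are rejected."""

    def __init__(self, num_hosts, depth=8):
        self.lists = [deque() for _ in range(num_hosts)]
        self.depth = depth

    def backpressure(self, host):
        """True when the host's list is full (command-write-forbidden state)."""
        return len(self.lists[host]) >= self.depth

    def push(self, host, cmd):
        """Accept a command unless the host's list is under back pressure."""
        if self.backpressure(host):
            return False  # command rejected; upstream must retry later
        self.lists[host].append(cmd)
        return True
```

A full list for one host does not block the other hosts' lists, which is what keeps a hung host from back-pressuring the whole access path.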
step S21, scheduling the read operation command in a round-robin scheduling mode, and allocating an address read identifier to the read operation command after scheduling succeeds, using an address read identification bitmap (ARID_bitmap); storing the read transaction state information corresponding to the read operation command into the unfinished transaction table (ost_table); the address read identification bitmap (ARID_bitmap) is similar to the address write identification bitmap described above, and reference is made to the foregoing description;
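For illustration, the S21 bookkeeping (allocate an ARID from the bitmap, record the transaction state) might look like the following sketch; the bitmap-as-list representation and all field names are assumptions:

```python
def dispatch_read_cmd(cmd, arid_bitmap, ost_table):
    """Allocate a free ARID (bit 0 -> 1) and record the read transaction
    state in the outstanding-transaction table, keyed by the ARID."""
    arid = next((i for i, bit in enumerate(arid_bitmap) if bit == 0), None)
    if arid is None:
        return None  # no free ARID: the command stays queued
    arid_bitmap[arid] = 1
    ost_table[arid] = {"type": "read", "state": "issued", "port": cmd["port"]}
    return arid
```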
Step S22, the read address information and the control parameters of the read operation command are sent to the corresponding host through an AXI_AR channel by adopting a mode of combining exclusive distribution with shared distribution so as to perform read operation execution processing;
in a specific example, the step S22 further includes:
when judging that the exclusive distribution times of the host corresponding to the current read command in the AXI_AR channel are not used up, sending the read address information and the control parameters of the read operation command to the corresponding host through the AXI_AR channel so as to perform read operation execution processing;
and after judging that the exclusive distribution times of the host corresponding to the current read command are used up, applying for the shared distribution times in a round-robin scheduling mode; after the application is successful, sending the read address information and control parameters of the read operation command to the corresponding host through the AXI_AR channel so as to perform read operation execution processing, wherein the control parameters include ARLEN, ARSIZE, ARBURST, ARLOCK, and the like.
In this process, the method further comprises: after judging that both the exclusive distribution and the shared distribution times corresponding to a host are used up, carrying out back pressure processing on the command list corresponding to that host in the command virtual output queue;
and after judging that the exclusive distribution and the shared distribution times of all hosts are used up, suspending the round-robin scheduling function that drives the output of the command virtual output queue.
Step S23, receiving the data returned by the host on the AXI_R channel and storing it in the data list corresponding to that host in a data virtual output queue (data_voq) connected with the AXI_R channel, wherein each host corresponds to one data list;
and step S24, carrying out round-robin scheduling on the data in the data virtual output queue (data_voq), aggregating the successfully scheduled data with the data of the corresponding read transaction in the incomplete transaction table, sending the result to the output interface, and releasing the corresponding flag bit in the address read identification bitmap.
Specifically, when the data in the data_voq is dequeued after scheduling succeeds, the data of the read transaction (type, state, corresponding input port ID, etc.) is first read from the ost_table using the read response identifier (rid), aggregated with the read operation execution response data, and sent to the output interface; the state of the identification bit corresponding to the ARID in the ARID_bitmap is then released (its value changed from 1 to 0). It will be appreciated that the read response identifier is pre-associated with the ARID or the input port ID.
As shown in fig. 7, a schematic diagram of a system 1 for accessing hosts using an AXI bus according to the present invention is provided; in this embodiment, the system 1 at least includes:
a write command buffer unit 10, configured to receive a write operation command for a host through an input interface, and buffer the write operation command in the command list corresponding to that host in a command virtual output queue (cmd_voq); wherein each host corresponds to one command list;
a write command scheduling processing unit 11, configured to schedule the write operation command in a round robin manner, allocate an address write identifier to the write operation command after scheduling succeeds, using an address write identifier bitmap (AWID_bitmap), and store the write transaction state information corresponding to the write operation command into the incomplete transaction table (ost_table);
a write data caching unit 12, configured to cache the data in the write operation command in the data list corresponding to that host in a data virtual output queue (data_voq) connected to the AXI_W channel, wherein each host corresponds to one data list;
a write command distribution processing unit 13, configured to send the write address information and control parameters of the write operation command to the corresponding host through the AXI_AW channel in a mode combining exclusive distribution with shared distribution, so as to perform write operation execution processing;
a write response processing unit 14, configured to receive the write operation execution response returned by the AXI_B channel, read the data of the corresponding write transaction in the incomplete transaction table, perform data aggregation, send the result to the output interface, and release the corresponding flag bit in the address write identification bitmap.
More specifically, in the embodiment of the present invention, the write command distribution processing unit 13 further includes:
the write command exclusive distribution unit is used for sending write address information and control parameters of the write operation command to the corresponding host through the AXI_AW channel to perform write operation execution processing when judging that the exclusive distribution times of the host corresponding to the current write command in the AXI_AW channel are not used up;
and a write command shared distribution unit, configured to, when judging that the exclusive distribution times of the host corresponding to the current write command are used up, apply for the shared distribution times in a round-robin scheduling mode, and after the application is successful, send the write address information and control parameters of the write operation command to the corresponding host through the AXI_AW channel so as to perform write operation execution processing.
Wherein, the system further includes:
a first back pressure processing unit, configured to: carry out back pressure processing on the corresponding command list in the command virtual output queue after judging that a data list is full; carry out back pressure processing on the command list corresponding to a host in the command virtual output queue after judging that both the exclusive distribution and the shared distribution times corresponding to that host are used up; and suspend the round-robin scheduling function that drives the output of the command virtual output queue after judging that the exclusive distribution and the shared distribution times of all hosts are used up.
For more details, reference is made to the foregoing description in conjunction with fig. 3 and fig. 4; details are not repeated here.
As shown in fig. 8, a schematic diagram of a system 2 for accessing hosts using an AXI bus according to the present invention is provided; in this embodiment, the system 2 at least includes:
a read command buffer unit 20, configured to receive a read operation command for a host through an input interface, and buffer the read operation command in the command list corresponding to that host in a command virtual output queue (cmd_voq); wherein each host corresponds to one command list;
a read command scheduling processing unit 21, configured to schedule the read operation command in a round robin manner, allocate an address read identifier to the read operation command after scheduling succeeds, using an address read identifier bitmap (ARID_bitmap), and store the read transaction state information corresponding to the read operation command into the unfinished transaction table (ost_table);
a read command distribution processing unit 22, configured to send the read address information and control parameters of the read operation command to the corresponding host through the AXI_AR channel in a mode combining exclusive distribution with shared distribution, so as to perform read operation execution processing;
a read data buffer unit 23, configured to receive the data returned by the host on the AXI_R channel, and store the data in the data list corresponding to that host in a data virtual output queue (data_voq) connected to the AXI_R channel, wherein each host corresponds to one data list;
a read response processing unit 24, configured to perform round-robin scheduling on the data in the data virtual output queue (data_voq), aggregate the successfully scheduled data with the data of the corresponding read transaction in the incomplete transaction table, send the result to the output interface, and release the corresponding flag bit in the address read identification bitmap.
More specifically, in the present embodiment, the read command distribution processing unit 22 further includes:
a read command exclusive distribution unit, configured to, when judging that the exclusive distribution times of the host corresponding to the current read command in the AXI_AR channel are not used up, send the read address information and control parameters of the read operation command to the corresponding host through the AXI_AR channel so as to perform read operation execution processing;
and a read command shared distribution unit, configured to, when judging that the exclusive distribution times of the host corresponding to the current read command are used up, apply for the shared distribution times in a round-robin scheduling mode, and after the application is successful, send the read address information and control parameters of the read operation command to the corresponding host through the AXI_AR channel so as to perform read operation execution processing.
Wherein, the system further includes:
a second back pressure processing unit, configured to: carry out back pressure processing on a command list after judging that it is full; carry out back pressure processing on the command list corresponding to a host in the command virtual output queue after judging that both the exclusive distribution and the shared distribution times corresponding to that host are used up; and suspend the round-robin scheduling function that drives the output of the command virtual output queue after judging that the exclusive distribution and the shared distribution times of all hosts are used up.
For more details, reference is made to the foregoing description in conjunction with fig. 5 and fig. 6; details are not repeated here.
Accordingly, in a further aspect of the present invention, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method described with reference to the preceding figs. 3 to 6. For more details, reference is made to the foregoing description in conjunction with figs. 3 to 6; details are not repeated here.
Accordingly, in yet another aspect of the present invention, there is also provided a chip on which the system for accessing hosts using an AXI bus described with reference to the foregoing fig. 7 and/or fig. 8 is disposed. For more details, reference is made to the foregoing description in conjunction with figs. 7 and 8; details are not repeated here.
The implementation of the invention has the following beneficial effects:
the invention provides a method, a system, a storage medium, and a chip for accessing hosts over an AXI bus. A plurality of lists, one per host, are set in the command virtual output queue and the data virtual output queue. When an access operation command for a host is received through an input interface, the command is cached in the corresponding list of the command virtual output queue, and the data to be written to the host, or read out of the host, is cached in the corresponding data list of the data virtual output queue. Meanwhile, the access operation command is scheduled in a round-robin scheduling mode, and the address information and control parameters of the command are sent to the corresponding host in a mode combining exclusive distribution with shared distribution, so as to perform the read or write operation. As a result, when multiple hosts are accessed through the AXI bus, other hosts can still be accessed normally even if one host hangs, and back pressure on the whole access path is avoided. Implementing the invention improves AXI access efficiency and safety. In addition, by adopting the command virtual output queue and the data virtual output queue, commands can be processed in parallel while resources are saved.
It will be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above disclosure is only a preferred embodiment of the present invention, and the scope of the invention is of course not limited thereto; equivalent changes made according to the claims of the present invention therefore still fall within the scope of the present invention.

Claims (14)

1. A method for accessing host using an AXI bus, comprising the steps of:
receiving a write operation command aiming at a host through an input interface, and caching the write operation command in a command list corresponding to the host in a command virtual output queue; wherein each host corresponds to a command list;
scheduling the write operation command in a round robin scheduling manner;
caching the data in the write operation command in a data list corresponding to the host in a data virtual output queue, wherein each host corresponds to a data list;
the write address information and the control parameters of the write operation command are sent to the corresponding host to perform write operation execution processing;
and receiving a write operation execution response returned by host, and sending the write operation execution response to the output interface.
2. The method of claim 1, wherein sending the write address information and the control parameters of the write operation command to the corresponding host for write operation execution processing, further comprises:
When judging that the exclusive distribution times of the host corresponding to the current write command in the AXI_AW channel are not used up, sending write address information and control parameters of the write operation command to the corresponding host through the AXI_AW channel so as to execute write operation execution processing;
and when judging that the exclusive distribution times of the host corresponding to the current write command are used up, applying for shared distribution times in a round robin scheduling manner, and after the application is successful, sending the write address information and the control parameters of the write operation command to the corresponding host through the AXI_AW channel so as to perform write operation execution processing.
3. The method as recited in claim 2, further comprising:
after judging that one command list is full, carrying out back pressure treatment on the command list;
after judging that one data list is fully written, carrying out back pressure treatment on a command list corresponding to the command virtual output queue;
after judging that both the exclusive distribution and the shared distribution times corresponding to a host are used up, carrying out back pressure processing on the command list corresponding to that host in the command virtual output queue;
and after judging that the exclusive distribution and the shared distribution times of all hosts are used up, suspending the round-robin scheduling function that drives the output of the command virtual output queue.
4. A method for accessing host using an AXI bus, comprising the steps of:
receiving a read operation command aiming at a host through an input interface, and caching the read operation command in a command list corresponding to the host in a command virtual output queue; wherein each host corresponds to a command list;
scheduling the read operation command in a round robin scheduling mode;
transmitting the read address information and the control parameters of the read operation command to a corresponding host so as to perform read operation execution processing;
storing data returned by host in a data list corresponding to the host in a data virtual output queue, wherein each host corresponds to a data list;
and carrying out round-robin scheduling on the data in the data virtual output queue, and sending the successfully scheduled data to an output interface.
5. The method of claim 4, wherein sending the read address information and control parameters of the read operation command to the corresponding host for read operation execution processing, further comprises:
when judging that the exclusive distribution times of the host corresponding to the current read command in the AXI_AR channel are not used up, sending the read address information and the control parameters of the read operation command to the corresponding host through the AXI_AR channel so as to perform read operation execution processing;
and when judging that the exclusive distribution times of the host corresponding to the current read command are used up, applying for shared distribution times in a round robin scheduling manner, and after the application is successful, sending the read address information and the control parameters of the read operation command to the corresponding host through the AXI_AR channel so as to perform read operation execution processing.
6. The method as recited in claim 5, further comprising:
after judging that one command list is full, carrying out back pressure treatment on the command list;
after judging that both the exclusive distribution and the shared distribution times corresponding to a host are used up, carrying out back pressure processing on the command list corresponding to that host in the command virtual output queue;
and after judging that the exclusive distribution and the shared distribution times of all hosts are used up, suspending the round-robin scheduling function that drives the output of the command virtual output queue.
7. A system for accessing host using an AXI bus, comprising:
a write command buffer unit, configured to receive a write operation command for a host through an input interface, and buffer the write operation command in a command list corresponding to the host in a command virtual output queue; wherein each host corresponds to a command list;
A write command scheduling processing unit for scheduling the write operation command in a round robin scheduling manner;
the write data caching unit is used for caching the data in the write operation command into a data list corresponding to the host in a data virtual output queue, wherein each host corresponds to a data list;
the write command distribution processing unit is used for sending the write address information and the control parameters of the write operation command to the corresponding host so as to perform write operation execution processing;
and the write response processing unit is used for receiving the write operation execution response returned by host and sending the write operation execution response to the output interface.
8. The system of claim 7, wherein the write command distribution processing unit further comprises:
the write command exclusive distribution unit is used for sending write address information and control parameters of the write operation command to the corresponding host through the AXI_AW channel to perform write operation execution processing when judging that the exclusive distribution times of the host corresponding to the current write command in the AXI_AW channel are not used up;
and a write command shared distribution unit, configured to, when judging that the exclusive distribution times of the host corresponding to the current write command are used up, apply for the shared distribution times in a round-robin scheduling mode, and after the application is successful, send the write address information and the control parameters of the write operation command to the corresponding host through the AXI_AW channel so as to perform write operation execution processing.
9. The system as recited in claim 8, further comprising:
a first back pressure processing unit, configured to: carry out back pressure processing on the corresponding command list in the command virtual output queue after judging that one data list is full; carry out back pressure processing on the command list corresponding to a host in the command virtual output queue after judging that both the exclusive distribution and the shared distribution times corresponding to that host are used up; and suspend the round-robin scheduling function that drives the output of the command virtual output queue after judging that the exclusive distribution and the shared distribution times of all hosts are used up.
10. A system for accessing host using an AXI bus, comprising:
a read command buffer unit, configured to receive a read operation command for a host through an input interface, and buffer the read operation command in a command list corresponding to the host in a command virtual output queue; wherein each host corresponds to a command list;
a read command scheduling processing unit for scheduling the read operation command in a round-robin scheduling manner;
the read command distribution processing unit is used for sending the read address information and the control parameters of the read operation command to the corresponding host so as to perform read operation execution processing;
The data reading cache unit is used for receiving data returned by host and storing the data in a data list corresponding to the host in a data virtual output queue, wherein each host corresponds to a data list;
and the read response processing unit is used for carrying out round-robin scheduling on the data in the data virtual output queue and sending the successfully scheduled data to the output interface.
11. The system of claim 10, wherein the read command distribution processing unit further comprises:
the read command exclusive distribution unit is used for sending the read address information and the control parameters of the read operation command to the corresponding host through the AXI_AR channel to perform read operation execution processing when judging that the exclusive distribution times of the host corresponding to the current read command in the AXI_AR channel are not used up;
and a read command shared distribution unit, configured to, when judging that the exclusive distribution times of the host corresponding to the current read command are used up, apply for the shared distribution times in a round-robin scheduling mode, and after the application is successful, send the read address information and the control parameters of the read operation command to the corresponding host through the AXI_AR channel so as to perform read operation execution processing.
12. The system as recited in claim 11, further comprising:
a second back pressure processing unit, configured to: carry out back pressure processing on a command list after judging that it is full; carry out back pressure processing on the command list corresponding to a host in the command virtual output queue after judging that both the exclusive distribution and the shared distribution times corresponding to that host are used up; and suspend the round-robin scheduling function that drives the output of the command virtual output queue after judging that the exclusive distribution and the shared distribution times of all hosts are used up.
13. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method according to any one of claims 1 to 3 or 4 to 6.
14. A chip having disposed thereon a system according to any one of claims 7 to 9 or 10 to 12 employing AXI bus access host.
CN202310907235.8A 2023-07-21 2023-07-21 Method, system, chip and storage medium for accessing host by AXI bus Pending CN116893993A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310907235.8A CN116893993A (en) 2023-07-21 2023-07-21 Method, system, chip and storage medium for accessing host by AXI bus


Publications (1)

Publication Number Publication Date
CN116893993A true CN116893993A (en) 2023-10-17



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination