CN117931700A - Controller, chip, electronic device and data reading method - Google Patents

Controller, chip, electronic device and data reading method

Info

Publication number
CN117931700A
CN117931700A
Authority
CN
China
Prior art keywords
data
read
read address
time
target data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311679266.9A
Other languages
Chinese (zh)
Inventor
王一
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
KT MICRO Inc
Original Assignee
KT MICRO Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by KT MICRO Inc filed Critical KT MICRO Inc
Priority to CN202311679266.9A priority Critical patent/CN117931700A/en
Publication of CN117931700A publication Critical patent/CN117931700A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • G06F13/16Handling requests for interconnection or transfer for access to memory bus
    • G06F13/1605Handling requests for interconnection or transfer for access to memory bus based on arbitration
    • G06F13/161Handling requests for interconnection or transfer for access to memory bus based on arbitration with latency improvement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • G06F13/16Handling requests for interconnection or transfer for access to memory bus
    • G06F13/1668Details of memory controller
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F5/00Methods or arrangements for data conversion without changing the order or content of the data handled
    • G06F5/06Methods or arrangements for data conversion without changing the order or content of the data handled for changing the speed of data flow, i.e. speed regularising or timing, e.g. delay lines, FIFO buffers; over- or underrun control therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10Providing a specific technical effect
    • G06F2212/1016Performance improvement
    • G06F2212/1024Latency reduction

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)

Abstract

The application provides a controller, a chip, an electronic device and a data reading method, and relates to the field of integrated circuits. The controller includes a first-in first-out queue configured to store data from an external memory, and a queue logic controller configured to: receive the latest read address sent by the processor, wherein the latest read address is used for reading target data in the external memory; judge whether a new data reading flow for the latest read address needs to be initiated; and, when a new data reading flow does not need to be initiated, acquire the target data based on the data read by the current data reading flow. The new data reading flow comprises: stopping the current data reading flow, sequentially reading data from the memory in address-increasing order starting from the latest read address, and storing the data into the first-in first-out queue. This solves the problem of low read access efficiency when the processor sends a new address to the memory through the controller.

Description

Controller, chip, electronic device and data reading method
Technical Field
The application belongs to the field of integrated circuits, and particularly relates to a controller, a chip, an electronic device and a data reading method.
Background
As the configuration requirements of hardware devices rise, so do the requirements on the read-write efficiency and overall performance of the processors inside them. When the memory is read in XIP (eXecute In Place) mode, the memory, after receiving an address sent by the processor, writes the data corresponding to each address into the controller in address-increasing order, and the controller reads the data out to the processor in first-in first-out order. However, if the processor then sends a new address to the memory, the read flow based on the original address is interrupted and a new read flow is started from the new address, so read access efficiency is low whenever the processor sends a new address to the memory through the controller.
Disclosure of Invention
In view of the above, the present application is directed to a controller, a chip, an electronic device and a data reading method, so as to improve the read access efficiency when the processor sends a new address to the memory through the controller.
Embodiments of the present application are implemented as follows:
In a first aspect, an embodiment of the present application provides a controller, including: a first-in first-out queue configured to store data from an external memory; and a queue logic controller configured to: receive a latest read address sent by a processor, wherein the latest read address is used for reading target data in the external memory; judge whether a new data reading flow for the latest read address needs to be initiated; and acquire the target data based on the data read by the current data reading flow when a new data reading flow does not need to be initiated; wherein the new data reading flow comprises: stopping the current data reading flow, sequentially reading data from the memory in address-increasing order starting from the latest read address, and storing the data into the first-in first-out queue.
In the embodiment of the application, a new data reading flow for the latest read address requires suspending the current data reading flow and then reading the target data from the memory in address-increasing order starting from the latest read address, which makes reading the target data slow. To improve the reading efficiency, the queue logic controller first judges whether a new data reading flow needs to be initiated; when it does not, the target data is obtained directly from the current data reading flow, which is faster than initiating a new data reading flow. The efficiency of reading the target data corresponding to the latest read address is therefore improved by relying on the current data reading flow.
With reference to a possible implementation manner of the embodiment of the first aspect, the queue logic controller is further configured to determine whether the first-in first-out queue stores the target data, where if the first-in first-out queue stores the target data, a new data reading procedure does not need to be initiated.
In the embodiment of the application, when the queue logic controller receives the latest read address sent by the processor, it judges whether the target data corresponding to the latest read address is stored in the first-in first-out queue. If the first-in first-out queue contains the target data, the external memory has already written the target data into the first-in first-out queue through the current data reading flow, so initiating a new data reading flow for the latest read address would waste time. In that case no new data reading flow needs to be initiated: the target data is obtained directly from the first-in first-out queue, which improves the efficiency of the processor in reading the target data.
With reference to a possible implementation manner of the first aspect embodiment, the queue logic controller is further configured to: determining that the first-in first-out queue stores the target data under the condition that the latest read address is not smaller than the current read address of the read pointer in the first-in first-out queue and the difference value between the latest read address and the current read address is not larger than the length of the first-in first-out queue; or under the condition that the latest read address is smaller than the current read address and the data of the position of the target data in the first-in first-out queue is not updated, determining that the target data is stored in the first-in first-out queue.
In the embodiment of the application, the current read address is the address corresponding to the data being read out of the first-in first-out queue to the processor by the current data reading flow. When the latest read address is not smaller than the current read address where the read pointer is located, and the difference between the latest read address and the current read address is not greater than the length of the first-in first-out queue, the external memory can be considered to have already written the target data into the first-in first-out queue through the current data reading flow, while the first-in first-out queue has not yet read the target data out to the processor. When the latest read address is smaller than the current read address, the latest read address corresponds to data that has already been read out by the current data reading flow. Because the first-in first-out queue can only hold a limited amount of data, once it is full the newly written data replaces the oldest entries in address-increasing order; whether the target data is still present therefore depends on whether the entry at the position of the target data has been updated. Accordingly, when the latest read address is smaller than the current read address and the data at the position of the target data in the first-in first-out queue has not been updated, it can be accurately determined that the target data is stored in the first-in first-out queue.
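As a non-limiting illustration, the two hit conditions above can be written as a small C predicate. The structure, field names and the slot_overwritten helper below are hypothetical and assume one queue entry per address; they are not part of the claimed design.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical view of the FIFO state used by the queue logic controller;
   addresses are treated as entry indexes (one queue entry per address).   */
typedef struct {
    uint32_t read_ptr_addr;                   /* current read address (read pointer)    */
    uint32_t fifo_len;                        /* length of the first-in first-out queue */
    bool (*slot_overwritten)(uint32_t addr);  /* has the slot for addr been re-used?    */
} fifo_state_t;

/* True when the target data for latest_addr is already inside the queue. */
static bool fifo_contains_target(const fifo_state_t *f, uint32_t latest_addr)
{
    if (latest_addr >= f->read_ptr_addr &&
        latest_addr - f->read_ptr_addr <= f->fifo_len)
        return true;   /* not smaller than the current read address, within the length */

    if (latest_addr < f->read_ptr_addr && !f->slot_overwritten(latest_addr))
        return true;   /* behind the read pointer, but its slot has not been updated   */

    return false;
}
```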
With reference to a possible implementation manner of the embodiment of the first aspect, the queue logic controller is further configured to compare, in the case where the target data is not stored in the first-in first-out queue, the latest read address with the current read address where the read pointer in the first-in first-out queue is located; if the latest read address is smaller than the current read address, a new data reading flow needs to be initiated.
In the embodiment of the application, when the target data is not stored in the first-in first-out queue and the latest read address is smaller than the current read address, the data at the position of the target data in the first-in first-out queue has been updated, so the target data can no longer be read from the queue. By comparing the latest read address with the current read address where the read pointer is located, whether a new data reading flow needs to be initiated can be judged quickly.
With reference to a possible implementation manner of the embodiment of the first aspect, the queue logic controller is further configured to compare the sizes of the first time and the second time in a case that the latest read address is greater than the current read address; if the first time is less than the second time, a new data reading process needs to be initiated; if the first time is not less than the second time, a new data reading process does not need to be initiated; the first time is the time required by the new data reading flow to read the target data, and the second time is the time required by the current data reading flow to read the target data.
In the embodiment of the application, the first time is compared with the second time, and the data reading flow with the smaller time is selected to read the target data. Since the flow with the shorter time is always chosen, the efficiency of the processor in reading the target data from the external memory is improved.
With reference to a possible implementation manner of the embodiment of the first aspect, the queue logic controller is further configured to control, in a case where the target data is stored in the fifo queue, a read pointer of the fifo queue to jump from the data corresponding to the current read address to the target data, and read out the target data.
In the embodiment of the application, when the queue logic controller receives the latest read address sent by the processor and determines that the target data is stored in the first-in first-out queue, the read pointer of the first-in first-out queue is controlled to jump from the data corresponding to the current read address to the target data and read the target data out, which saves the time of initiating a new data reading flow to read the target data and further improves the read access efficiency of the processor on the data in the external memory.
With reference to a possible implementation manner of the first aspect embodiment, the queue logic controller is further configured to wait for the current data reading procedure to read and output the target data if the first time is not less than the second time.
In the embodiment of the application, under the condition that the first time is not less than the second time, the time required for reading the target data by the new data reading flow is not less than the time required for reading the target data by the current data reading flow, and the reading efficiency of the processor on the target data can be improved by waiting for the current data reading flow to read and output the target data.
With reference to a possible implementation manner of the first aspect embodiment, the queue logic controller is further configured to: determining that the first-in first-out queue stores the target data under the condition that the latest read address is not smaller than the current read address of the read pointer in the first-in first-out queue and the difference value between the latest read address and the current read address is not larger than the length of the first-in first-out queue; or under the condition that the latest read address is smaller than the current read address and the data of the position of the target data in the first-in first-out queue is not updated, determining that a new data reading flow does not need to be initiated.
In the embodiment of the application, when the latest read address is not smaller than the current read address where the read pointer in the first-in first-out queue is located and the difference between the latest read address and the current read address is not greater than the length of the first-in first-out queue, or when the latest read address is smaller than the current read address and the data at the position of the target data in the first-in first-out queue has not been updated, the external memory has already written the target data into the first-in first-out queue through the current data reading flow. Initiating a new data reading flow for the latest read address would therefore waste time, so no new data reading flow needs to be initiated and the target data is obtained directly from the first-in first-out queue, which improves the efficiency of the processor in reading the target data.
With reference to a possible implementation manner of the first aspect embodiment, the queue logic controller is further configured to: comparing the first time and the second time under the condition that the latest read address is larger than the current read address of the read pointer in the first-in first-out queue; if the first time is less than the second time, a new data reading process needs to be initiated; if the first time is not less than the second time, a new data reading process does not need to be initiated; the first time is the time required by the new data reading flow to read the target data, and the second time is the time required by the current data reading flow to read the target data.
In the embodiment of the application, the first time is compared with the second time, and the data reading flow with the smaller time is selected to read the target data. Since the flow with the shorter time is always chosen, the efficiency of the processor in reading the target data from the external memory is improved.
In a second aspect, an embodiment of the present application provides a data reading method, including: receiving a latest read address sent by a processor, wherein the latest read address is used for reading target data in an external memory; judging whether a new data reading flow for the latest read address needs to be initiated; and acquiring the target data based on the data read by the current data reading flow when a new data reading flow does not need to be initiated; wherein the new data reading flow comprises: stopping the current data reading flow, sequentially reading data from the memory in address-increasing order starting from the latest read address, and storing the data into a first-in first-out queue.
With reference to one possible implementation manner of the second aspect embodiment, determining whether a new data reading procedure for the latest read address needs to be initiated includes: judging whether the first-in first-out queue stores the target data or not; if the target data is stored in the first-in first-out queue, a new data reading flow does not need to be initiated.
With reference to a possible implementation manner of the second aspect embodiment, determining whether the target data is stored in the fifo queue includes: determining that the first-in first-out queue stores the target data under the condition that the latest read address is not smaller than the current read address of the read pointer in the first-in first-out queue and the difference value between the latest read address and the current read address is not larger than the length of the first-in first-out queue; or under the condition that the latest read address is smaller than the current read address and the data of the position of the target data in the first-in first-out queue is not updated, determining that the target data is stored in the first-in first-out queue.
With reference to a possible implementation manner of the second aspect embodiment, after determining whether the target data is stored in the fifo queue, the method further includes: comparing the latest read address with the current read address under the condition that the target data is not stored in the first-in first-out queue; if the latest read address is smaller than the current read address, a new data reading process needs to be initiated.
With reference to a possible implementation manner of the embodiment of the second aspect, after comparing the size of the latest read address with the size of the current read address, the method further includes: comparing the magnitudes of the first time and the second time; if the first time is less than the second time, a new data reading process needs to be initiated; if the first time is not less than the second time, a new data reading process does not need to be initiated; the first time is the time required by the new data reading flow to read the target data, and the second time is the time required by the current data reading flow to read the target data.
With reference to a possible implementation manner of the second aspect embodiment, acquiring the target data based on the data read by the current data reading flow includes: and under the condition that the target data is stored in the first-in first-out queue, controlling a read pointer of the first-in first-out queue to jump from the data corresponding to the current read address to the target data, and reading the target data.
With reference to a possible implementation manner of the second aspect embodiment, acquiring the target data based on the data read by the current data reading flow includes: and under the condition that the first time is not less than the second time, waiting for the current data reading flow to read the target data, and outputting the target data.
With reference to one possible implementation manner of the second aspect embodiment, determining whether a new data reading procedure for the latest read address needs to be initiated includes: determining that the first-in first-out queue stores the target data under the condition that the latest read address is not smaller than the current read address of the read pointer in the first-in first-out queue and the difference value between the latest read address and the current read address is not larger than the length of the first-in first-out queue; or under the condition that the latest read address is smaller than the current read address and the data of the position of the target data in the first-in first-out queue is not updated, determining that a new data reading flow does not need to be initiated.
With reference to one possible implementation manner of the second aspect embodiment, determining whether a new data reading procedure for the latest read address needs to be initiated includes:
Comparing the first time and the second time when the latest read address is larger than the current read address of the read pointer in the first-in first-out queue and the difference between the latest read address and the current read address is larger than the length of the first-in first-out queue; if the first time is less than the second time, a new data reading process needs to be initiated; if the first time is not less than the second time, a new data reading process does not need to be initiated; the first time is the time required by the new data reading flow to read the target data, and the second time is the time required by the current data reading flow to read the target data.
In a third aspect, an embodiment of the present application further provides a chip, where the chip includes: a processor, a memory and a controller as provided by the embodiments of the first aspect and/or any one of the possible implementations in combination with the embodiments of the first aspect, the processor being connected to the controller, the controller being connected to the memory.
In a fourth aspect, embodiments of the present application also provide an electronic device including at least one processor and at least one memory, the processor coupled to the memory, the memory configured to store a program; the processor is configured to invoke the program stored in the memory, to implement the embodiments of the second aspect and/or the method provided in connection with any possible implementation of the embodiments of the second aspect when executing the computer program stored in the memory.
It should be understood that, the second to fourth aspects of the embodiments of the present invention are consistent with the technical solutions of the first aspect of the embodiments of the present invention, and the beneficial effects obtained by each aspect and the corresponding possible implementation manner are similar, and are not repeated.
Additional features and advantages of the application will be set forth in the description which follows. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and drawings.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. The above and other objects, features and advantages of the present application will become more apparent from the accompanying drawings.
Fig. 1 shows a schematic structural diagram of a controller according to an embodiment of the present application.
Fig. 2a is a schematic diagram illustrating a first fifo queue performing a read process according to an embodiment of the application.
Fig. 2b is a schematic diagram illustrating a second fifo queue performing a read process according to an embodiment of the application.
Fig. 2c is a schematic diagram illustrating a third fifo queue performing a read process according to an embodiment of the application.
Fig. 3 is a schematic diagram illustrating a fourth fifo queue performing a read process according to an embodiment of the application.
Fig. 4 is a schematic diagram illustrating a fifth fifo queue performing a read process according to an embodiment of the application.
Fig. 5 is a schematic diagram illustrating a sixth fifo queue performing a read process according to an embodiment of the application.
Fig. 6 is a schematic diagram of a seventh fifo queue performing a read process according to an embodiment of the application.
Fig. 7 is a schematic diagram of an eighth fifo queue performing a read process according to an embodiment of the application.
Fig. 8a is a schematic diagram illustrating a ninth fifo queue performing a read process according to an embodiment of the application.
Fig. 8b is a schematic diagram illustrating a read process performed by the tenth fifo queue according to an embodiment of the application.
Fig. 8c is a schematic diagram illustrating a read process performed by the eleventh fifo queue according to an embodiment of the application.
Fig. 9 is a schematic diagram of a twelfth fifo queue performing a read process according to an embodiment of the application.
Fig. 10 is a schematic diagram illustrating a data reading flow of a controller according to an embodiment of the present application.
Fig. 11 shows a schematic structural diagram of a controller according to an embodiment of the present application.
Fig. 12 is a schematic flow chart of a data reading method provided by the application.
Fig. 13 shows a schematic structural diagram of a chip according to an embodiment of the present application.
Fig. 14 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. The following examples are given by way of illustration for more clearly illustrating the technical solution of the present application, and are not to be construed as limiting the scope of the application. Those skilled in the art will appreciate that the embodiments described below and features of the embodiments can be combined with one another without conflict.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, relational terms such as "first," "second," and the like may be used solely to distinguish one entity or action from another entity or action in the description of the application without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In order to solve the problem of low read access efficiency when the processor sends a new address to the memory through the controller in the prior art, please refer to fig. 1, fig. 1 is a schematic diagram of a structure of a controller according to an embodiment of the application. As shown in fig. 1, the controller 1 is connected to the processor 2 and the external memory 3, respectively, and the controller 1 includes a first-in first-out queue 10 and a queue logic controller 20.
The controller 1 may be a QSPI controller (Quad SPI, six-wire serial interface controller).
The processor 2 may be an on-chip processor or a general-purpose processor with signal processing capability, including a central processing unit (CPU), a network processor (NP), a microprocessor, and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The external memory 3 may be QSPI flash (Quad SPI flash), or may be random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or the like.
The first-in first-out queue 10 is configured to store data from an external memory. The queue logic controller 20 is configured to receive the latest read address sent by the processor 2.
The latest read address is used for reading target data in the external memory. The queue logic controller judges whether a new data reading flow for the latest read address needs to be initiated and, when a new data reading flow does not need to be initiated, acquires the target data based on the data read by the current data reading flow. The new data reading flow comprises: stopping the current data reading flow, sequentially reading data from the memory in address-increasing order starting from the latest read address, and storing the data into the first-in first-out queue.
In this embodiment, since the controller 1 accesses the external memory 3 in XIP mode, the current data reading flow performed by the controller 1 before the queue logic controller 20 receives the latest read address sent by the processor 2 is as follows: the queue logic controller 20 in the controller 1 forwards the read address sent by the processor 2 to the external memory 3; the external memory 3, taking that read address as the start address, writes data into the first-in first-out queue 10 in address-increasing order; and the first-in first-out queue 10 reads the data out to the processor 2 in first-in first-out order.
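For illustration only, the current data reading flow described above can be sketched as the following C model, assuming one word of data per address and hypothetical names (memory_read_word, prefetch_step, fifo_read_step); it captures only the address-increasing fill and the first-in first-out drain, not the actual hardware.

```c
#include <stdint.h>

#define FIFO_DEPTH 8u              /* assumed depth, for illustration only */

typedef struct {
    uint32_t data[FIFO_DEPTH];
    uint32_t next_fetch_addr;      /* next external-memory address to write in  */
    uint32_t rd_addr;              /* address of the entry at the read pointer  */
    uint32_t count;                /* entries currently buffered                */
} fifo_t;

/* Hypothetical external-memory access: one word of data per address. */
extern uint32_t memory_read_word(uint32_t addr);

/* One step of the current data reading flow: the external memory keeps
   writing data into the queue in address-increasing order while there is
   room.  next_fetch_addr and rd_addr both start at the read address sent
   by the processor.                                                       */
static void prefetch_step(fifo_t *f)
{
    if (f->count < FIFO_DEPTH) {
        f->data[f->next_fetch_addr % FIFO_DEPTH] = memory_read_word(f->next_fetch_addr);
        f->next_fetch_addr++;
        f->count++;
    }
}

/* Processor-side read in first-in first-out order. */
static uint32_t fifo_read_step(fifo_t *f)
{
    uint32_t v = f->data[f->rd_addr % FIFO_DEPTH];
    f->rd_addr++;
    f->count--;
    return v;
}
```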
For a better understanding, please refer to fig. 2a, which shows a schematic diagram of a read flow performed by a first-in first-out queue. As can be seen from fig. 2a, assuming that the read address received by the queue logic controller 20 is address 0, the queue logic controller 20 transmits address 0 to the external memory 3, and the external memory 3 writes the data corresponding to address 0, address 1, address 2, and so on into the first-in first-out queue 10 in address-increasing order starting from address 0.
In the case where the external memory 3 sequentially writes data into the first-in first-out queue 10 in address-increasing order, fig. 2b shows the order in which the first-in first-out queue writes and reads data based on the first-in first-out rule. It can be seen that the data corresponding to address 0 is written first and is also read out first.
Referring to fig. 2c, when the length of the first-in first-out queue 10 is 3, the write pointer points to address 3 and the read pointer points to address 0, which indicates that the external memory 3 is writing the data corresponding to address 3 into the first-in first-out queue 10 while the first-in first-out queue 10 is reading the data corresponding to address 0 out to the processor 2.
Further, the target data is the data corresponding to the latest read address. By judging whether a new data reading flow for the latest read address needs to be initiated, and obtaining the target data directly from the data read by the current data reading flow when it does not, the time needed to restart a read flow from the latest read address is saved, and the read access efficiency of the processor 2 on the data in the external memory 3 is improved.
In the case that the queue logic controller 20 in the controller 1 receives the latest read address sent by the processor 2, in order to be able to quickly determine whether a new data read flow for the latest read address needs to be initiated, in one embodiment, the queue logic controller 20 is further configured to determine whether the first-in-first-out queue 10 stores target data.
If the fifo queue 10 stores the target data, a new data reading process does not need to be initiated.
In this embodiment, when the queue logic controller 20 receives the latest read address sent by the processor 2, it determines whether the first-in first-out queue 10 stores the target data corresponding to the latest read address. If the first-in first-out queue 10 contains the target data, the external memory 3 has already written the target data into the first-in first-out queue 10 through the current data reading flow, so initiating a new data reading flow for the latest read address would waste time. Therefore, when the target data is contained in the first-in first-out queue 10, no new data reading flow needs to be initiated: the target data is obtained directly from the first-in first-out queue 10, which improves the efficiency of the processor 2 in reading the target data.
In the current data reading flow, the first-in first-out queue 10 is filled in address-increasing order, so the difference between the address of the data being written from the external memory 3 and the address of the data being read out to the processor 2 is at most the length of the first-in first-out queue 10, and data that has just been written is not read out immediately. To determine more quickly whether the first-in first-out queue stores the target data, in one embodiment the queue logic controller 20 is further configured to determine that the first-in first-out queue 10 stores the target data when the latest read address is not smaller than the current read address where the read pointer in the first-in first-out queue 10 is located and the difference between the latest read address and the current read address is not greater than the length of the first-in first-out queue 10; or, when the latest read address is smaller than the current read address and the data at the position of the target data in the first-in first-out queue 10 has not been updated, to determine that the target data is stored in the first-in first-out queue 10.
In this embodiment, the current read address is an address corresponding to data read from the fifo queue 10 to the processor 2 based on the current read flow. In the case where the latest read address is not smaller than the current read address where the read pointer is located in the fifo 10 and the difference between the latest read address and the current read address is not larger than the length of the fifo, it can be considered that the external memory 3 has written the target data into the fifo 10 based on the current data reading flow, but the fifo 10 has not read the target data to the processor 2 based on the current data reading flow.
To better understand the case where the latest read address is not smaller than the current read address where the read pointer in the first-in first-out queue 10 is located and the difference between them is not greater than the length of the first-in first-out queue 10, please refer to fig. 3, which shows a schematic diagram of a first-in first-out queue executing a read flow. As shown in fig. 3, assume that the length of the first-in first-out queue 10 is 4, the latest read address is address 0x06, and the read pointer of the first-in first-out queue 10 is located at data_0x03, the data corresponding to address 0x03, so the current read address is address 0x03. Address 0x06, the latest read address, is not smaller than address 0x03, the current read address, and the difference 3 between address 0x06 and address 0x03 is not greater than the length 4 of the first-in first-out queue 10. The write pointer is currently located at data_0x07, indicating that the external memory 3 is writing data_0x07 into the first-in first-out queue 10; since the external memory 3 writes data into the first-in first-out queue 10 in address-increasing order, the first-in first-out queue 10 must currently store the target data data_0x06 corresponding to address 0x06.
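Plugging the numbers of this example into the hit condition gives the following check (illustrative only; the assert-based form is not part of the embodiment):

```c
#include <assert.h>

int main(void)
{
    unsigned latest = 0x06, current = 0x03, fifo_len = 4;

    /* The latest read address is not smaller than the current read address,
       and the gap of 3 does not exceed the queue length of 4, so data_0x06
       must already be stored in the first-in first-out queue.              */
    assert(latest >= current);
    assert(latest - current <= fifo_len);
    return 0;
}
```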
Further, when the latest read address is smaller than the current read address, the latest read address corresponds to data that the first-in first-out queue 10 has already read out through the current data reading flow. Because the amount of data the first-in first-out queue 10 can store is limited, once it is full the entries must be updated as the external memory continues writing; to match the order in which data is written, the entries are replaced in address-increasing order. Consequently, the first-in first-out queue 10 still stores the target data only if the data at the position of the target data has not been updated.
To better understand the case where the latest read address is smaller than the current read address and the data at the position of the target data in the first-in first-out queue 10 has not been updated, fig. 4 shows a schematic diagram of a first-in first-out queue executing a read flow. As shown in fig. 4, assume that the latest read address is address 0x00, the buffer size of the first-in first-out queue 10 is 10 entries, and the read pointer of the first-in first-out queue 10 is located at data_0x03, the data corresponding to address 0x03, so the current read address is address 0x03. Since address 0x00, the latest read address, is smaller than address 0x03, the current read address, and data_0x00 corresponding to address 0x00 has not yet been updated, the first-in first-out queue 10 currently stores the target data data_0x00 corresponding to address 0x00.
In order to quickly read out the target data when the target data is stored in the first-in first-out queue 10, in one embodiment, the queue logic controller 20 is further configured to control the read pointer of the first-in first-out queue 10 to jump from the data corresponding to the current read address to the target data and read out the target data when the target data is stored in the first-in first-out queue 10.
In this embodiment, when the queue logic controller 20 receives the latest read address sent by the processor 2 and determines that the target data is stored in the first-in first-out queue 10, the read pointer of the first-in first-out queue 10 is controlled to jump from the data corresponding to the current read address to the target data and read the target data out, which saves the time of initiating a new data reading flow to read the target data and further improves the read access efficiency of the processor 2 on the data in the external memory 3.
Further, the situation in which the target data is stored in the first-in first-out queue 10 includes two different cases, which are described in detail below.
Case one: referring to fig. 5, fig. 5 is a schematic diagram illustrating a data reading process performed by the fifo queue. In the case where the latest read address is not smaller than the current read address where the read pointer is located in the fifo 10 and the difference between the latest read address and the current read address is not greater than the length of the fifo 10, it is assumed that the length of the fifo 10 is 4, the latest read address is address 0×06, the fifo 10 read pointer is located at data data_0×03 corresponding to address 0×03, and therefore the current read address is address 0×03, address 0×06 as the latest read address is not smaller than address 0×03 as the current read address, and the difference 3 between address 0×06 and address 0×03 is not greater than the length 4 of the fifo 10, and therefore the fifo 10 currently stores target data data_0×06 corresponding to address 0×06, and in order to read target data data_0×06 quickly, the read pointer is jumped from data_0×03 to data_0×06, and target data data_0×06 is read.
Case two: referring to fig. 6, fig. 6 shows a schematic diagram of a first-in first-out queue performing a read flow. As shown in fig. 6, assume that the latest read address is address 0x00 and the buffer size of the first-in first-out queue 10 is 10 entries; the read pointer of the first-in first-out queue 10 is located at data_0x03, the data corresponding to address 0x03, so the current read address is address 0x03. Since address 0x00, the latest read address, is smaller than address 0x03, the current read address, and data_0x00 corresponding to address 0x00 has not yet been updated with new data, the first-in first-out queue 10 currently stores the target data data_0x00 corresponding to address 0x00. In order to read the target data data_0x00 quickly, the read pointer jumps from data_0x03 to data_0x00 and the target data data_0x00 is read out.
When the first-in first-out queue 10 stores the target data, the queue logic controller 20 can directly control the read pointer of the first-in first-out queue 10 to jump from the data corresponding to the current read address to the target data; when the first-in first-out queue 10 does not store the target data, however, the target data cannot be read from the first-in first-out queue 10. In order to still read the target data efficiently, in one embodiment, when the first-in first-out queue 10 does not store the target data, the latest read address is compared with the current read address where the read pointer of the first-in first-out queue 10 is located. If the latest read address is smaller than the current read address, a new data reading flow needs to be initiated.
In this embodiment, when the latest read address is smaller than the current read address, the latest read address corresponds to data that the first-in first-out queue 10 has already read out through the current data reading flow. Because the amount of data the first-in first-out queue 10 can store is limited, it keeps being filled as the external memory 3 continues writing, so once it is full the entries must be updated. When the data at the position of the target data in the first-in first-out queue 10 has been updated, the target data can no longer be read from the first-in first-out queue 10; in this state, a new data reading flow for the latest read address needs to be initiated to read the target data.
For a better understanding, referring to fig. 7, fig. 7 shows a schematic diagram of the data reading principle of a first-in first-out queue; the two first-in first-out queues 10 in fig. 7 are the operating states of the same first-in first-out queue at different moments and do not represent two first-in first-out queues 10. As shown in fig. 7, assume that the latest read address is address 0x00, the buffer size of the first-in first-out queue 10 is 8 entries, and the read pointer of the first-in first-out queue 10 is located at data_0x03, the data corresponding to address 0x03, so the current read address is address 0x03. Address 0x00, the latest read address, is smaller than address 0x03, the current read address. Since the buffer size of the first-in first-out queue 10 is 8 entries, after the external memory 3 writes data_0x07, the data corresponding to address 0x07, into the first-in first-out queue 10, it next writes data_0x08, the data corresponding to address 0x08; but the first-in first-out queue 10 has no spare position for data_0x08, so data_0x08 replaces data_0x00 and is stored in the position where data_0x00 was located. The target data data_0x00 is therefore no longer stored in the first-in first-out queue 10. At this time, a new data reading flow for the latest read address needs to be initiated to read the target data data_0x00.
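The overwrite situation in this example can be expressed as a small check; the helper below is a hypothetical model that assumes entries are replaced in strict address-increasing order once the queue is full.

```c
#include <stdbool.h>
#include <stdint.h>

/* True when the slot that once held target_addr has been re-used for a newer
   address, so the target data is no longer in the first-in first-out queue.
   Assumes entries are replaced in strict address-increasing order.           */
static bool slot_overwritten(uint32_t highest_written_addr,
                             uint32_t target_addr,
                             uint32_t fifo_depth)
{
    return highest_written_addr - target_addr >= fifo_depth;
}

/* Example from fig. 7: depth 8, target address 0x00.  Once data_0x08 has been
   written, slot_overwritten(0x08, 0x00, 8) is true, data_0x00 has been
   replaced, and a new data reading flow has to be initiated.                  */
```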
Further, in order to improve the efficiency of reading the target data in the case where the fifo queue 10 does not store the target data, in one embodiment, the queue logic controller 20 is further configured to compare the magnitudes of the first time and the second time in the case where the latest read address is greater than the current read address; if the first time is smaller than the second time, a new data reading flow needs to be initiated; if the first time is not less than the second time, a new data reading flow does not need to be initiated.
The first time is the time required by the new data reading flow to read the target data, and the second time is the time required by the current data reading flow to read the target data.
In this embodiment, the first time is compared with the second time, and the data reading flow with the smaller time is selected to read the target data. Since the flow with the shorter time is always chosen, the efficiency of the processor 2 in reading the target data from the external memory 3 is improved.
For a better understanding, please refer to fig. 8a, which shows a schematic diagram of the data reading principle of a first-in first-out queue at the current time. If the current time is T5, the queue logic controller 20 receives address 0x09, the latest read address sent by the processor 2; the read pointer of the first-in first-out queue 10 currently points to data_0x03, the data corresponding to address 0x03, and the target data data_0x09 corresponding to the latest read address 0x09 has not yet been written into the first-in first-out queue 10. Fig. 8b shows the same first-in first-out queue in two operating states at different moments (the two queues drawn do not represent two separate first-in first-out queues 10): the queue on the left represents the data reading state of the first-in first-out queue 10 at the current time T5, and the queue on the right represents the state in which the target data data_0x09 is read according to the current data reading flow. It can be seen that, by continuing the current data reading flow, the target data data_0x09 can be read out to the processor 2 at time T6. Fig. 8c likewise shows the same first-in first-out queue at two moments: the queue on the left represents the data reading state at the current time T5, and the queue on the right represents the state in which a new data reading flow is restarted to read the target data data_0x09; in that case the target data data_0x09 can only be read out to the processor 2 at time T7. Since time T7 is later than time T6, reading the target data data_0x09 through the current data reading flow is more efficient than restarting a new data reading flow, so the target data data_0x09 is read based on the current data reading flow.
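A sketch of this time-based selection in C; the latency model below (a per-word transfer cost for the current flow and a fixed restart cost for a new flow) is an assumption for illustration, not a figure taken from the embodiment.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative latency model (assumed, not taken from the embodiment):
   the current flow still has to stream every word between the current read
   address and the latest read address, while a new flow pays a fixed
   restart cost plus one word of transfer.                                  */
static uint32_t second_time(uint32_t latest_addr, uint32_t current_addr,
                            uint32_t cycles_per_word)
{
    return (latest_addr - current_addr) * cycles_per_word;
}

static uint32_t first_time(uint32_t restart_cycles, uint32_t cycles_per_word)
{
    return restart_cycles + cycles_per_word;
}

/* A new data reading flow is initiated only when it is strictly faster. */
static bool should_restart(uint32_t t_first, uint32_t t_second)
{
    return t_first < t_second;
}
```

Under such a model, the situation in fig. 8 corresponds to a first time that ends at T7 and a second time that ends at T6, so should_restart returns false and the current data reading flow is kept.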
In one embodiment, the queue logic controller 20 is further configured to wait for the current data reading flow to read the target data and output if the first time is not less than the second time.
In this embodiment, for a better understanding, referring to fig. 9, fig. 9 shows a schematic diagram of the data reading principle of a first-in first-out queue. If the current time is T5, the queue logic controller 20 receives address 0x09, the latest read address sent by the processor 2; the read pointer of the first-in first-out queue 10 currently points to data_0x03, the data corresponding to address 0x03, and the target data data_0x09 corresponding to the latest read address 0x09 has not yet been written into the first-in first-out queue 10. If the target data data_0x09 is to be obtained by waiting for the current data reading flow, the controller first waits for the external memory 3 to write the target data data_0x09 into the first-in first-out queue 10; once data_0x09 has been written into the first-in first-out queue 10, the queue logic controller 20 controls the read pointer of the first-in first-out queue 10 to point to the target data data_0x09 and reads it out to the processor 2.
In order to enhance the efficiency of reading out the target data corresponding to the latest read address to the processor 2 in case the latest read address sent by the processor 2 is received by the queue logic controller 20, in one embodiment, the queue logic controller 20 is further configured to determine that the target data is stored in the first-in first-out queue 10 in case the latest read address is not smaller than the current read address where the read pointer is located in the first-in first-out queue 10, and the difference between the latest read address and the current read address is not greater than the length of the first-in first-out queue 10; or in the case that the latest read address is smaller than the current read address and the data of the position of the target data in the first-in first-out queue 10 is not updated, it is determined that the new data reading flow does not need to be initiated.
In this embodiment, when the latest read address is not smaller than the current read address where the read pointer in the first-in first-out queue 10 is located and the difference between the latest read address and the current read address is not greater than the length of the first-in first-out queue 10, or when the latest read address is smaller than the current read address and the data at the position of the target data in the first-in first-out queue 10 has not been updated, the external memory 3 has already written the target data into the first-in first-out queue 10 through the current data reading flow. Initiating a new data reading flow for the latest read address would therefore waste time, so no new data reading flow needs to be initiated and the target data is obtained directly from the first-in first-out queue 10, which improves the efficiency of the processor 2 in reading the target data.
Further, as a possible implementation, the queue logic controller 20 is further configured to compare the magnitudes of the first time and the second time if the latest read address is greater than the current read address where the read pointer is located in the first-in-first-out queue; if the first time is less than the second time, a new data reading flow needs to be initiated; if the first time is not less than the second time, a new data reading flow does not need to be initiated.
The first time is the time required by the new data reading flow to read the target data, and the second time is the time required by the current data reading flow to read the target data.
In this embodiment, the first time is compared with the second time, and the data reading flow with the smaller time is selected to read the target data. Since the flow with the shorter time is always chosen, the efficiency of the processor 2 in reading the target data from the external memory 3 is improved.
Since the controller 1 is in XIP mode and the processor 2 needs to make read access to the external memory 3 via the bus interface, the controller 1 further includes the bus interface 30, the instruction address logic converter 40, the instruction address send queue 50, and the data sort-back logic controller 60. Bus interfaces include, but are not limited to, an AHB bus interface, an AXI bus interface, an APB bus interface, and the like.
To better understand the above data reading flow, in one possible implementation, as shown in fig. 10, when the latest read address is received, the controller first determines whether the target data is stored in the first-in first-out queue 10: when the latest read address is not less than the current read address where the read pointer in the first-in first-out queue is located and the difference between the latest read address and the current read address is not greater than the length of the first-in first-out queue, or when the latest read address is less than the current read address and the data at the position of the target data in the first-in first-out queue has not been updated, it is determined that the target data is stored in the first-in first-out queue 10.
In the case that it is determined that the target data is stored in the fifo 10, the read pointer of the fifo 10 is controlled to jump from the data corresponding to the current read address to the target data and read the target data.
Further, when the target data is not stored in the first-in first-out queue 10, the latest read address is compared with the current read address where the read pointer of the first-in first-out queue 10 is located. When the latest read address is greater than the current read address, the first time is compared with the second time: if the first time is less than the second time, a new data reading flow is initiated; if the first time is not less than the second time, the current data reading flow is waited for to read the target data. When the latest read address is smaller than the current read address, a new data reading flow is initiated.
In this embodiment, for better understanding, please refer to fig. 11, which shows a schematic structural diagram of a controller according to an embodiment of the present application. When the processor 2 needs to read the data corresponding to the latest read address from the external memory 3, the processor 2 sends a read instruction containing the read address to the bus interface 30 in the controller 1 through the bus. The bus interface 30 parses the read address using the instruction address conversion logic in the instruction address logic converter 40 and sends the parsed read address to the instruction address send queue 50. The instruction address send queue 50 sends the read instruction containing the read address to the external memory 3 through the si interface (serial input interface). Based on the read address in the read instruction, the external memory 3 continuously writes data into the first-in first-out queue 10 through the so interface (serial output interface) in increasing address order until the queue logic controller 20 pulls up the chip select signal sent to the external memory 3, at which point the external memory 3 stops feeding data back to the so interface. The first-in first-out queue 10 outputs the written data to the data sort-back logic controller 60 in first-in first-out order, and the data sort-back logic controller 60 sorts the data into a format readable by the processor 2 and feeds it back to the processor 2 via the bus interface 30.
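The same transaction can be summarized in a C sketch; every function below is a placeholder standing in for one of the numbered blocks of fig. 11, not an API defined by the patent.

```c
#include <stdbool.h>
#include <stdint.h>

/* Placeholders for the numbered blocks described for fig. 11. */
extern uint32_t translate_instruction_address(uint32_t bus_addr); /* converter 40  */
extern void     enqueue_instruction_address(uint32_t flash_addr); /* send queue 50 */
extern void     send_read_command_on_si(uint32_t flash_addr);     /* si interface  */
extern bool     chip_select_raised(void);                         /* controller 20 */
extern uint8_t  read_so_byte(void);                               /* so interface  */
extern void     fifo_push(uint8_t byte);                          /* queue 10      */
extern bool     fifo_empty(void);
extern uint8_t  fifo_pop(void);
extern uint32_t sort_back(uint8_t byte);                          /* controller 60 */
extern void     bus_return(uint32_t word);                        /* interface 30  */

/* One read transaction, from the processor's read instruction to the data
 * being returned over the bus, following the order described for fig. 11. */
void handle_read_instruction(uint32_t bus_read_addr)
{
    uint32_t flash_addr = translate_instruction_address(bus_read_addr);

    enqueue_instruction_address(flash_addr);
    send_read_command_on_si(flash_addr);

    /* The external memory streams data on so in increasing address order
     * until the chip select signal is pulled up. */
    while (!chip_select_raised())
        fifo_push(read_so_byte());

    /* Data leaves the queue in FIFO order, is sorted back into a format the
     * processor can read, and is fed back over the bus interface. */
    while (!fifo_empty())
        bus_return(sort_back(fifo_pop()));
}
```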
Referring to fig. 12, fig. 12 is a schematic flow chart of a data reading method provided by the present application. The specific flow of the data reading method in fig. 12 is described in detail below.
Step S101: the latest read address sent from the processor is received.
Wherein the latest read address is used for reading target data in the external memory.
Step S102: It is determined whether a new data reading flow for the latest read address needs to be initiated.
In one embodiment, whether a new data reading flow for the latest read address needs to be initiated may be determined by judging whether the first-in first-out queue stores the target data.
If the first-in first-out queue stores the target data, a new data reading flow does not need to be initiated.
As a possible implementation manner, whether the first-in first-out queue stores the target data may be determined as follows: when the latest read address is not less than the current read address where the read pointer of the first-in first-out queue is located and the difference between the latest read address and the current read address is not greater than the length of the first-in first-out queue, it is determined that the first-in first-out queue stores the target data; or, when the latest read address is less than the current read address and the data at the location of the target data in the first-in first-out queue has not been updated, it is determined that the first-in first-out queue stores the target data.
In the case that the first-in first-out queue does not store the target data, the data reading method further includes: comparing the latest read address with the current read address.
If the latest read address is smaller than the current read address, a new data reading process needs to be initiated.
After the latest read address is compared with the current read address, the data reading method further comprises: comparing the first time with the second time; if the first time is less than the second time, a new data reading flow needs to be initiated; if the first time is not less than the second time, a new data reading flow does not need to be initiated.
The first time is the time required by the new data reading flow to read the target data, and the second time is the time required by the current data reading flow to read the target data.
In still another embodiment, whether a new data reading flow for the latest read address needs to be initiated may be determined as follows: when the latest read address is not less than the current read address where the read pointer of the first-in first-out queue is located and the difference between the latest read address and the current read address is not greater than the length of the first-in first-out queue, it is determined that the first-in first-out queue stores the target data; or, when the latest read address is less than the current read address and the data at the location of the target data in the first-in first-out queue has not been updated, it is determined that a new data reading flow does not need to be initiated.
In still another embodiment, whether a new data reading flow for the latest read address needs to be initiated may be determined as follows: when the latest read address is greater than the current read address where the read pointer of the first-in first-out queue is located and the difference between the latest read address and the current read address is greater than the length of the first-in first-out queue, the first time is compared with the second time; if the first time is less than the second time, a new data reading flow needs to be initiated; if the first time is not less than the second time, a new data reading flow does not need to be initiated.
The first time is the time required by the new data reading flow to read the target data, and the second time is the time required by the current data reading flow to read the target data.
Step S103: In the case that a new data reading flow does not need to be initiated, the target data is acquired based on the data read by the current data reading flow.
In one embodiment, the target data may be acquired based on the data read by the current data reading flow as follows: when the target data is stored in the first-in first-out queue, the read pointer of the first-in first-out queue is controlled to jump from the data corresponding to the current read address to the target data, and the target data is read out.
In still another embodiment, the target data may be acquired based on the data read by the current data reading flow as follows: when the first time is not less than the second time, the current data reading flow is waited for to read the target data, and the target data is then output.
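Putting steps S101 to S103 together, a compact sketch of the decision path might look as follows; target_in_fifo(), restart_is_faster() and the other helpers are assumptions standing in for the checks and actions described above, not functions defined by the patent.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helpers standing in for the checks and actions described above. */
extern bool     target_in_fifo(uint32_t latest_read_addr);       /* step S102 check  */
extern uint32_t current_read_pointer_addr(void);
extern bool     restart_is_faster(uint32_t latest_read_addr);    /* first < second   */
extern void     start_new_read_flow(uint32_t latest_read_addr);
extern void     jump_read_pointer_to(uint32_t latest_read_addr); /* step S103, hit   */
extern void     wait_for_current_flow(uint32_t latest_read_addr);

/* Handle one latest read address received from the processor (step S101). */
void on_latest_read_address(uint32_t latest_read_addr)
{
    if (target_in_fifo(latest_read_addr)) {            /* no new flow needed        */
        jump_read_pointer_to(latest_read_addr);
        return;
    }
    if (latest_read_addr < current_read_pointer_addr()) {
        start_new_read_flow(latest_read_addr);          /* behind the read pointer   */
        return;
    }
    if (restart_is_faster(latest_read_addr)) {          /* first time < second time  */
        start_new_read_flow(latest_read_addr);
    } else {
        wait_for_current_flow(latest_read_addr);        /* current flow will deliver */
    }
}
```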
As shown in fig. 13, which is a block diagram of a chip 100 according to an embodiment of the present application, the chip 100 includes: a processor 110, a memory 120, and a controller 130.
The processor 110 is configured to send the latest read address to the controller 130.
The controller 130 is configured to receive the latest read address sent by the processor; determine whether a new data reading flow for the latest read address needs to be initiated; and, in the case that a new data reading flow does not need to be initiated, acquire the target data based on the data read by the current data reading flow. The latest read address is used for reading the target data in the external memory. The new data reading flow includes: stopping the current data reading flow, sequentially reading data from the memory in increasing address order starting from the latest read address, and storing the data in the first-in first-out queue.
The memory 120 is configured to send data corresponding to the latest read address to the controller 130.
Optionally, the controller 130 is specifically configured to determine whether the first-in-first-out queue stores the target data; if the first-in first-out queue stores the target data, a new data reading flow does not need to be initiated.
Optionally, the controller 130 is specifically configured to determine that the fifo queue stores the target data when the latest read address is not less than the current read address where the read pointer is located in the fifo queue, and the difference between the latest read address and the current read address is not greater than the length of the fifo queue; or under the condition that the latest read address is smaller than the current read address and the data of the position of the target data in the first-in first-out queue is not updated, determining that the target data is stored in the first-in first-out queue.
Optionally, the controller 130 is further configured to compare the size of the latest read address with the current read address in case the first-in first-out queue does not store the target data. If the latest read address is smaller than the current read address, a new data reading process needs to be initiated.
Optionally, the controller 130 is further configured to compare the magnitudes of the first time and the second time; if the first time is smaller than the second time, a new data reading flow needs to be initiated; if the first time is not less than the second time, a new data reading flow does not need to be initiated. The first time is the time required by the new data reading flow to read the target data, and the second time is the time required by the current data reading flow to read the target data.
Optionally, the controller 130 is specifically configured to control the read pointer of the first-in-first-out queue to jump from the data corresponding to the current read address to the target data and read out the target data, in the case that the target data is stored in the first-in-first-out queue.
Optionally, the controller 130 is specifically configured to wait for the current data reading procedure to read the target data and output the target data if the first time is not less than the second time.
Optionally, the controller 130 is specifically configured to determine that the fifo queue stores the target data when the latest read address is not less than the current read address where the read pointer is located in the fifo queue, and the difference between the latest read address and the current read address is not greater than the length of the fifo queue; or under the condition that the latest read address is smaller than the current read address and the data of the position of the target data in the first-in first-out queue is not updated, determining that the new data reading flow does not need to be initiated.
Optionally, the controller 130 is specifically configured to compare the magnitudes of the first time and the second time when the latest read address is greater than the current read address where the read pointer is located in the fifo queue, and the difference between the latest read address and the current read address is greater than the length of the fifo queue; if the first time is smaller than the second time, a new data reading flow needs to be initiated; if the first time is not less than the second time, a new data reading process does not need to be initiated; the first time is the time required by the new data reading flow to read the target data, and the second time is the time required by the current data reading flow to read the target data.
The chip 100 may be a logic chip, a sensor chip, a controller chip, or the like. The specific structure of the chip is well known to those skilled in the art and is not described in detail herein.
The processor 110 may be an integrated circuit chip with signal processing capability. The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), a microprocessor, and the like; it may also be a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or executed by such a processor. Alternatively, the processor 110 may be any conventional processor or the like.
The chip 100 according to the embodiment of the present application has the same implementation principle and technical effects as the foregoing method embodiment; for brevity, reference may be made to the corresponding content of the foregoing method embodiment for matters not mentioned in this embodiment.
As shown in fig. 14, which is a schematic structural diagram of an electronic device 200 according to an embodiment of the present application, the electronic device 200 includes: a transceiver 210, a memory 220, a communication bus 230, a processor 240, and a controller 250.
The transceiver 210, the memory 220, and the processor 240 are electrically connected to each other, directly or indirectly, to realize data transmission or interaction. For example, these components may be electrically connected to each other via one or more communication buses 230 or signal lines. The transceiver 210 is configured to transmit and receive data. The memory 220 is used to store a computer program. The processor 240 is configured to execute the executable modules stored in the memory 220, such as the computer programs stored in the memory.
The memory 220 may be, but is not limited to, a random access memory (Random Access Memory, RAM), a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable Read-Only Memory, PROM), an erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), an electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), or the like, or the external memory 3.
The processor 240 may be an integrated circuit chip with signal processing capability. The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), a microprocessor, and the like; it may also be a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or executed by such a processor. The processor 240 may also be the processor 2 described above.
The controller 250 may be a QSPI controller or the controller 1 described above.
The electronic device 200 includes, but is not limited to, a mobile phone, a computer, a tablet, a server, etc.
It should be noted that the embodiments in this specification are described in a progressive manner, each embodiment focuses on its differences from the other embodiments, and identical or similar parts among the embodiments may be referred to one another.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, methods, and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The foregoing is merely illustrative of the present application and is not intended to limit it; variations or substitutions that would readily occur to any person skilled in the art within the scope disclosed by the present application shall fall within its protection scope. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

1. A controller, the controller comprising:
A first-in first-out queue configured to store data from an external memory;
A queue logic controller configured to receive a latest read address sent by a processor, wherein the latest read address is used for reading target data in the external memory; judging whether a new data reading flow aiming at the latest reading address needs to be initiated or not; acquiring the target data based on the data read by the current data reading flow under the condition that a new data reading flow does not need to be initiated;
the new data reading flow comprises the following steps: stopping the current data reading flow, sequentially reading data from a memory by taking the latest read address as a starting point according to the increasing order of the addresses, and storing the data in the first-in first-out queue.
2. The controller of claim 1, wherein the queue logic controller is further configured to determine whether the first-in first-out queue stores the target data, wherein a new data reading flow does not need to be initiated if the first-in first-out queue stores the target data.
3. The controller of claim 2, wherein the queue logic controller is further configured to:
Determining that the first-in first-out queue stores the target data under the condition that the latest read address is not smaller than the current read address of the read pointer in the first-in first-out queue and the difference value between the latest read address and the current read address is not larger than the length of the first-in first-out queue; or under the condition that the latest read address is smaller than the current read address and the data of the position of the target data in the first-in first-out queue is not updated, determining that the target data is stored in the first-in first-out queue.
4. The controller of claim 2, wherein the queue logic controller is further configured to compare the latest read address to a current read address where a read pointer is located in the first-in first-out queue if the first-in first-out queue does not store the target data; if the latest read address is smaller than the current read address, a new data reading flow needs to be initiated.
5. The controller of claim 4, wherein the queue logic controller is further configured to compare the magnitudes of a first time and a second time if the most recent read address is greater than the current read address; if the first time is less than the second time, a new data reading process needs to be initiated; if the first time is not less than the second time, a new data reading process does not need to be initiated;
The first time is the time required by the new data reading flow to read the target data, and the second time is the time required by the current data reading flow to read the target data.
6. The controller according to any one of claims 1-5, wherein the queue logic controller is further configured to control a read pointer of the first-in first-out queue to jump from data corresponding to the current read address to the target data and read out the target data, if the first-in first-out queue stores the target data.
7. The controller of claim 5, wherein the queue logic controller is further configured to wait for the current data read flow to read and output the target data if the first time is not less than the second time.
8. The controller of claim 1, wherein the queue logic controller is further configured to:
Determining that the first-in first-out queue stores the target data under the condition that the latest read address is not smaller than the current read address of the read pointer in the first-in first-out queue and the difference value between the latest read address and the current read address is not larger than the length of the first-in first-out queue; or under the condition that the latest read address is smaller than the current read address and the data of the position of the target data in the first-in first-out queue is not updated, determining that a new data reading flow does not need to be initiated.
9. The controller of claim 1, wherein the queue logic controller is further configured to:
Comparing the first time and the second time under the condition that the latest read address is larger than the current read address of the read pointer in the first-in first-out queue; if the first time is less than the second time, a new data reading process needs to be initiated; if the first time is not less than the second time, a new data reading process does not need to be initiated;
The first time is the time required by the new data reading flow to read the target data, and the second time is the time required by the current data reading flow to read the target data.
10. A chip, comprising:
A processor, a memory and a controller according to any one of claims 1-9, the processor being connected to the controller, the controller being connected to the memory.
11. An electronic device, comprising:
a processor, a memory and a controller as claimed in any one of claims 1 to 9.
12. A method of reading data, the method comprising:
receiving a latest read address sent by a processor, wherein the latest read address is used for reading target data in an external memory;
Judging whether a new data reading flow aiming at the latest reading address needs to be initiated or not;
Acquiring the target data based on the data read by the current data reading flow under the condition that a new data reading flow does not need to be initiated;
the new data reading flow comprises the following steps: stopping the current data reading flow, sequentially reading data from a memory by taking the latest read address as a starting point according to the increasing order of the addresses, and storing the data into a first-in first-out queue.
13. The method of claim 12, wherein determining whether a new data read flow for the most recent read address needs to be initiated comprises:
Judging whether the first-in first-out queue stores the target data or not; if the target data is stored in the first-in first-out queue, a new data reading flow does not need to be initiated.
14. The method of claim 12, wherein determining whether the first-in first-out queue stores the target data comprises:
Determining that the first-in first-out queue stores the target data under the condition that the latest read address is not smaller than the current read address of the read pointer in the first-in first-out queue and the difference value between the latest read address and the current read address is not larger than the length of the first-in first-out queue; or under the condition that the latest read address is smaller than the current read address and the data of the position of the target data in the first-in first-out queue is not updated, determining that the target data is stored in the first-in first-out queue.
15. The method of claim 13, wherein after determining whether the first-in first-out queue stores the target data, the method further comprises:
Comparing the latest read address with the current read address under the condition that the target data is not stored in the first-in first-out queue; if the latest read address is smaller than the current read address, a new data reading process needs to be initiated.
16. The method of claim 15, wherein after comparing the size of the most recent read address with the current read address, the method further comprises:
comparing the magnitudes of the first time and the second time;
If the first time is less than the second time, a new data reading process needs to be initiated; if the first time is not less than the second time, a new data reading process does not need to be initiated;
The first time is the time required by the new data reading flow to read the target data, and the second time is the time required by the current data reading flow to read the target data.
17. The method of claim 12, wherein obtaining the target data based on the data read by the current data read flow comprises:
And under the condition that the target data is stored in the first-in first-out queue, controlling a read pointer of the first-in first-out queue to jump from the data corresponding to the current read address to the target data, and reading the target data.
18. The method of claim 16, wherein obtaining the target data based on the data read by the current data read flow comprises:
and under the condition that the first time is not less than the second time, waiting for the current data reading flow to read the target data, and outputting the target data.
19. The method of claim 12, wherein determining whether a new data read flow for the most recent read address needs to be initiated comprises:
Determining that the first-in first-out queue stores the target data under the condition that the latest read address is not smaller than the current read address of the read pointer in the first-in first-out queue and the difference value between the latest read address and the current read address is not larger than the length of the first-in first-out queue; or under the condition that the latest read address is smaller than the current read address and the data of the position of the target data in the first-in first-out queue is not updated, determining that a new data reading flow does not need to be initiated.
20. The method of claim 12, wherein determining whether a new data read flow for the most recent read address needs to be initiated comprises:
Comparing the first time and the second time when the latest read address is larger than the current read address of the read pointer in the first-in first-out queue and the difference between the latest read address and the current read address is larger than the length of the first-in first-out queue; if the first time is less than the second time, a new data reading process needs to be initiated; if the first time is not less than the second time, a new data reading process does not need to be initiated;
The first time is the time required by the new data reading flow to read the target data, and the second time is the time required by the current data reading flow to read the target data.
CN202311679266.9A 2023-12-08 2023-12-08 Controller, chip and electronic equipment namely data reading method Pending CN117931700A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311679266.9A CN117931700A (en) 2023-12-08 2023-12-08 Controller, chip and electronic equipment namely data reading method

Publications (1)

Publication Number Publication Date
CN117931700A true CN117931700A (en) 2024-04-26

Family

ID=90754533

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311679266.9A Pending CN117931700A (en) 2023-12-08 2023-12-08 Controller, chip and electronic equipment namely data reading method

Country Status (1)

Country Link
CN (1) CN117931700A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118502323A (en) * 2024-07-16 2024-08-16 杭州康吉森自动化科技有限公司 Industrial Ethernet data transmission method and FPGA



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination