CN113553009B - Data reading method, data writing method and data reading and writing method - Google Patents

Data reading method, data writing method and data reading and writing method

Info

Publication number
CN113553009B
Authority
CN
China
Prior art keywords
data
address
reading
data reading
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110850095.6A
Other languages
Chinese (zh)
Other versions
CN113553009A (en)
Inventor
周光亮
王洪
曾纪国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Goke Microelectronics Co Ltd
Original Assignee
Hunan Goke Microelectronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Goke Microelectronics Co Ltd
Priority to CN202110850095.6A
Publication of CN113553009A
Application granted
Publication of CN113553009B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601: Interfaces specially adapted for storage systems
    • G06F3/0602: Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061: Improving I/O performance
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00: Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14: Handling requests for interconnection or transfer
    • G06F13/20: Handling requests for interconnection or transfer for access to input/output bus
    • G06F13/28: Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601: Interfaces specially adapted for storage systems
    • G06F3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655: Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0659: Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601: Interfaces specially adapted for storage systems
    • G06F3/0668: Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671: In-line storage system
    • G06F3/0673: Single storage device
    • G06F3/0679: Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Memory System (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the invention discloses a data reading method, a data writing method and a data reading and writing method. In the data reading method, the address space of a memory is first divided into a predetermined number of storage areas, and a one-to-one corresponding data processing queue is set for each storage area; all data reading addresses are then obtained from a data reading request, and each data reading address is stored in the data processing queue of its corresponding storage area. Data is read in the storage area corresponding to each data processing queue according to each data reading address in that queue. After reading is finished, that is, after every data processing queue has completed its data reading, the data in all the data reading channels are merged, thereby completing the data reading request. The reading efficiency of the memory is thus improved.

Description

Data reading method, data writing method and data reading and writing method
Technical Field
The present invention relates to the field of data processing, and in particular, to a data reading method, a data writing method, and a data reading and writing method.
Background
In the current Internet era, scenarios such as video encoding and decoding, image processing, and AI (artificial intelligence) processing require computers to handle a large number of data interaction requests at every moment. When the central processing unit handles such data interaction requests, it generally needs to access a static random access memory.
During such accesses, the bit width of the static random access memory is fixed. If the accesses are not specially handled, high-bit-width data accesses, as well as accesses involving multiple channels and multiple types of interactive data, must be queued and served according to multiple priorities before they can be completed, so the processing efficiency of the static random access memory is low.
Disclosure of Invention
In view of this, the present invention provides a data reading method, a data writing method and a data reading and writing method, which are used to improve the data reading efficiency of the static random access memory.
In a first aspect, an embodiment of the present invention provides a data reading method, including:
dividing the address space of the static random access memory into a predetermined number of storage areas, and setting a one-to-one corresponding data processing queue for each storage area, wherein the space size of each storage area is the same;
acquiring all data reading addresses from a data reading request, and storing each data reading address to a data processing queue of a corresponding storage area according to a preset field of each data reading address, wherein the data reading request comprises at least one data reading channel, and each data reading channel comprises at least one data reading address;
reading data, in the storage area corresponding to each data processing queue, according to each data reading address in the data processing queue, and storing the read data to the corresponding data reading channel;
and after each data processing queue finishes data reading, merging the data in all the data reading channels to finish the data reading request.
Optionally, in an implementation manner provided by the embodiment of the present invention, after the step of acquiring all data read addresses from the data read request, before the step of storing each data read address in a data processing queue of a corresponding storage area according to a predetermined field of each data read address, the method further includes:
determining a channel mark field of each data reading address, wherein the channel mark field represents the position of the data reading address in the data reading channel;
storing the read data to a corresponding data reading channel, comprising:
and storing the read data to the corresponding data reading channel according to the channel mark field.
Further, in an implementation manner provided by the embodiment of the present invention, after the step of acquiring all data read addresses from the data read request, before the step of storing each data read address into the data processing queue of the corresponding storage area according to the predetermined field of each data read address, the method further includes:
determining the category of the data reading request, wherein the category of the data reading request comprises: a direct memory read request, a convolution kernel read request, a convolution layer read feature map request and a pooling layer read feature map request;
storing the read data to a corresponding data reading channel, comprising:
and storing the read data to a data reading channel of the corresponding data reading request according to the channel mark field of the data reading address and the type of the data reading request.
Optionally, in an implementation manner provided by the embodiment of the present invention, the predetermined number is 64;
storing each data reading address to a data processing queue of a corresponding storage area according to a predetermined field of each data reading address, comprising:
and storing each data reading address into a data processing queue of a corresponding storage area according to the 3rd bit to the 8th bit of each data reading address.
In a second aspect, an embodiment of the present invention provides a data writing method, including:
dividing the address space of the static random access memory into a predetermined number of storage areas, and setting a one-to-one corresponding data processing queue for each storage area, wherein the space size of each storage area is the same;
acquiring all data writing addresses from the data writing request, and storing each data writing address to a data processing queue of a corresponding storage area according to a preset field of each data writing address;
and writing the data corresponding to each data writing address, in the storage area corresponding to each data processing queue, according to each data writing address in the data processing queue, to finish the data writing request.
Optionally, in an implementation manner provided by the embodiment of the present invention, the predetermined number is 64;
the data processing queue for storing each data writing address to the corresponding storage area according to the preset field of each data reading address and the preset field of each data writing address comprises:
and storing each data writing address into a data processing queue of a corresponding storage area according to the 3 rd bit to the 8 th bit of each data reading address.
In a third aspect, an embodiment of the present invention provides a data reading and writing method, including:
dividing the address space of the static random access memory into a predetermined number of storage areas, and setting a one-to-one corresponding data processing queue for each storage area, wherein the space size of each storage area is the same;
when the data reading and writing request comprises a data reading request, acquiring all data reading addresses from the data reading request, and storing each data reading address to a data processing queue of a corresponding storage area according to a preset field of each data reading address, wherein the data reading request comprises at least one data reading channel, and each data reading channel comprises at least one data reading address;
reading data, in the storage area corresponding to each data processing queue, according to each data reading address in the data processing queue, and storing the read data to the corresponding data reading channel;
and after each data processing queue finishes data reading, merging the data in all the data reading channels to finish the data reading request.
When the data read-write request comprises a data write-in request, acquiring all data write-in addresses from the data write-in request, and storing each data write-in address to a data processing queue of a corresponding storage area according to a preset field of each data write-in address;
and writing data corresponding to the data writing address in the storage area corresponding to each data processing queue according to each data writing address in the data processing queue to finish the data writing request.
Optionally, in an implementation manner provided by the embodiment of the present invention, the method further includes:
when at least two data read-write requests need to read and/or write data in the same storage area, all the data read-write requests are completed according to a preset data read priority sequence.
In a fourth aspect, an embodiment of the present invention provides a data reading apparatus, including:
the first dividing module is used for dividing the address space of the static random access memory into a preset number of storage areas and setting one-to-one corresponding data processing queues for each storage area, wherein the space size of each storage area is the same;
the first storage module is used for acquiring all data reading addresses from a data reading request and storing each data reading address to a data processing queue of a corresponding storage area according to a preset field of each data reading address, wherein the data reading request comprises at least one data reading channel, and each data reading channel comprises at least one data reading address;
the first reading module is used for reading data according to each data reading address in the data processing queue and storing the read data to a corresponding data reading channel;
and the first merging module is used for merging the data in all the data reading channels after each data processing queue finishes data reading, so as to finish the data reading request.
In a fifth aspect, an embodiment of the present invention provides a data writing apparatus, including:
the second division module is used for dividing the address space of the static random access memory into a preset number of storage areas and setting one-to-one corresponding data processing queues for each storage area, wherein the space size of each storage area is the same;
the second storage module is used for acquiring all data writing addresses from the data writing request and storing each data writing address to a data processing queue of a corresponding storage area according to a preset field of each data writing address;
and the first writing module is used for writing data corresponding to the data writing address according to each data writing address in the data processing queue to complete the data writing request.
In a sixth aspect, an embodiment of the present invention provides a data reading and writing apparatus, including:
the third dividing module is used for dividing the address space of the static random access memory into a preset number of storage areas and setting a one-to-one corresponding data processing queue for each storage area, wherein the space size of each storage area is the same;
the third storage module is used for acquiring all data reading addresses from the data reading request when the data reading and writing request comprises the data reading request, and storing each data reading address to a data processing queue of a corresponding storage area according to a preset field of each data reading address, wherein the data reading request comprises at least one data reading channel, and each data reading channel comprises at least one data reading address;
the second reading module is used for reading data according to each data reading address in the data processing queue and storing the read data to a corresponding data reading channel;
and the second merging module is used for merging the data in all the data reading channels after each data processing queue finishes data reading, so as to finish the data reading request.
The fourth storage module is used for acquiring all data writing addresses from the data writing request when the data reading and writing request comprises the data writing request, and storing each data writing address to a data processing queue of a corresponding storage area according to a preset field of each data writing address;
and the second writing module is used for writing data corresponding to the data writing address according to each data writing address in the data processing queue in the storage area corresponding to each data processing queue, and completing the data writing request.
In a seventh aspect, an embodiment of the present invention provides a computer device, including a memory and a processor, where the memory stores a computer program, and the computer program, when running on the processor, executes the data reading method according to any one of the first aspect, or the data writing method according to any one of the second aspect, or the data reading and writing method according to any one of the third aspect.
In an eighth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed on a processor, performs a data reading method according to any one of the first aspect, or a data writing method according to any one of the second aspect, or a data reading and writing method according to any one of the third aspect.
The data reading method provided by the embodiment of the invention comprises the steps of first dividing the address space of a memory into a predetermined number of storage areas and setting a one-to-one corresponding data processing queue for each storage area, then acquiring all data reading addresses from a data reading request and storing each data reading address into the data processing queue of its corresponding storage area; reading data in the storage area corresponding to each data processing queue according to each data reading address in that queue; and after the reading is finished, that is, after every data processing queue has completed its data reading, merging the data in all the data reading channels, thereby completing the data reading request.
Therefore, when the memory needs to process the data reading addresses of multiple data reading channels of a data reading request at once, the data reading addresses can be distributed to the data processing queues of the corresponding storage areas, and each storage area of the memory reads data according to its data processing queue, so that multiple data reading addresses are processed at one time and the reading efficiency of the memory is improved. After all the data processing queues have read their data from the corresponding storage areas, the data are merged in the data reading channels of the data reading request to obtain the complete data; thus, when data stored at consecutive addresses needs to be read, the data at all the addresses can be read and merged at one time, further improving the data reading efficiency.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings required in the embodiments will be briefly described below, and it should be understood that the following drawings only illustrate some embodiments of the present invention, and therefore should not be considered as limiting the scope of the present invention. Like components are numbered similarly in the various figures.
Fig. 1 is a schematic flow chart illustrating a data reading method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart illustrating a data writing method according to an embodiment of the present invention;
fig. 3 is a schematic flow chart illustrating a data reading and writing method according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a data reading apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram illustrating a data writing apparatus according to an embodiment of the present invention;
fig. 6 shows a schematic structural diagram of a data reading and writing apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Hereinafter, the terms "including", "having", and their derivatives, as used in various embodiments of the present invention, are only intended to indicate specific features, numbers, steps, operations, elements, components, or combinations of the foregoing, and should not be construed as excluding the existence of, or the possibility of adding, one or more other features, numbers, steps, operations, elements, components, or combinations of the foregoing.
Furthermore, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which various embodiments of the present invention belong. The terms (such as those defined in commonly used dictionaries) should be interpreted as having a meaning that is consistent with their contextual meaning in the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein in various embodiments of the present invention.
Example 1
Referring to fig. 1, fig. 1 shows a schematic flow chart of a data reading method provided by an embodiment of the present invention, where the data reading method provided by the embodiment of the present invention includes:
s110, dividing the address space of the static random access memory into a predetermined number of storage areas, and setting a one-to-one corresponding data processing queue for each storage area, wherein the space size of each storage area is the same.
It can be understood that dividing a static random access memory (SRAM) into a plurality of storage regions means dividing the complete SRAM into a plurality of small SRAMs, i.e., a plurality of banks. If every bank can perform data reading/writing at the same time, the data processing efficiency, i.e., the effective bit width, of the complete SRAM increases substantially. In the embodiment of the invention, the complete SRAM is divided into a plurality of banks, and data writing and reading are completed through each bank, so that the bit width of the whole SRAM is expanded and the data throughput is improved.
Further, in order to improve the processing efficiency of each bank, in the embodiment of the present invention, a corresponding data processing queue is set for each bank, so that the bank can read data according to a data reading address in the data processing queue, or write data into the bank through a data writing address in the data processing queue, thereby further improving the data processing efficiency of the SRAM.
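By way of a non-limiting illustration only, the banks and their one-to-one data processing queues might be modelled in C as follows; the names and values (BANK_COUNT, QUEUE_CAPACITY, BankQueue, queue_push) are assumptions made for this sketch and do not appear in the patent:

#include <stdint.h>
#include <stddef.h>

#define BANK_COUNT     64    /* assumed predetermined number of storage areas (banks) */
#define QUEUE_CAPACITY 128   /* assumed depth of each data processing queue */

/* One pending access: the SRAM address plus a channel tag that later tells
 * the merge logic where the returned data belongs. */
typedef struct {
    uint32_t address;
    uint8_t  chan_tag;
} QueueEntry;

/* A simple FIFO used as the data processing queue of one bank. */
typedef struct {
    QueueEntry entries[QUEUE_CAPACITY];
    size_t     head, tail, count;
} BankQueue;

static BankQueue bank_queues[BANK_COUNT];   /* one queue per storage area */

static int queue_push(BankQueue *q, QueueEntry e)
{
    if (q->count == QUEUE_CAPACITY)
        return -1;                            /* queue full */
    q->entries[q->tail] = e;
    q->tail = (q->tail + 1) % QUEUE_CAPACITY;
    q->count++;
    return 0;
}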
And S120, acquiring all data reading addresses from the data reading request, and storing each data reading address into a data processing queue of a corresponding storage area according to a predetermined field of each data reading address, wherein the data reading request comprises at least one data reading channel, and each data reading channel comprises at least one data reading address.
It is understood that a data reading request may be a request to read a picture, a request to read an array, a request to read data of some other type, and so on, and that the request carries the first address of the data to be read/written. Because the SRAM bit width is limited, the SRAM reads one bit width of data at a time starting from that first address: first the data at the first address, then the data at the first address plus one SRAM bit width, then the first address plus two SRAM bit widths, and so on, until the reading/writing of the data is completed. Therefore, a data reading address (or data writing address) in a data reading channel (or data writing channel) of a data reading request (or data read-write request) in the embodiment of the present invention can be understood as one member of an address group into which the address of the data to be read/written has been divided: the first address, the first address plus one SRAM bit width, the first address plus two SRAM bit widths, and so on.
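A rough sketch of this address-group expansion is given below, assuming an 8-byte (64-bit) SRAM word to match the bank width used later in this embodiment; expand_request and SRAM_WORD_BYTES are illustrative names:

#include <stdint.h>
#include <stddef.h>

#define SRAM_WORD_BYTES 8   /* assumed 64-bit SRAM word */

/* Expand a request (first address + length in bytes) into the address group:
 * first address, first address + one SRAM word, first address + two SRAM
 * words, and so on.  Returns the number of addresses generated. */
static size_t expand_request(uint32_t first_addr, size_t length_bytes,
                             uint32_t *addrs, size_t max_addrs)
{
    size_t n = (length_bytes + SRAM_WORD_BYTES - 1) / SRAM_WORD_BYTES;
    if (n > max_addrs)
        n = max_addrs;
    for (size_t i = 0; i < n; i++)
        addrs[i] = first_addr + (uint32_t)(i * SRAM_WORD_BYTES);
    return n;
}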
In one implementation manner provided by the embodiment of the present invention, one data read request includes 8 data read channels, and each data read channel includes 8 data read addresses.
Optionally, in an implementation manner provided by the embodiment of the present invention, the predetermined number is 64;
storing each data reading address to a data processing queue of a corresponding storage area according to a predetermined field of each data reading address, comprising:
and storing each data reading address into a data processing queue of a corresponding storage area according to the 3rd bit to the 8th bit of each data reading address.
Specifically, the embodiment of the present invention divides the complete SRAM into 64 banks, and data is read and written through these 64 banks. In the embodiment of the present invention, the bank corresponding to each data read address is determined by the 3rd bit to the 8th bit of the data read address, for the following reasons:
In an embodiment of the present invention, picture data is stored into the SRAM through DMA (Direct Memory Access); the bit width of the DMA is 128 bits, while the bit width of each bank of the SRAM is 64 bits. Without banking, the SRAM would have to perform two store operations for each DMA transfer; when each bank can store data independently, two banks store the data in parallel and the write can be completed within a single DMA request. The 128-bit DMA data must be split into two 64-bit data items whose addresses differ by 21'h8, so the 3rd bit of the address must be one of the bits used to distinguish the banks.
Further, if the banks were selected by the high-order bits of the data write addresses, then, because the high-order bits of multiple consecutive data write addresses differ little, the data corresponding to those addresses would all be stored in the same bank, and reading the data in that bank would still have to be queued.
In addition, taking the writing of picture data as an example, the addresses of the picture data are consecutive and differ little when the picture data is stored. Therefore, the embodiment of the invention determines the bank to which the data to be read or written belongs through the 3rd bit to the 8th bit of the address.
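A minimal sketch of this bank selection is shown below; taking bits 3 to 8 of the address gives six bits and hence 64 possible banks. Counting bits from 0 is an assumption here, since the text only names "the 3rd bit to the 8th bit":

#include <stdint.h>

/* Select the bank from bits 3..8 of the address.  The low 3 bits stay inside
 * one 64-bit bank word, so two addresses that differ by 0x8 (as in the split
 * 128-bit DMA transfer) land in different banks and can be served in parallel. */
static inline unsigned bank_index(uint32_t addr)
{
    return (unsigned)((addr >> 3) & 0x3F);
}

Under this mapping, consecutive word addresses of stored picture data cycle through all 64 banks before any bank is reused, which is the effect the paragraph above describes.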
And S130, reading data according to each data reading address in the data processing queue in the storage area corresponding to each data processing queue, and storing the read data to the corresponding data reading channel.
Therefore, after each bank outputs its data, the data are stored into the corresponding data reading channels; each data reading channel combines multiple pieces of data, so the channels can output the data obtained from multiple banks at one time, realizing the output of high-bit-width data.
And S140, merging the data in all the data reading channels after each data processing queue finishes data reading, and finishing the data reading request.
It can be understood that, after each bank has read and processed all the addresses in its corresponding data processing queue, and once all the data reading channels hold their corresponding data, a channel data-reading-completion flag is given to indicate that the data reading is complete.
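A minimal sketch of this completion check and of the final merge follows, assuming the 8-channel, 8-addresses-per-channel layout of the implementation described here; ReadChannel and the other names are illustrative:

#include <stdint.h>

#define CHANNEL_COUNT     8   /* assumed data reading channels per request   */
#define WORDS_PER_CHANNEL 8   /* assumed data reading addresses per channel  */

typedef struct {
    uint64_t words[WORDS_PER_CHANNEL];  /* 64-bit words returned by the banks */
    unsigned filled;                    /* how many slots have been written   */
} ReadChannel;

static ReadChannel channels[CHANNEL_COUNT];

/* A channel is complete once all of its slots are filled; the data reading
 * request is complete once every channel is complete (the completion flag). */
static int read_request_done(void)
{
    for (unsigned ch = 0; ch < CHANNEL_COUNT; ch++)
        if (channels[ch].filled < WORDS_PER_CHANNEL)
            return 0;
    return 1;
}

/* Merge: concatenate the channels into one result buffer.  Each channel then
 * holds 8 x 64 = 512 bits of contiguous data. */
static void merge_channels(uint64_t out[CHANNEL_COUNT * WORDS_PER_CHANNEL])
{
    for (unsigned ch = 0; ch < CHANNEL_COUNT; ch++)
        for (unsigned i = 0; i < WORDS_PER_CHANNEL; i++)
            out[ch * WORDS_PER_CHANNEL + i] = channels[ch].words[i];
}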
Therefore, when the memory needs to process the data reading addresses of multiple data reading channels of a data reading request at once, the data reading addresses can be distributed to the data processing queues of the corresponding storage areas, and each storage area of the memory reads data according to its data processing queue, so that multiple data reading addresses are processed at one time and the reading efficiency of the memory is improved. After all the data processing queues have read their data from the corresponding storage areas, the data are merged in the data reading channels of the data reading request to obtain the complete data; thus, when data stored at consecutive addresses needs to be read, the data at all the addresses can be read and merged at one time, further improving the data reading efficiency.
Optionally, in an implementation manner provided by the embodiment of the present invention, after the step of acquiring all data read addresses from the data read request, before the step of storing each data read address in a data processing queue of a corresponding storage area according to a predetermined field of each data read address, the method further includes:
determining a channel mark field of each data reading address, wherein the channel mark field represents the position of the data reading address in the data reading channel;
storing the read data to a corresponding data reading channel, comprising:
and storing the read data to the corresponding data reading channel according to the channel mark field.
It can be understood that, although the read data needs to be stored into the corresponding data reading channel, the data reading address itself does not indicate where the read-out data should be placed. If several data reading requests are handled at the same time, a data storage error may therefore occur.
Therefore, the embodiment of the present invention provides such an implementation manner that, while determining which bank should process a data read address, a channel tag field of the data read address is given, so that when the bank outputs data, the computer device can store the data into a corresponding channel according to the channel tag field.
Exemplarily, in an implementation manner provided by the embodiment of the present invention, the data read request includes 8 channels and the channel tag field includes 6 bits: the high 3 bits indicate which of the 8 read channels the data read out by the bank belongs to, and the low 3 bits indicate the position within that read channel.
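Under that assumed 6-bit layout, packing and unpacking the channel tag could be sketched as follows (make_chan_tag and the helper names are illustrative):

#include <stdint.h>

/* 6-bit channel tag: high 3 bits = which of the 8 read channels,
 * low 3 bits = position inside that channel. */
static inline uint8_t make_chan_tag(unsigned channel, unsigned position)
{
    return (uint8_t)(((channel & 0x7u) << 3) | (position & 0x7u));
}

static inline unsigned tag_channel(uint8_t tag)  { return (tag >> 3) & 0x7u; }
static inline unsigned tag_position(uint8_t tag) { return tag & 0x7u; }

/* When a bank returns a word, the tag routes it into the right channel slot,
 * e.g.: channels[tag_channel(t)].words[tag_position(t)] = word;              */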
Further, in an implementation manner provided by the embodiment of the present invention, after the step of acquiring all data read addresses from the data read request, before the step of storing each data read address into the data processing queue of the corresponding storage area according to the predetermined field of each data read address, the method further includes:
determining the category of the data reading request, wherein the category of the data reading request comprises: a direct memory read request, a convolution kernel read request, a convolution layer read feature map request and a pooling layer read feature map request;
storing the read data to a corresponding data reading channel, comprising:
and storing the read data to a data reading channel of the corresponding data reading request according to the channel mark field of the data reading address and the type of the data reading request.
In the implementation manner provided by the embodiment of the invention, 4 types of data reading requests are designed: a Direct Memory Access (DMA) read request, i.e., a request to read data out of the SRAM; a convolution kernel read request; a convolution layer read feature map request, where the feature map can be understood as picture data; and a pooling layer read feature map request. A read request priority is set based on these 4 categories, so that when the SRAM receives multiple data reading requests it can process them according to the read request priority; it can be understood that the read request priority can be set according to the actual situation.
It should be understood that different types of data read requests may read different amounts of data per request. Exemplarily, in one implementation manner provided in the embodiment of the present invention, a direct memory read request, in which the DMA reads data out of the SRAM, has a bit width of 128 bits and contains two data read addresses; a convolution layer read feature map request, in which the convolution layer reads feature map data from the SRAM, contains 8 channels, each channel containing 8 consecutive data read addresses, i.e., a bit width of 512 bits. It can be understood that the format of each type of data reading request can be set according to the actual situation.
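A sketch of the four request categories and of one possible priority selection is given below; the specific priority values are only an example, since the text leaves the priority order to the application:

/* The four read request categories named above. */
typedef enum {
    REQ_DMA_READ = 0,        /* direct memory read: 128 bits, 2 addresses      */
    REQ_CONV_KERNEL_READ,    /* convolution kernel read                        */
    REQ_CONV_FEATURE_READ,   /* convolution layer read feature map: 8 channels */
    REQ_POOL_FEATURE_READ,   /* pooling layer read feature map                 */
    REQ_TYPE_COUNT
} ReadRequestType;

static const int read_priority[REQ_TYPE_COUNT] = {
    [REQ_DMA_READ]          = 3,   /* example ordering: DMA served first */
    [REQ_CONV_KERNEL_READ]  = 2,
    [REQ_CONV_FEATURE_READ] = 1,
    [REQ_POOL_FEATURE_READ] = 0,
};

/* Pick the highest-priority request type among those currently pending. */
static int pick_next_request(const int pending[REQ_TYPE_COUNT])
{
    int best = -1;
    for (int t = 0; t < REQ_TYPE_COUNT; t++)
        if (pending[t] && (best < 0 || read_priority[t] > read_priority[best]))
            best = t;
    return best;   /* -1 when nothing is pending */
}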
It can be understood that data of the different request types is still processed by dispatching it to different banks according to the 3rd bit to the 8th bit of the read address.
Example 2
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating a data writing method according to an embodiment of the present invention, where the data writing method according to the embodiment of the present invention includes:
s210, dividing the address space of the static random access memory into a predetermined number of storage areas, and setting a one-to-one corresponding data processing queue for each storage area, wherein the space size of each storage area is the same;
s220, acquiring all data writing addresses from the data writing request, and storing each data writing address to a data processing queue of a corresponding storage area according to a preset field of each data writing address;
and S230, writing data corresponding to the data reading address in the storage area corresponding to each data processing queue according to each data reading address in the data processing queue, and completing the data reading request.
Optionally, in an implementation manner provided by the embodiment of the present invention, the predetermined number is 64;
Further, S220 includes:
and writing the data to be written corresponding to each data writing address into the corresponding storage area according to the 3rd bit to the 8th bit of each data writing address.
It should be understood that the data writing method provided in embodiment 2 of the present invention corresponds to the data reading method provided in embodiment 1 of the present invention, and the principle and the beneficial effect of each specific step can be referred to in embodiment 1, and therefore, detailed description is omitted.
Further, similarly to the data reading method provided in embodiment 1, in an implementation manner provided in embodiment 2 of the present invention, the data writing requests are likewise divided into categories, namely a direct memory write request, a pooling layer write feature map request, and a convolution layer write feature map request.
Example 3
Referring to fig. 3, fig. 3 shows a schematic flow chart of a data reading and writing method provided by an embodiment of the present invention, where the data reading and writing method provided by the embodiment of the present invention includes:
s310, dividing the address space of the static random access memory into a predetermined number of storage areas, and setting a one-to-one corresponding data processing queue for each storage area, wherein the space size of each storage area is the same;
s320, when the data reading and writing request comprises a data reading request, acquiring all data reading addresses from the data reading request, and storing each data reading address to a data processing queue of a corresponding storage area according to a preset field of each data reading address, wherein the data reading request comprises at least one data reading channel, and each data reading channel comprises at least one data reading address;
s330, reading data according to each data reading address in the data processing queue in a storage area corresponding to each data processing queue, and storing the read data to a corresponding data reading channel;
s340, merging the data in all the data reading channels after each data processing queue finishes data reading to finish the data reading request;
s350, when the data read-write request comprises a data write-in request, acquiring all data write-in addresses from the data write-in request, and storing each data write-in address to a data processing queue of a corresponding storage area according to a preset field of each data write-in address;
and S360, writing data corresponding to the data writing address in the storage area corresponding to each data processing queue according to each data writing address in the data processing queue, and completing the data writing request.
Optionally, in an implementation manner provided by the embodiment of the present invention, the method further includes:
when at least two data read-write requests need to read and/or write data in the same storage area, all the data read-write requests are completed according to a preset data read priority sequence.
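A minimal sketch of such per-bank priority arbitration follows, under the assumption that each queued entry carries the preset priority of the request it belongs to; RequestEntry and arbitrate_bank are illustrative names:

#include <stdint.h>
#include <stddef.h>

typedef struct {
    int      request_id;  /* which read/write request this entry belongs to */
    int      priority;    /* preset priority of that request                */
    uint32_t address;     /* address inside the contended storage area      */
    int      is_write;    /* 1 = write data, 0 = read data                  */
} RequestEntry;

/* Return the index of the entry the bank should service next, or -1 when
 * nothing is pending; entries are served in the preset priority order. */
static int arbitrate_bank(const RequestEntry *pending, size_t n)
{
    int winner = -1;
    for (size_t i = 0; i < n; i++)
        if (winner < 0 || pending[i].priority > pending[winner].priority)
            winner = (int)i;
    return winner;
}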
It should be understood that the data reading and writing method provided in embodiment 3 of the present invention corresponds to the data reading method provided in embodiment 1 and the data writing method provided in embodiment 2 of the present invention, and the principle and the beneficial effect of each specific step can be referred to in embodiment 1 and/or embodiment 2, and therefore will not be described again.
Example 4
Referring to fig. 4, fig. 4 is a schematic structural diagram of a data reading apparatus according to an embodiment of the present invention, and a data reading apparatus 400 according to an embodiment of the present invention includes:
a first dividing module 410, configured to divide an address space of the static random access memory into a predetermined number of storage areas, and set a one-to-one data processing queue for each storage area, where the size of each storage area is the same;
a first storing module 420, configured to obtain all data read addresses from a data read request, and store each data read address to a data processing queue of a corresponding storage area according to a predetermined field of each data read address, where the data read request includes at least one data read channel, and each data read channel includes at least one data read address;
the first reading module 430 is used for reading data according to each data reading address in the data processing queue in the storage area corresponding to each data processing queue, and storing the read data to the corresponding data reading channel;
the first merging module 440 is configured to merge data in all data reading channels after each data processing queue completes data reading, so as to complete a data reading request.
The data reading device provided in the embodiment of the present application can implement each process of the data reading method in the method embodiment of fig. 1, and can achieve the same technical effect, and is not described here again to avoid repetition.
Example 5
Referring to fig. 5, fig. 5 is a schematic structural diagram illustrating a data writing apparatus according to an embodiment of the present invention, where the data writing apparatus 500 according to the embodiment of the present invention includes:
a second dividing module 510, configured to divide an address space of the static random access memory into a predetermined number of storage areas, and set a one-to-one data processing queue for each storage area, where the size of each storage area is the same;
a second storing module 520, configured to obtain all data write addresses from the data write request, and store each data write address to a data processing queue of a corresponding storage area according to a predetermined field of each data write address;
the first writing module 530 is configured to write data corresponding to the data writing address according to each data writing address in the data processing queue in the storage area corresponding to each data processing queue, and complete the data writing request.
The data writing device provided in the embodiment of the present application can implement each process of the data writing method in the method embodiment of fig. 2, and can achieve the same technical effect, and is not described here again to avoid repetition.
Example 6
Referring to fig. 6, fig. 6 shows a schematic structural diagram of a data reading and writing apparatus according to an embodiment of the present invention, where the data reading and writing apparatus 600 according to the embodiment of the present invention includes:
a third dividing module 610, configured to divide the address space of the static random access memory into a predetermined number of storage regions, and set a one-to-one data processing queue for each storage region, where the size of each storage region is the same;
a third storing module 620, configured to, when the data read/write request includes a data read request, obtain all data read addresses from the data read request, and store each data read address to a data processing queue of a corresponding storage area according to a predetermined field of each data read address, where the data read request includes at least one data read channel, and each data read channel includes at least one data read address;
the second reading module 630 is configured to read data according to each data reading address in the data processing queue in a storage area corresponding to each data processing queue, and store the read data to a corresponding data reading channel;
the second merging module 640 is configured to merge the data in all the data reading channels after each data processing queue completes data reading, so as to complete a data reading request;
a fourth storing module 650, configured to, when the data read/write request includes a data write request, obtain all data write addresses from the data write request, and store each data write address to a data processing queue of a corresponding storage area according to a predetermined field of each data write address;
the second writing module 660 is configured to write, in the storage area corresponding to each data processing queue, data corresponding to the data write address according to each data write address in the data processing queue, and complete the data write request.
The data reading and writing device provided in the embodiment of the present application can implement each process of the data reading and writing method in the method embodiment of fig. 3, and can achieve the same technical effect, and for avoiding repetition, details are not repeated here.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, each functional module or unit in each embodiment of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention or a part of the technical solution that contributes to the prior art in essence can be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a smart phone, a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention.

Claims (13)

1. A data reading method, comprising:
dividing an address space of a static random access memory into a predetermined number of storage areas, and setting a one-to-one corresponding data processing queue for each storage area, wherein the space size of each storage area is the same;
acquiring all data reading addresses from a data reading request, and storing each data reading address to a data processing queue of a corresponding storage area according to a preset field of each data reading address, wherein the data reading request comprises at least one data reading channel, and each data reading channel comprises at least one data reading address;
reading data according to each data reading address in the data processing queue in a storage area corresponding to each data processing queue, and storing the read data to a corresponding data reading channel;
and after each data processing queue finishes data reading, merging the data in all the data reading channels to finish the data reading request.
2. The method of claim 1, wherein after the step of obtaining all data read addresses from the data read request, before the step of storing each data read address in a data processing queue of a corresponding storage area according to a predetermined field of each data read address, the method further comprises:
determining a channel tag field of each data reading address, wherein the channel tag field represents a data reading channel to which the data reading address belongs;
the storing the read data to the corresponding data reading channel includes:
and storing the read data to a corresponding data reading channel according to the channel mark field.
3. The method of claim 2, wherein after the step of obtaining all data read addresses from the data read request, before the step of storing each data read address in a data processing queue of a corresponding storage area according to a predetermined field of each data read address, the method further comprises:
determining a category of the data read request, the category of the data read request comprising: a direct memory read request, a convolution kernel read request, a convolution layer read feature map request and a pooling layer read feature map request;
the storing the read data to the corresponding data reading channel includes:
and storing the read data to a data reading channel of the corresponding data reading request according to the channel mark field of the data reading address and the type of the data reading request.
4. The method of claim 1, wherein the predetermined number is 64;
the storing each data reading address to the data processing queue of the corresponding storage area according to the predetermined field of each data reading address comprises:
and storing each data reading address to a data processing queue of a corresponding storage area according to the 3rd bit to the 8th bit of each data reading address.
5. A method of writing data, comprising:
dividing an address space of a static random access memory into a predetermined number of storage areas, and setting a one-to-one corresponding data processing queue for each storage area, wherein the space size of each storage area is the same;
acquiring all data writing addresses from a data writing request, and storing each data writing address to a data processing queue of a corresponding storage area according to a preset field of each data writing address;
and writing the data corresponding to the data writing address in the storage area corresponding to each data processing queue according to each data writing address in the data processing queue to finish the data writing request.
6. The method of claim 5, wherein the predetermined number is 64;
the storing each data writing address into a data processing queue of a corresponding storage area according to the predetermined field of each data writing address and the predetermined field of each data writing address comprises:
and storing each data writing address to a data processing queue of a corresponding storage area according to the 3rd bit to the 8th bit of each data writing address.
7. A method for reading and writing data, comprising:
dividing an address space of a static random access memory into a predetermined number of storage areas, and setting a one-to-one corresponding data processing queue for each storage area, wherein the space size of each storage area is the same;
when the data reading and writing request comprises a data reading request, acquiring all data reading addresses from the data reading request, and storing each data reading address to a data processing queue of a corresponding storage area according to a preset field of each data reading address, wherein the data reading request comprises at least one data reading channel, and each data reading channel comprises at least one data reading address;
reading data according to each data reading address in the data processing queue in a storage area corresponding to each data processing queue, and storing the read data to a corresponding data reading channel;
after each data processing queue finishes data reading, merging the data in all the data reading channels to finish the data reading request;
when the data read-write request comprises a data write-in request, acquiring all data write-in addresses from the data write-in request, and storing each data write-in address to a data processing queue of a corresponding storage area according to a preset field of each data write-in address;
and writing the data corresponding to the data writing address in the storage area corresponding to each data processing queue according to each data writing address in the data processing queue to finish the data writing request.
8. The method of claim 7, further comprising:
and when at least two data read-write requests are to read and/or write data in the same storage area, finishing all the data read-write requests according to a preset data read priority sequence.
9. A data reading apparatus, comprising:
the device comprises a first dividing module, a second dividing module and a first processing module, wherein the first dividing module is used for dividing the address space of the static random access memory into a preset number of storage areas and setting one-to-one corresponding data processing queues for each storage area, and the space size of each storage area is the same;
the storage device comprises a first storage module, a second storage module and a storage module, wherein the first storage module is used for acquiring all data reading addresses from a data reading request and storing each data reading address to a data processing queue of a corresponding storage area according to a preset field of each data reading address, the data reading request comprises at least one data reading channel, and each data reading channel comprises at least one data reading address;
the first reading module is used for reading data according to each data reading address in the data processing queue and storing the read data to a corresponding data reading channel in a storage area corresponding to each data processing queue;
and the first merging module is used for merging the data in all the data reading channels after each data processing queue finishes data reading, so as to finish the data reading request.
10. A data writing apparatus, comprising:
the second division module is used for dividing the address space of the static random access memory into a preset number of storage areas and setting one-to-one corresponding data processing queues for each storage area, wherein the space size of each storage area is the same;
the second storage module is used for acquiring all data write-in addresses from a data write-in request and storing each data write-in address to a data processing queue of a corresponding storage area according to a preset field of each data write-in address;
and the first writing module is used for writing, into the storage area corresponding to each data processing queue, the data corresponding to each data writing address in the data processing queue, so as to finish the data writing request.
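A corresponding sketch of the write path of claim 10, again with assumed geometry and data width: writes are first queued per storage area, and each queue then drains only into its own area. In hardware the per-area drains could proceed independently; the sequential loop below is only for illustration.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_REGIONS 4
#define REGION_SIZE 0x4000
#define QUEUE_DEPTH 16

static uint8_t sram[NUM_REGIONS * REGION_SIZE];

typedef struct {
    uint32_t addr[QUEUE_DEPTH];
    uint8_t  data[QUEUE_DEPTH];   /* datum travelling with each write address */
    int      count;
} write_queue_t;

static write_queue_t wq[NUM_REGIONS];

/* Route a write to the data processing queue of its storage area. */
static void enqueue_write(uint32_t addr, uint8_t value)
{
    uint32_t r = addr / REGION_SIZE;
    if (r < NUM_REGIONS && wq[r].count < QUEUE_DEPTH) {
        wq[r].addr[wq[r].count] = addr;
        wq[r].data[wq[r].count] = value;
        wq[r].count++;
    }
}

/* Drain every queue into its own storage area. */
static void drain_queues(void)
{
    for (int r = 0; r < NUM_REGIONS; r++) {
        for (int i = 0; i < wq[r].count; i++)
            sram[wq[r].addr[i]] = wq[r].data[i];
        wq[r].count = 0;
    }
}

int main(void)
{
    enqueue_write(0x0010, 0xAA);
    enqueue_write(0x8010, 0xBB);   /* lands in a different area's queue */
    drain_queues();
    printf("sram[0x0010]=%02X sram[0x8010]=%02X\n",
           (unsigned)sram[0x0010], (unsigned)sram[0x8010]);
    return 0;
}
```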
11. A data reading and writing apparatus, comprising:
the third dividing module is used for dividing the address space of the static random access memory into a preset number of storage areas and setting a one-to-one corresponding data processing queue for each storage area, wherein the space size of each storage area is the same;
the third storage module is used for acquiring all data reading addresses from the data reading request when the data reading and writing request comprises a data reading request, and storing each data reading address to a data processing queue of a corresponding storage area according to a preset field of each data reading address, wherein the data reading request comprises at least one data reading channel, and each data reading channel comprises at least one data reading address;
the second reading module is used for reading data, in the storage area corresponding to each data processing queue, according to each data reading address in the data processing queue, and storing the read data to a corresponding data reading channel;
the second merging module is used for merging the data in all the data reading channels after each data processing queue finishes data reading, and finishing the data reading request;
the fourth storage module is used for acquiring all data writing addresses from the data writing request when the data reading and writing request comprises a data writing request, and storing each data writing address to a data processing queue of a corresponding storage area according to a preset field of each data writing address;
and the second writing module is used for writing, into the storage area corresponding to each data processing queue, the data corresponding to each data writing address in the data processing queue, so as to finish the data writing request.
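The claims leave open which bits form the preset field that selects a storage area. The sketch below contrasts two common, purely illustrative choices, neither of which is taken from the patent's embodiments: a high-order field that partitions the space into contiguous blocks, and a low-order field that interleaves consecutive addresses across areas so that sequential bursts spread over all queues.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_REGIONS 4            /* must be a power of two for the masks below */
#define REGION_BITS 2
#define SRAM_BITS   16           /* assumed 64 KiB address space               */

/* High-order field: contiguous block per storage area. */
static unsigned region_block(uint32_t addr)
{
    return (addr >> (SRAM_BITS - REGION_BITS)) & (NUM_REGIONS - 1);
}

/* Low-order field: consecutive addresses interleave across storage areas. */
static unsigned region_interleaved(uint32_t addr)
{
    return addr & (NUM_REGIONS - 1);
}

int main(void)
{
    for (uint32_t a = 0; a < 8; a++)
        printf("addr %u -> block %u, interleaved %u\n",
               (unsigned)a, region_block(a), region_interleaved(a));
    return 0;
}
```

Interleaving tends to balance the per-area queues under sequential access patterns, while block partitioning keeps each request's addresses local to fewer queues; either mapping satisfies the "preset field" wording of the claims.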
12. A computer device comprising a memory and a processor, the memory storing a computer program which, when run on the processor, performs the data reading method according to any one of claims 1 to 4, or the data writing method according to any one of claims 5 to 6, or the data reading and writing method according to any one of claims 7 to 8.
13. A computer-readable storage medium, on which a computer program is stored which, when run on a processor, performs the data reading method according to any one of claims 1 to 4, or the data writing method according to any one of claims 5 to 6, or the data reading and writing method according to any one of claims 7 to 8.
CN202110850095.6A 2021-07-27 2021-07-27 Data reading method, data writing method and data reading and writing method Active CN113553009B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110850095.6A CN113553009B (en) 2021-07-27 2021-07-27 Data reading method, data writing method and data reading and writing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110850095.6A CN113553009B (en) 2021-07-27 2021-07-27 Data reading method, data writing method and data reading and writing method

Publications (2)

Publication Number Publication Date
CN113553009A CN113553009A (en) 2021-10-26
CN113553009B true CN113553009B (en) 2022-06-03

Family

ID=78132952

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110850095.6A Active CN113553009B (en) 2021-07-27 2021-07-27 Data reading method, data writing method and data reading and writing method

Country Status (1)

Country Link
CN (1) CN113553009B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1399739A (en) * 1999-08-31 2003-02-26 英特尔公司 SRAM controller for parallel processor architecture
CN104699414A (en) * 2013-12-09 2015-06-10 华为技术有限公司 Data reading and writing method and saving equipment
CN105094691A (en) * 2014-05-21 2015-11-25 华为技术有限公司 Data manipulation methods and system, and devices
CN112445713A (en) * 2019-08-15 2021-03-05 辉达公司 Techniques for efficiently partitioning memory

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10140030B2 (en) * 2015-11-02 2018-11-27 International Business Machines Corporation Dynamic modulation of cache memory
US10235299B2 (en) * 2016-11-07 2019-03-19 Samsung Electronics Co., Ltd. Method and device for processing data
KR20190128284A (en) * 2018-05-08 2019-11-18 에스케이하이닉스 주식회사 Memory system and operation method thereof

Also Published As

Publication number Publication date
CN113553009A (en) 2021-10-26

Similar Documents

Publication Publication Date Title
US20200327079A1 (en) Data processing method and device, dma controller, and computer readable storage medium
CN112181902B (en) Database storage method and device and electronic equipment
CN115035128B (en) Image overlapping sliding window segmentation method and system based on FPGA
CN118210455B (en) Ultra-long field data read-write performance optimization method and device, electronic equipment and storage medium
CN110223216A (en) A kind of data processing method based on parallel PLB, device and computer storage medium
CN112466378A (en) Solid state disk operation error correction method and device and related components
CN112435157A (en) Graphics processing system including different types of memory devices and method of operating the same
CN113553009B (en) Data reading method, data writing method and data reading and writing method
CN114328315A (en) DMA-based data preprocessing method, DMA component and chip structure
CN107451070A (en) The processing method and server of a kind of data
CN110059563B (en) Text processing method and device
CN112256206B (en) IO processing method and device
CN113656507B (en) Method and device for executing transaction in block chain system
CN116501247A (en) Data storage method and data storage system
CN116339626A (en) Data processing method, device, computer equipment and storage medium
CN115641887A (en) Flash memory management method and flash memory device
CN104216666A (en) Method and device for managing writing of disk data
CN112068948B (en) Data hashing method, readable storage medium and electronic device
CN114648444A (en) Vector up-sampling calculation method and device applied to neural network data processing
CN113052291B (en) Data processing method and device
CN107436918B (en) Database implementation method, device and equipment
CN110728367B (en) Data storage method and device for neural network
CN118338003B (en) Video decoding method, apparatus, computer device, readable storage medium, and program product
CN115221075A (en) Data memory, data storage method and device, chip, board card and equipment
CN116360701A (en) Method, device, equipment and medium for reading data pointer of RAID (redundant array of independent disks)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant