CN113360420A - Memory control method and device - Google Patents
- Publication number: CN113360420A (application CN202010153078.2A)
- Authority
- CN
- China
- Prior art keywords
- memory
- group
- fpga
- chips
- slices
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0638—Organizing or formatting or addressing of data
- G06F3/0644—Management of space entities, e.g. partitions, extents, pools
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/0647—Migration mechanisms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
Abstract
The invention relates to a memory control method and a memory control device, belongs to the field of communications technologies, and solves prior-art problems such as the interrupt frequency rising with the bidirectional data transmission rate because the buffer memory of an FPGA is small. The memory control method comprises the following steps: applying for an upper computer memory slice region with contiguous physical addresses and dividing it into two equal memory spaces, wherein each memory space comprises a first group of memory slices and a second group of memory slices; opening up three blocks of memory to form three memory pools, wherein the first memory pool manages the first addresses of all memory slices; opening up three caches in the FPGA for transferring memory slice addresses during data transmission; the FPGA taking the first addresses of the first group or the second group of memory slices in the first memory space out of the first memory pool and placing them into a first cache; and the FPGA performing a write operation on the first group of memory slices and/or a read operation on the second group of memory slices through DMA. The interrupt frequency of the upper computer driver layer is thereby reduced.
Description
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a memory control method and apparatus.
Background
An FPGA (Field-Programmable Gate Array) is a further development of programmable devices. As a semi-custom circuit in the field of Application-Specific Integrated Circuits (ASICs), it both remedies the inflexibility of fully custom circuits and overcomes the limited gate count of earlier programmable devices.
When an FPGA is used for high-speed data acquisition in the industrial field, it communicates frequently with an upper computer. The traditional approach moves data between the FPGA and the upper computer by Direct Memory Access (DMA), which allows hardware devices of different speeds to communicate without imposing a heavy interrupt load on the Central Processing Unit (CPU).
Because the buffer memory of an FPGA is small, the interrupt frequency rises when the bidirectional data transmission rate is too high. When the upper computer is embedded hardware, whose operating-system scheduling period is generally on the order of milliseconds, an excessive interrupt frequency aggravates the response burden. Moreover, because memory capacity is limited, multiple large dedicated memories cannot be opened up for information interaction, so the driver must be designed separately for each interface, which increases system complexity and lengthens the development cycle.
Disclosure of Invention
In view of the foregoing analysis, embodiments of the present invention are directed to providing a memory control method and apparatus that solve the following prior-art problems: because the buffer memory of an FPGA is small, the interrupt frequency rises when the bidirectional data transmission rate is too high; and when the upper computer is embedded hardware, whose operating-system scheduling period is generally on the order of milliseconds, the excessive interrupt frequency increases the response load.
In one aspect, an embodiment of the present invention provides a memory control method, including: applying for an upper computer memory slice region with contiguous physical addresses and dividing it into two equal memory spaces, wherein each memory space comprises a first group of memory slices and a second group of memory slices; opening up three blocks of memory to form three memory pools, wherein a first memory pool manages the first addresses of all memory slices, the first group of memory slices together with a second memory pool stores data sent by the FPGA to the upper computer, and the second group of memory slices together with a third memory pool stores data sent by the upper computer to the FPGA; opening up three caches in the FPGA for transferring memory slice addresses during data transmission; the FPGA taking the first addresses of the first group or the second group of memory slices in a first memory space out of the first memory pool and placing them into a first cache; and the FPGA performing a write operation on the first group of memory slices and/or a read operation on the second group of memory slices through DMA.
The beneficial effects of the above technical scheme are as follows: the memory control method of the embodiment of the invention applies for two memory spaces and opens up three memory pools to reduce the interrupt frequency of the upper computer driver layer during high-speed signal transmission. By opening up three small caches in the FPGA and pairing them with the upper computer's memory slices, it achieves bidirectional communication with modest resource consumption and control logic, thereby reducing system complexity, enhancing the reliability of the transmission link, and shortening the development cycle.
Based on a further improvement of the above method, the three caches comprise a first cache, a second cache, and a third cache, wherein the first cache stores slice first-addresses taken from the first memory pool; the second cache stores the first addresses of the first group of memory slices; and the third cache stores the first addresses of the second group of memory slices.
In a further improvement of the above method, the first group of memory slices includes n memory slices and the second group of memory slices includes m-n memory slices, where m is greater than n and m and n are positive integers.
Based on a further improvement of the above method, the FPGA's write operation on the first group of memory slices through DMA includes: the FPGA taking the first addresses of the first group of memory slices out of the first memory pool and placing them into the first cache; the FPGA writing fixed-length data into the first memory slice of the first group through DMA; when the first memory slice is full, writing its first address into the second cache while continuing to write data into the second memory slice of the first group; and continuing to write data into the remaining memory slices of the first group until the n-th memory slice is full, whereupon its first address is placed into the second cache.
Based on a further improvement of the above method, the FPGA's read operation on the second group of memory slices through DMA includes: the FPGA starting to read data from the (n+1)-th memory slice of the second group through DMA; once the (n+1)-th memory slice has been read empty, placing its first address into the third cache while reading data from the (n+2)-th memory slice; and continuing to read data from the remaining memory slices of the second group until the m-th memory slice has been read, whereupon its first address is placed into the third cache.
Based on a further improvement of the above method, the FPGA then pushes the data in the third cache into the second cache.
Based on a further improvement of the above method, performing a write operation on the first group of memory slices and/or a read operation on the second group of memory slices further includes: performing the read operation on the second group of memory slices before performing the write operation on the first group of memory slices.
Based on a further improvement of the above method, the memory control method further comprises: the FPGA triggering an interrupt in the upper computer by controlling an external interrupt pin, thereby notifying the upper computer that the current round of data transmission is finished; the upper computer responding to the interrupt and taking the addresses of all memory slices out of the second cache; and, according to those addresses, copying the data in the first group of memory slices into the second memory pool and/or writing the data destined for the FPGA into the second group of memory slices and placing it into the third memory pool.
Based on a further improvement of the above method, the memory control method further comprises: the FPGA taking the first addresses of the first group or the second group of memory slices in a second memory space out of the first memory pool and placing them into the first cache; the FPGA performing a write operation on the first group of memory slices in the second memory space and/or a read operation on the second group of memory slices in the second memory space through DMA; and, after the write and/or read operations on the second memory space are complete, the upper computer releasing the addresses of all memory slices.
The beneficial effects of the above further improved scheme are as follows: the memory control method of the embodiment of the invention simplifies the driver complexity of the embedded operating system and the FPGA during high-speed communication. By designing caches on the FPGA side, data transfer operates directly on memory addresses, which increases the flexibility of the upper computer's memory operations, enhances system stability, shortens the development cycle, and suits the various high-speed interfaces on the FPGA side.
In another aspect, an embodiment of the present invention provides a memory control device comprising an upper computer and an FPGA. The upper computer comprises: two memory spaces, formed by dividing a memory slice region with contiguous physical addresses into two equal halves, each memory space comprising a first group of memory slices and a second group of memory slices; and three memory pools, wherein the first memory pool manages the first addresses of all memory slices, the first group of memory slices together with the second memory pool stores data sent by the FPGA to the upper computer, and the second group of memory slices together with the third memory pool stores data sent to the FPGA. The FPGA comprises: three caches for transferring memory slice addresses during data transmission; an acquisition module that takes the first addresses of the first group or the second group of memory slices in the first memory space out of the first memory pool and places them into the first cache; and a read-write module that performs a write operation on the first group of memory slices and/or a read operation on the second group of memory slices through DMA.
Compared with the prior art, the invention can realize at least one of the following beneficial effects:
1. Two memory spaces are applied for and three memory pools are opened up to reduce the interrupt frequency of the upper computer's driver layer, and bidirectional communication is achieved with modest resource consumption and control logic by opening up three small FPGA caches paired with the upper computer's memory slices;
2. The driver complexity of the embedded operating system and the FPGA during high-speed communication is simplified; and
3. By designing caches on the FPGA side, data transfer operates directly on memory addresses, which increases the flexibility of the upper computer's memory operations, enhances system stability, shortens the development cycle, and suits the various high-speed interfaces on the FPGA side.
In the invention, the technical schemes can be combined with each other to realize more preferable combination schemes. Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
Drawings
The drawings are only for purposes of illustrating particular embodiments and are not to be construed as limiting the invention, wherein like reference numerals are used to designate like parts throughout.
FIG. 1 is a flow chart of a memory control method according to an embodiment of the invention;
FIG. 2 is a flowchart illustrating a write operation in a memory control method according to an embodiment of the invention;
FIG. 3 is a flowchart illustrating a read operation in a memory control method according to an embodiment of the invention;
FIG. 4 is a block diagram of a memory control device according to an embodiment of the invention; and
FIG. 5 is a diagram of a memory control method according to an embodiment of the invention.
Reference numerals:
400-an upper computer; 402-a first memory space; 404-a second memory space; 406-a first set of memory chips; 408-a second set of memory slices; 410-a first memory pool; 412-a second memory pool; 414-third memory pool; 416-FPGA; 418-three buffers; 420-an acquisition module; 422-read-write module;
Detailed Description
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate preferred embodiments of the invention and together with the description, serve to explain the principles of the invention and not to limit the scope of the invention.
The invention discloses a memory control method. As shown in FIG. 1, the memory control method includes: step S102, applying for an upper computer memory slice region with contiguous physical addresses and dividing it into two equal memory spaces, each comprising a first group of memory slices and a second group of memory slices; step S104, opening up three blocks of memory to form three memory pools, wherein the first memory pool manages the first addresses of all memory slices, the first group of memory slices together with the second memory pool stores data sent by the FPGA to the upper computer, and the second group of memory slices together with the third memory pool stores data sent by the upper computer to the FPGA; step S106, opening up three caches in the FPGA for transferring memory slice addresses during data transmission; step S108, the FPGA taking the first addresses of the first group or the second group of memory slices in the first memory space out of the first memory pool and placing them into the first cache; and step S110, the FPGA performing a write operation on the first group of memory slices and/or a read operation on the second group of memory slices through DMA.
Compared with the prior art, the memory control method of this embodiment applies for two memory spaces and opens up three memory pools to reduce the interrupt frequency of the upper computer driver layer during high-speed signal transmission, and achieves bidirectional communication with modest resource consumption and control logic by opening up three small FPGA caches paired with the upper computer's memory slices, thereby reducing system complexity, enhancing the reliability of the transmission link, and shortening the development cycle.
Hereinafter, a memory control method according to an embodiment of the present invention will be described in detail with reference to FIGS. 1 to 3 and FIG. 5.
The memory control method of the embodiment of the invention comprises step S102, applying for an upper computer memory slice region with contiguous physical addresses and dividing it into two equal memory spaces, each comprising a first group of memory slices and a second group of memory slices, and step S104, opening up three blocks of memory to form three memory pools, wherein the first memory pool manages the first addresses of all memory slices, the first group of memory slices together with the second memory pool stores data sent by the FPGA to the upper computer, and the second group of memory slices together with the third memory pool stores data sent by the upper computer to the FPGA. Specifically, the first group of memory slices includes n memory slices and the second group includes m-n memory slices, where m is greater than n and m and n are positive integers.
Hereinafter, the memory division of the upper computer is described in detail by way of a specific example with reference to FIGS. 1 and 5.
The first step: the upper computer applies for a memory space with contiguous physical addresses through a driver-layer development tool (e.g., WinDriver) and divides the whole space equally into 2m regions, each called a memory slice; the first m memory slices form the first memory space and the last m form the second memory space. Within each space, the first n (n < m) memory slices (the first group) store data sent by the FPGA to memory, and the last m-n memory slices (the second group) store data sent from memory to the FPGA. Meanwhile, the upper computer opens up three blocks of memory through an application program (for example, a Visual Studio or Qt application; see FIG. 5), each called a memory pool: memory pool 1 (the first memory pool) stores the first addresses of all memory slices, memory pool 2 (the second memory pool) stores data sent by the FPGA to the upper computer, and memory pool 3 (the third memory pool) stores data sent by the upper computer to the FPGA.
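The layout described in the first step can be sketched as follows. This is only an illustrative model: the slice size, m, and n are taken from the numerical example later in the description, and the base address is invented.

```python
SLICE_SIZE = 512 * 1024   # bytes per memory slice (assumed, from the example)
M = 128                   # slices per memory space; 2m slices in total
N = 64                    # first n slices: FPGA -> host; last m-n: host -> FPGA

BASE_ADDR = 0x8000_0000   # hypothetical base of the physically contiguous region

# Divide the contiguous region equally into 2m memory slices.
slice_addrs = [BASE_ADDR + i * SLICE_SIZE for i in range(2 * M)]
first_space, second_space = slice_addrs[:M], slice_addrs[M:]

# Memory pool 1 manages the first addresses of all slices; pools 2 and 3
# buffer the payload data for the two transfer directions.
pool1 = list(slice_addrs)
pool2 = []  # data sent by the FPGA to the upper computer
pool3 = []  # data sent by the upper computer to the FPGA

# Within the first memory space: first group (FPGA write target)
# and second group (FPGA read source).
first_group = first_space[:N]
second_group = first_space[N:]
```

Because the slices are equal-sized partitions of one contiguous allocation, a slice's first address fully identifies it, which is what lets the FIFOs below carry addresses instead of data.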
The memory control method of the embodiment of the invention further comprises step S106, opening up three caches in the FPGA for transferring memory slice addresses during data transmission. Specifically, the three caches comprise a first cache, a second cache, and a third cache: the first cache stores slice first-addresses taken from the first memory pool; the second cache stores the first addresses of the first group of memory slices; and the third cache stores the first addresses of the second group of memory slices.
In the following, with reference to FIGS. 1 and 5, opening up three caches in the FPGA is described in detail by way of a specific example.
The second step: three caches (FIFO1, FIFO2, and FIFO3) are opened up in the FPGA for transferring memory slice addresses during data transmission. The depths of FIFO1, FIFO2, and FIFO3 (see FIG. 5) are all 2m, matching the 2m memory slices in the upper computer's physical memory space. FIFO1 (the first cache) receives the slice first-addresses stored in memory pool 1; FIFO2 (the second cache) stores the first addresses of the slices into which the FPGA has written data; and FIFO3 (the third cache) stores the first addresses of the slices whose data has been sent to the FPGA.
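Behaviorally, the three FIFOs are bounded queues of depth 2m. The sketch below models them in software for clarity; an actual design would use on-chip FIFO primitives, and the register-write interface named here is an assumption.

```python
from collections import deque

M = 128            # slices per memory space (assumed, matching the example)
DEPTH = 2 * M      # each FIFO is as deep as the total number of memory slices

fifo1 = deque(maxlen=DEPTH)  # slice addresses handed to the FPGA by the host
fifo2 = deque(maxlen=DEPTH)  # addresses of slices the FPGA has filled
fifo3 = deque(maxlen=DEPTH)  # addresses of slices the FPGA has read empty

def load_fifo1(addresses):
    """Host driver pushes slice first-addresses into FIFO1 via read-write registers."""
    for addr in addresses:
        fifo1.append(addr)

# Hand the FPGA the m addresses of the first memory space (toy addresses).
load_fifo1(0x1000 * i for i in range(M))
```

Sizing every FIFO at 2m means neither side can overflow a FIFO even if all slice addresses of both memory spaces are in flight at once.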
The memory control method of the embodiment of the invention further comprises step S108, the FPGA taking the first addresses of the first group or the second group of memory slices in the first memory space out of the first memory pool and placing them into the first cache. Specifically, for a write operation the FPGA takes the first addresses of the first group of memory slices in the first memory space out of the first memory pool and places them into the first cache; optionally, for a read operation, the FPGA takes the first addresses of the second group of memory slices in the first memory space out of the first memory pool and places them into the first cache.
The third step: the upper computer transfers the first m addresses in memory pool 1 to the FPGA through driver read-write registers, and the FPGA places them into FIFO1. Optionally, when only a read operation is performed, the upper computer transfers only the last m-n of those addresses to the FPGA, which places them into FIFO1.
The memory control method of the embodiment of the invention further comprises step S110, the FPGA performing a write operation on the first group of memory slices and/or a read operation on the second group of memory slices through DMA.
Specifically, referring to FIG. 2, the FPGA's write operation on the first group of memory slices through DMA includes: step S202, the FPGA taking the first addresses of the first group of memory slices out of the first memory pool and placing them into the first cache; step S204, the FPGA writing fixed-length data into the first memory slice of the first group through DMA; step S206, after the first memory slice is full, writing its address into the second cache while continuing to write data into the second memory slice of the first group; and step S208, continuing to write data into the remaining memory slices of the first group until the n-th memory slice is full, whereupon its first address is placed into the second cache.
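Steps S202 through S208 amount to the following loop. This is a behavioral sketch with invented addresses; the dictionary stands in for the DMA bursts into host memory.

```python
from collections import deque

N = 4  # size of the first group in this toy example
fifo1 = deque(0x1000 * (i + 1) for i in range(N))  # first addresses from memory pool 1
fifo2 = deque()                                    # full-slice addresses for the host

host_memory = {}  # models the upper computer's memory slices

def dma_write_round(chunks):
    """Write one fixed-length chunk per slice; once a slice is full,
    publish its first address in FIFO2 and move on (S202-S208, modeled)."""
    for chunk in chunks:
        addr = fifo1.popleft()      # next slice address to fill
        host_memory[addr] = chunk   # stands in for the DMA write
        fifo2.append(addr)          # slice full: record its address

dma_write_round([bytes([i]) * 8 for i in range(N)])
```

Note that the FPGA never needs to know the host's allocation scheme: it only consumes addresses from FIFO1 and reports them back through FIFO2.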
Specifically, referring to FIG. 3, the FPGA's read operation on the second group of memory slices through DMA includes: step S302, the FPGA starting to read data from the (n+1)-th memory slice of the second group through DMA; step S304, once the (n+1)-th memory slice has been read empty, placing its first address into the third cache while reading the data in the (n+2)-th memory slice; and step S306, continuing to read the data in the remaining memory slices of the second group until the m-th memory slice has been read, whereupon its first address is placed into the third cache. The FPGA then pushes the data in the third cache into the second cache.
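Steps S302 through S306, plus the final push of FIFO3 into FIFO2, can be sketched the same way. Addresses and payloads here are invented for illustration.

```python
from collections import deque

# Second-group slices preloaded by the upper computer (hypothetical contents).
host_memory = {0x5000: b"cmd-0", 0x6000: b"cmd-1", 0x7000: b"cmd-2"}

fifo1 = deque(sorted(host_memory))  # second-group addresses handed down for reading
fifo2 = deque()                     # in a full round this already holds write-side addresses
fifo3 = deque()

def dma_read_round():
    """Drain each slice in order, publish its address in FIFO3, then push
    FIFO3 into FIFO2 so a single interrupt can report both directions."""
    received = []
    while fifo1:
        addr = fifo1.popleft()
        received.append(host_memory[addr])  # stands in for the DMA read
        fifo3.append(addr)                  # slice empty: record its address
    while fifo3:
        fifo2.append(fifo3.popleft())       # FIFO3 pushed into FIFO2
    return received

payloads = dma_read_round()
```

Merging FIFO3 behind FIFO2 is what lets the design raise one interrupt per round instead of one per direction.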
In an alternative embodiment, performing a write operation on the first group of memory slices and/or a read operation on the second group of memory slices further comprises: performing the read operation on the second group before the write operation on the first group (i.e., interchanging the order of the write and read operations); performing only the write operation on the first group without reading the second group; or performing only the read operation on the second group without writing the first group.
The memory control method of the embodiment of the invention further comprises: the FPGA triggering an interrupt in the upper computer by controlling an external interrupt pin, thereby notifying the upper computer that the current round of data transmission is finished; the upper computer responding to the interrupt and taking the addresses of all memory slices out of the second cache; and, according to those addresses, copying the data in the first group of memory slices into the second memory pool and/or writing the data destined for the FPGA into the second group of memory slices and placing it into the third memory pool.
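The host's interrupt handler then walks the addresses recovered from the second cache. In the sketch below (n = 2, addresses invented), the first n entries belong to the write side and the rest to the read side, as in a full round.

```python
from collections import deque

N = 2  # first-group size in this toy example
fifo2 = deque([0x1000, 0x2000, 0x5000, 0x6000])  # write-side addresses, then read-side
host_memory = {0x1000: b"acq-0", 0x2000: b"acq-1", 0x5000: None, 0x6000: None}
pool2, pool3 = [], []

def on_interrupt(outgoing):
    """ISR sketch: drain FIFO2, copy FPGA data into memory pool 2, then
    refill the drained read-side slices and track them in memory pool 3."""
    addrs = [fifo2.popleft() for _ in range(len(fifo2))]
    for addr in addrs[:N]:                       # first group: FPGA -> host
        pool2.append(host_memory[addr])
    for addr, data in zip(addrs[N:], outgoing):  # second group: host -> FPGA
        host_memory[addr] = data
        pool3.append(addr)

on_interrupt([b"out-0", b"out-1"])
```

The ISR touches only addresses and memcpy-style copies, which is why the driver layer stays simple regardless of which high-speed interface the FPGA uses.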
In addition, the memory control method of the embodiment of the invention further comprises: the FPGA taking the first addresses of the first group or the second group of memory slices in the second memory space out of the first memory pool and placing them into the first cache; the FPGA performing a write operation on the first group of memory slices in the second memory space and/or a read operation on the second group of memory slices in the second memory space through DMA; and, after the write and/or read operations on the second memory space are complete, the upper computer releasing the addresses of all memory slices. In other words, after the read and write operations on the first memory space are complete, they continue on the second memory space; alternatively, a write operation on the first memory space may be followed by a read operation on the second memory space, or a read operation on the first memory space by a write operation on the second memory space.
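The resulting schedule is a ping-pong between the two memory spaces. As a deliberate simplification (ignoring the interrupt handshake that gates each switch):

```python
def space_schedule(rounds):
    """Return which memory space each successive transfer round uses:
    the first memory space on even rounds, the second on odd rounds."""
    return ["first" if r % 2 == 0 else "second" for r in range(rounds)]
```

The point of having two spaces is overlap: while the upper computer is still copying data out of one memory space, the FPGA can already be filling the other.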
Hereinafter, referring to FIG. 5, step S110 of the memory control method (performing a write operation on the first group of memory slices and/or a read operation on the second group of memory slices) is described in detail by way of a specific example.
The fourth step: the FPGA starts writing fixed-length data to the first memory slice address through a DMA controller and a high-speed interface (such as PCIe);
The fifth step: when a memory slice is full, its first address is written into FIFO2, and fixed-length data is written to the next slice address in FIFO1;
The sixth step: the fifth step is repeated until the n-th memory slice is full, and the first address of the n-th memory slice is placed into FIFO2;
The seventh step: the FPGA starts reading the data at the (n+1)-th memory slice address through the DMA controller;
The eighth step: when a memory slice has been read empty, its first address is written into FIFO3, and the data in the next slice in FIFO1 is read;
The ninth step: the eighth step is repeated until the m-th memory slice has been read, and the first address of the m-th memory slice is placed into FIFO3;
The tenth step: the FPGA pushes the data in FIFO3 into FIFO2;
The eleventh step: the FPGA triggers an interrupt in the upper computer by controlling an external interrupt pin, notifying the upper computer that this round of data transmission is finished. After responding to the interrupt, the upper computer takes out the addresses of all memory slices currently in FIFO2, copies the data in the first n memory slices into memory pool 2 according to those addresses, and at the same time writes the data destined for the FPGA into the last m-n memory slices and places it into memory pool 3 for management;
The twelfth step: the FPGA takes the first addresses of the second memory space's slices out of memory pool 1, places them into FIFO1, and repeats the fourth through eleventh steps;
The thirteenth step: the upper computer releases all memory slice addresses and puts them back into memory pool 1, so that read and/or write operations can continue on the first and second memory spaces; and
The fourteenth step: the third through thirteenth steps are repeated.
Optionally, when only a write operation is performed on the first group of memory slices and no read operation on the second group, the seventh through tenth steps are omitted; when only a read operation is performed on the second group and no write operation on the first group, the fourth through sixth steps are omitted; and when the order of the write and read operations is interchanged (i.e., reading first, then writing), the seventh through tenth steps are performed before the fourth through sixth steps.
The invention further discloses a memory control device. As shown in fig. 4, the memory control device comprises an upper computer 400 and an FPGA 416. The upper computer 400 includes: two memory spaces, obtained by dividing a memory-slice area with continuous physical addresses into two equal memory spaces 402 and 404, each memory space comprising a first group of memory slices 406 and a second group of memory slices 408; a first memory pool 410 for managing the first addresses of all the memory slices, the first group of memory slices 406 and a second memory pool 412 being used for storing data sent by the FPGA to the upper computer, and the second group of memory slices 408 and a third memory pool 414 being used for storing data sent to the FPGA. The FPGA 416 includes: three caches 418 for transferring memory-slice addresses during data transmission; an obtaining module 420 for taking the first address of the first group or the second group of memory slices in the first memory space out of the first memory pool and placing it into the first cache; and a read/write module 422 for performing a write operation on the first group of memory slices and/or a read operation on the second group of memory slices through DMA.
In addition, the memory control device further includes other modules; since the memory control method corresponds to the memory control device, those modules are not described again here to avoid redundancy.
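The composition of the device in fig. 4 can be sketched as plain data structures. The class and field names below are illustrative assumptions chosen to mirror the numbered elements of the figure, not the patent's actual implementation:

```python
from collections import deque
from dataclasses import dataclass, field
from typing import Deque, List

@dataclass
class MemorySpace:
    """One of the two equal memory spaces (402 / 404 in fig. 4)."""
    first_group: List[int]    # first addresses of the n write-target slices (406)
    second_group: List[int]   # first addresses of the m-n read-source slices (408)

@dataclass
class UpperComputer:
    """Host side (400): two memory spaces plus three memory pools."""
    spaces: List[MemorySpace]
    pool1: Deque[int] = field(default_factory=deque)  # first addresses of all slices (410)
    pool2: list = field(default_factory=list)         # data received from the FPGA (412)
    pool3: list = field(default_factory=list)         # data queued for the FPGA (414)

@dataclass
class Fpga:
    """FPGA side (416): three caches plus obtaining and read/write modules."""
    fifo1: Deque[int] = field(default_factory=deque)  # addresses fetched from pool 1
    fifo2: Deque[int] = field(default_factory=deque)  # addresses of written slices
    fifo3: Deque[int] = field(default_factory=deque)  # addresses of read slices

    def obtain(self, host: UpperComputer, count: int) -> None:
        """Obtaining module (420): move slice addresses from pool 1 into FIFO 1."""
        for _ in range(count):
            self.fifo1.append(host.pool1.popleft())

# Build a device with 8 slices per memory space (illustrative).
space0 = MemorySpace(first_group=[0, 1, 2, 3], second_group=[4, 5, 6, 7])
space1 = MemorySpace(first_group=[8, 9, 10, 11], second_group=[12, 13, 14, 15])
host = UpperComputer(spaces=[space0, space1], pool1=deque(range(16)))
fpga = Fpga()
fpga.obtain(host, 8)   # fetch the first memory space's slice addresses
```

The read/write module itself is omitted here, since on real hardware it would wrap a DMA engine rather than Python assignments.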
In the embodiment of the invention, assume that the size of a single memory slice is 512 KB, m = 128, and n = 64 (the upper computer and the FPGA exchange equal amounts of data in each direction), so the depth of each of the three FIFOs is 128. With a bus clock frequency of 62.5 MHz and a 32-bit data bus, the aggregate throughput is 250 MB/s (125 MB/s each for upload and download). The interval between interrupts that the upper computer must service is therefore 512 KB × 64 / 250 MB/s ≈ 131.07 ms. By contrast, in the prior art, if DMA transfer is used directly in the FPGA with a single 512 KB memory block for data interaction, the interrupt interval is 512 KB / 250 MB/s ≈ 2.05 ms. The new processing method thus reduces the interrupt frequency by a factor of 64 and lowers the latency of bidirectional data exchange.
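The arithmetic above can be checked directly. Note that the quoted figures come out only with decimal units (1 KB = 1000 bytes, 1 MB = 10^6 bytes), an assumption made here to reproduce the text's 131.07 ms and 2.05 ms:

```python
SLICE_KB = 512     # size of one memory slice, in KB (decimal: 1 KB = 1000 B)
N_SLICES = 64      # slices handled per interrupt in the proposed scheme
RATE_MB_S = 250    # aggregate throughput, MB/s (decimal: 1 MB = 10**6 B)

# Sanity check: a 32-bit (4-byte) bus at 62.5 MHz gives 250 MB/s aggregate.
bus_bytes_per_s = 62.5e6 * 4

# Proposed scheme: one interrupt per batch of 64 full slices.
interval_new_ms = SLICE_KB * 1000 * N_SLICES / (RATE_MB_S * 10**6) * 1000

# Prior art: one interrupt per single 512 KB DMA buffer.
interval_old_ms = SLICE_KB * 1000 / (RATE_MB_S * 10**6) * 1000
```

`interval_new_ms` evaluates to about 131.07 and `interval_old_ms` to about 2.05, a ratio of exactly 64, matching the claimed 64-fold reduction in interrupt frequency.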
The memory control method simplifies driver complexity for high-speed communication between an embedded operating system and an FPGA. By designing caches at the FPGA end, data are transferred directly by memory address, which increases the flexibility of the upper computer's memory operations, enhances system stability, shortens the development cycle, and suits the various high-speed interfaces available at the FPGA end.
Compared with the prior art, the embodiment of the invention can realize at least one of the following beneficial effects:
1. two memory spaces are applied for and three memory pools are opened up, reducing the interrupt frequency at the driver layer of the upper computer; bidirectional communication is realized by three small FPGA caches working with the upper computer's memory slices, so resource consumption and control logic remain modest;
2. the driver complexity of high-speed communication between the embedded operating system and the FPGA is simplified; and
3. with caches designed at the FPGA end, data are transferred directly by memory address, increasing the flexibility of the upper computer's memory operations, enhancing system stability, and shortening the development cycle; the scheme suits the various high-speed interfaces at the FPGA end.
Those skilled in the art will appreciate that all or part of the flow of the method in the above embodiments may be implemented by a computer program instructing related hardware, the program being stored in a computer-readable storage medium. The computer-readable storage medium may be a magnetic disk, an optical disk, a read-only memory, or a random access memory.
The above description covers only preferred embodiments of the present invention, but the scope of the invention is not limited thereto; any change or substitution that can readily be conceived by those skilled in the art within the technical scope disclosed herein falls within the scope of the present invention.
Claims (10)
1. A memory control method, comprising:
applying for a memory-slice area with continuous physical addresses in an upper computer and dividing it into two equal memory spaces, wherein each memory space comprises a first group of memory slices and a second group of memory slices;
opening up three memories to form three memory pools, wherein a first memory pool is used for managing the first addresses of all the memory slices, the first group of memory slices and a second memory pool are used for storing data sent by an FPGA to the upper computer, and the second group of memory slices and a third memory pool are used for storing data sent by the upper computer to the FPGA;
opening three caches in the FPGA for transmitting memory chip addresses during data transmission;
the FPGA takes out the first address of the first group of memory chips or the second group of memory chips in a first memory space from the first memory pool and puts the first address into a first cache; and
the FPGA performs a write operation on the first group of memory slices and/or a read operation on the second group of memory slices through DMA.
2. The memory control method of claim 1, wherein the three caches further comprise a second cache and a third cache, wherein,
the first cache is used for storing the first address of the memory slice in the first memory pool;
the second cache is used for storing the first address of the first group of memory slices; and
the third cache is used for storing the first address of the second group of memory slices.
3. The memory control method according to claim 1, wherein the first group of memory slices comprises n memory slices, and the second group of memory slices comprises m-n memory slices, wherein m is greater than n and m and n are positive integers.
4. The memory control method according to claim 3, wherein the writing operation of the FPGA on the first group of memory slices by DMA comprises:
the FPGA takes out the first address of the first group of memory chips from the first memory pool and puts the first address into the first cache;
the FPGA writes fixed-length data into a first memory slice in the first group of memory slices through the DMA;
when the first memory slice is full, writing the first address of the first memory slice into a second cache while continuing to write data into a second memory slice in the first group of memory slices; and
continuing to write data into the remaining memory slices in the first group of memory slices until the n-th memory slice is full, and placing the first address of the n-th memory slice into the second cache.
5. The memory control method according to claim 1 or 4, wherein the reading operation of the second group of memory chips by the FPGA through DMA comprises:
the FPGA starts, through the DMA, to read data in the (n+1)-th memory slice in the second group of memory slices;
after the (n+1)-th memory slice is emptied, placing the first address of the (n+1)-th memory slice into a third cache while reading data in the (n+2)-th memory slice; and
continuing to read data in the remaining memory slices in the second group of memory slices until the m-th memory slice has been read, and placing the first address of the m-th memory slice into the third cache.
6. The memory control method according to claim 5, wherein the FPGA pushes data in the third cache into the second cache.
7. The memory control method according to claim 6, wherein performing a write operation on the first set of memory slices and/or performing a read operation on the second set of memory slices further comprises:
after a read operation is performed on the second set of memory slices, a write operation is performed on the first set of memory slices.
8. The memory control method according to claim 7, further comprising:
the FPGA triggers an interrupt to the upper computer by controlling an external interrupt pin, notifying the upper computer that the current data transmission is finished;
the upper computer responds to the interrupt and takes out the addresses of all the memory slices in the second cache; and
copying the data in the first group of memory slices into the second memory pool according to the addresses of all the memory slices in the second cache, and/or writing the data to be written into the FPGA into the second group of memory slices and placing them into the third memory pool.
9. The memory control method according to claim 8, further comprising:
the FPGA takes out the first address of the first group of memory slices or the second group of memory slices in a second memory space from the first memory pool and puts the first address into the first cache;
the FPGA performs, through DMA, a write operation on the first group of memory slices in the second memory space and/or a read operation on the second group of memory slices in the second memory space; and
after the write operation and/or the read operation on the second memory space is completed, the upper computer releases the addresses of all the memory slices.
10. A memory control device is characterized by comprising an upper computer and an FPGA,
the upper computer comprises:
two memory spaces, wherein a memory-slice area with continuous physical addresses is divided into two equal memory spaces, each comprising a first group of memory slices and a second group of memory slices;
a first memory pool for managing the first addresses of all the memory slices, wherein the first group of memory slices and a second memory pool are used for storing data sent by the FPGA to the upper computer, and the second group of memory slices and a third memory pool are used for storing data sent to the FPGA;
the FPGA comprises:
three caches for transferring memory-slice addresses during data transmission;
an obtaining module, configured to take out the first address of the first group of memory slices or the second group of memory slices in a first memory space from the first memory pool and place the first address into a first cache; and
a read-write module, configured to perform a write operation on the first group of memory slices and/or a read operation on the second group of memory slices through DMA.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010153078.2A CN113360420A (en) | 2020-03-06 | 2020-03-06 | Memory control method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113360420A | 2021-09-07 |
Family
ID=77524208
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010153078.2A Pending CN113360420A (en) | 2020-03-06 | 2020-03-06 | Memory control method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113360420A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140281056A1 (en) * | 2013-03-15 | 2014-09-18 | Vmware, Inc. | Latency reduction for direct memory access operations involving address translation |
CN104169891A (en) * | 2013-10-29 | 2014-11-26 | 华为技术有限公司 | Method and device for accessing memory |
CN104281539A (en) * | 2013-07-10 | 2015-01-14 | 北京旋极信息技术股份有限公司 | Cache managing method and device |
CN106980556A (en) * | 2016-01-19 | 2017-07-25 | 中兴通讯股份有限公司 | A kind of method and device of data backup |
WO2017157110A1 (en) * | 2016-03-18 | 2017-09-21 | 深圳市中兴微电子技术有限公司 | Method of controlling high-speed access to double data rate synchronous dynamic random access memory, and device |
Non-Patent Citations (2)
Title |
---|
ABOLI AUDUMBAR KHEDKAR et al.: "High speed FPGA-based data acquisition system", Microprocessors and Microsystems, vol. 49 |
WANG, Shu: "High-speed PCIe transmission FPGA design and KMDF driver implementation", Master's Thesis Electronic Journal (硕士电子期刊), vol. 2019, no. 04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8681552B2 (en) | System and method for accessing and storing interleaved data | |
KR100673013B1 (en) | Memory controller and data processing system with the same | |
US5574944A (en) | System for accessing distributed memory by breaking each accepted access request into series of instructions by using sets of parameters defined as logical channel context | |
US5408627A (en) | Configurable multiport memory interface | |
EP0164550B1 (en) | I/o controller for multiple disparate serial memories with a cache | |
US20070076503A1 (en) | Circuitry and methods for efficient FIFO memory | |
US4811280A (en) | Dual mode disk controller | |
US20040193782A1 (en) | Nonvolatile intelligent flash cache memory | |
CN110334035B (en) | Control unit of data storage system and method for updating logical-to-physical mapping table | |
CN102214482A (en) | High-speed high-capacity solid electronic recorder | |
CN110543433B (en) | Data migration method and device of hybrid memory | |
US20240021239A1 (en) | Hardware Acceleration System for Data Processing, and Chip | |
EP0745941A2 (en) | A system and method for providing a flexible memory hierarchy | |
US7069409B2 (en) | System for addressing a data storage unit used in a computer | |
CN114253461A (en) | Mixed channel memory device | |
CN114253462A (en) | Method for providing mixed channel memory device | |
CN113360420A (en) | Memory control method and device | |
US7581072B2 (en) | Method and device for data buffering | |
KR100438736B1 (en) | Memory control apparatus of performing data writing on address line | |
US20080282054A1 (en) | Semiconductor device having memory access mechanism with address-translating function | |
CN213338708U (en) | Control unit and storage device | |
JP2005267148A (en) | Memory controller | |
CN112256203B (en) | Writing method, device, equipment, medium and system of FLASH memory | |
CN113495850B (en) | Method, apparatus and computer readable storage medium for managing garbage collection program | |
KR100487199B1 (en) | Apparatus and method for data transmission in dma |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||