WO2017008563A1 - Data processing method and device, and storage medium - Google Patents

Data processing method and device, and storage medium

Info

Publication number
WO2017008563A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
memory
cache
address information
feature parameter
Prior art date
Application number
PCT/CN2016/081615
Other languages
English (en)
Chinese (zh)
Inventor
陆亚军
廖智勇
刘衡祁
王志忠
Original Assignee
深圳市中兴微电子技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市中兴微电子技术有限公司
Publication of WO2017008563A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/06Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication

Definitions

  • the present invention relates to data reading and writing technology, and in particular, to a data processing method, a device thereof, and a storage medium.
  • In order to reduce cost, a network transmission chip generally uses Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM) to store data, so as to meet the storage bandwidth and cache capacity requirements.
  • However, in the related art, the read/write efficiency of the DDR SDRAM and the cache utilization rate cannot be effectively improved; therefore, a method is needed to solve the above problem.
  • the embodiments of the present invention provide a data processing method, a device thereof, and a storage medium, which can effectively improve the read/write efficiency of the memory and the cache utilization rate.
  • An embodiment of the present invention provides a data processing method, where the method includes:
  • the first cache feature parameter corresponding to the memory is adjusted according to the read address information, and the second cache feature parameter corresponding to the bank is adjusted.
  • the method further includes:
  • the first cache feature parameter and the second cache feature parameter both represent a cache usage amount; correspondingly,
  • the adjusting the first cache feature parameter corresponding to the memory according to the write address information, and adjusting the second cache feature parameter corresponding to the storage body includes:
  • the method further includes:
  • the first cache feature parameter and the second cache feature parameter both represent a cache usage amount; correspondingly,
  • the adjusting the first cache feature parameter corresponding to the memory according to the read address information, and adjusting the second cache feature parameter corresponding to the storage body includes:
  • the embodiment of the invention further provides a data processing device, the data processing device comprising:
  • a first acquiring unit configured to acquire a first cache feature parameter corresponding to the at least one memory
  • a second acquiring unit configured to acquire a second cache feature parameter corresponding to the at least one bank; the at least one bank is disposed in the at least one memory;
  • the adjusting unit is configured to: when it is determined that the first data is written into the at least one bank in the at least one memory, adjust the first cache feature parameter corresponding to the memory according to the write address information, and adjust the second cache feature parameter corresponding to the bank;
  • the data processing device further includes:
  • a first determining unit configured to determine a first target memory according to the first cache feature parameter corresponding to the at least one memory
  • a second determining unit configured to determine a first target bank according to a second cache feature parameter corresponding to the at least one bank; the first target bank is disposed in the first target memory;
  • a third determining unit configured to determine the write address information according to the address information corresponding to the first target memory and the first target bank, so as to store the first data in the first target bank in the first target memory indicated by the write address information.
  • the first cache feature parameter and the second cache feature parameter both represent a cache usage amount; correspondingly,
  • the adjusting unit is further configured to: increase, according to the first target memory indicated by the write address information, the cache usage amount corresponding to the first target memory; and increase, according to the first target bank indicated by the write address information, the cache usage amount corresponding to the first target bank.
  • the data processing device further includes:
  • a receiving unit configured to receive the read address information
  • a processing unit configured to acquire the second target memory indicated by the read address information and the second target bank in the second target memory.
  • the first cache feature parameter and the second cache feature parameter both represent a cache usage amount; correspondingly,
  • the adjusting unit is further configured to: reduce, according to the second target memory indicated by the read address information, the cache usage amount corresponding to the second target memory; and reduce, according to the second target bank indicated by the read address information, the cache usage amount corresponding to the second target bank.
  • the data processing method, device, and storage medium according to the embodiments of the present invention are capable of adjusting the first cache feature parameter corresponding to the memory and the second cache feature parameter corresponding to the bank according to the storage change of the data, thereby making the most even use of each set of memory, as well as of the banks in each set of memory, which improves cache utilization while improving memory read/write efficiency.
  • FIG. 1 is a schematic flowchart 1 of an implementation process of a data processing method according to an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of a step of determining write address information according to an embodiment of the present invention
  • FIG. 3 is a schematic structural diagram 1 of a data processing apparatus according to an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of a specific implementation process of a data processing method according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a connection between a data processing device and other devices in a specific application according to an embodiment of the present invention
  • FIG. 6 is a schematic diagram of a probability curve according to an embodiment of the present invention.
  • FIG. 7 is a second schematic structural diagram of a data processing apparatus according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram 3 of a data processing apparatus according to an embodiment of the present invention.
  • In order to match data traffic, electronic devices usually use multiple sets of DDR SDRAM chips; for example, at 200 Gbps of data traffic, an electronic device uses eight sets of DDR SDRAM chips. Here, due to the characteristics of DDR SDRAM chips, pre-charging and activation are required when switching rows within the same bank, so the read and write efficiency is reduced. To solve the problem of reduced read/write efficiency caused by pre-charging and activation, the existing method often uses inter-bank polling to perform the pre-charging and activation operations in advance on the bank of the command to be executed, thereby hiding the time taken by the pre-charging and activation operations.
  • In addition, a splicing method is mentioned: a message whose tail slice does not reach the fixed slice length is spliced with the first slice of the next message to ensure that the slice length is fixed, and the spliced slices are then processed further, so as to improve the read/write efficiency and cache utilization of the DDR SDRAM.
  • Although the above methods of inter-bank polling and splicing improve the read/write efficiency, they also have significant drawbacks.
  • Although inter-bank polling can hide the time of the pre-charging and activation operations, if the electronic device uses multiple sets of DDR SDRAM chips, the problem of improving cache utilization cannot be solved by inter-bank polling alone. For example, when an electronic device uses two sets of DDR SDRAM chips and spreads traffic over them evenly by inter-bank polling, messages nevertheless carry priorities and high-priority messages are scheduled first. Assuming that the high-priority messages are mainly stored in the DDR SDRAM chip numbered 0, after the high-priority messages are read out, the available buffer of the DDR SDRAM chip numbered 0 increases. If the buffer address continues to be determined only by polling between banks, there is no switching between DDR SDRAM chips, that is, the inter-bank polling does not return to the banks in the DDR SDRAM chip numbered 0, so the DDR SDRAM chip numbered 0 is not fully utilized and its cache utilization decreases.
  • Moreover, a splicing operation is required when a message is written, and the message can only be processed after the splicing is completed, so the efficiency of message writing is reduced. Specifically, if the processing period of a message that does not require splicing is T0 and the processing period of the splicing operation is T1, the message writing time after introducing the splicing operation becomes T0+T1; the longer time affects the line-rate processing of the message. Further, after the message is read out it needs to be de-spliced, and this part of the processing logic also takes time. Therefore, the splicing method prolongs the processing period of the entire message, that is, the read and write processing period of the message is prolonged.
  • an embodiment of the present invention provides a data processing method, an apparatus, and a storage medium.
  • the basic idea of the embodiment of the present invention is: acquiring a first cache feature parameter corresponding to at least one memory; acquiring a second cache feature parameter corresponding to at least one bank, the at least one bank being disposed in the at least one memory; when it is determined that first data is written into the at least one bank in the at least one memory, adjusting the first cache feature parameter corresponding to the memory and the second cache feature parameter corresponding to the bank according to the write address information; and when it is determined that second data is read out from the at least one bank in the at least one memory, adjusting the first cache feature parameter corresponding to the memory and the second cache feature parameter corresponding to the bank according to the read address information.
  • FIG. 1 is a schematic flowchart of an implementation of a data processing method according to an embodiment of the present invention; applied to a data processing device; as shown in FIG. 1, the method includes:
  • Step 101 Acquire a first cache feature parameter corresponding to at least one memory.
  • Step 102 Acquire a second cache feature parameter corresponding to at least one bank; the at least one bank is disposed in the at least one memory;
  • Step 103 When it is determined that the first data is written into the at least one bank in the at least one memory, adjust the first cache feature parameter corresponding to the memory according to the write address information, and adjust the second cache feature parameter corresponding to the bank;
  • Step 104 When it is determined that the second data is read out from the at least one bank in the at least one memory, adjust the first cache feature parameter corresponding to the memory according to the read address information, and adjust the second cache feature parameter corresponding to the bank.
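  • As a non-limiting illustration of steps 101 to 104, the bookkeeping can be sketched as follows. This is a minimal sketch in C under assumed sizes; all names (N_MEM, N_BANK, mem_usage, bank_usage, addr_info_t, on_write, on_read) are illustrative and do not appear in the patent, and the write-increases / read-releases direction follows the detailed embodiment described below.

```c
#include <stdint.h>

#define N_MEM  8  /* number of memory groups, e.g. DDR SDRAM chips (assumed) */
#define N_BANK 8  /* number of banks per memory group (assumed)              */

/* write/read address information: which memory group and which bank */
typedef struct {
    uint32_t mem;   /* index of the target memory (DDR SDRAM number) */
    uint32_t bank;  /* index of the target bank inside that memory   */
} addr_info_t;

/* first cache feature parameter: cache usage per memory group (step 101) */
static uint32_t mem_usage[N_MEM];
/* second cache feature parameter: cache usage per bank (step 102) */
static uint32_t bank_usage[N_MEM][N_BANK];

/* Step 103: first data written, so usage of the addressed memory and bank grows. */
void on_write(addr_info_t w, uint32_t amount)
{
    mem_usage[w.mem]          += amount;
    bank_usage[w.mem][w.bank] += amount;
}

/* Step 104: second data read out, so the corresponding cache space is released. */
void on_read(addr_info_t r, uint32_t amount)
{
    mem_usage[r.mem]          -= amount;
    bank_usage[r.mem][r.bank] -= amount;
}
```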
  • the memory may be specifically DDR, or DDR SDRAM.
  • Here, the electronic device is provided with or connected to a data processing device, and the data processing device is provided with or connected to N groups of memories, each group of memories being provided with M banks (Bank), where N and M are positive integers greater than or equal to 1. For example, the data processing apparatus is connected to N groups of DDR SDRAM, and each group of DDR SDRAM includes M banks; at this time, the data processing apparatus acquires the first cache feature parameter of each group of DDR SDRAM, N in total.
  • The data processing device also acquires the second cache feature parameters of the M banks corresponding to each group of DDR SDRAM, M×N in total. Thus, when the first data is to be stored in at least one memory, the data processing device determines the write address information corresponding to the first data according to the N first cache feature parameters and the M×N second cache feature parameters that it has acquired; specifically, as shown in FIG. 2, the step of determining the write address information corresponding to the pre-written data includes:
  • Step 201 Determine a first target memory according to the first cache feature parameter corresponding to the at least one memory
  • Step 202 Determine a first target bank according to a second cache feature parameter corresponding to the at least one bank; the first target bank is disposed in the first target memory;
  • Step 203 Determine the write address information according to the address information corresponding to the first target memory and the first target bank, so as to store the first data in the first target bank in the first target memory indicated by the write address information.
  • Here, the first target bank is a bank belonging to the first target memory, that is, the first target bank is disposed in the first target memory. Further, to ensure that the first target bank is indeed a bank in the first target memory, that is, to prevent the data processing device from selecting a bank according to the second cache feature parameter of a bank that does not belong to the first target memory, step 202 may also specifically be: determining the first target bank according to the second cache feature parameters corresponding to the banks in the first target memory, and outputting the first target bank.
  • Then, the write address information is determined according to the determined first target bank and first target memory, so that the first data is written into the first target bank of the first target memory corresponding to the write address information.
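  • As a simplified sketch of steps 201 to 203, and assuming the counters and types from the earlier sketch, the first target memory could be taken as the least-used group and the first target bank as the least-used bank inside it; the pair of numbers then forms the write address information. This is only one possible selection rule, and the name pick_write_address is an assumption; the embodiment described later refines the selection with a probability curve.

```c
/* Sketch of steps 201-203: least-used memory, then least-used bank inside it. */
addr_info_t pick_write_address(void)
{
    addr_info_t w = { .mem = 0, .bank = 0 };

    /* Step 201: choose the first target memory from the N first cache feature parameters. */
    for (uint32_t m = 1; m < N_MEM; m++)
        if (mem_usage[m] < mem_usage[w.mem])
            w.mem = m;

    /* Step 202: choose the first target bank, restricted to banks of the first target memory. */
    for (uint32_t b = 1; b < N_BANK; b++)
        if (bank_usage[w.mem][b] < bank_usage[w.mem][w.bank])
            w.bank = b;

    /* Step 203: the (memory number, bank number) pair is the write address information. */
    return w;
}
```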
  • Correspondingly, the data processing device also receives the read address information; further, after the data processing device receives the read address information, it parses the read address information and acquires, according to the parsing result, the second target memory indicated by the read address information and the second target bank in the second target memory.
  • Obviously, when the first data is written into the first target bank in the first target memory, the first cache feature parameter corresponding to the first target memory and the second cache feature parameter corresponding to the first target bank change; similarly, when the second data is read out from the second target bank in the second target memory, the first cache feature parameter corresponding to the second target memory and the second cache feature parameter corresponding to the second target bank change. Therefore, to enable the data processing device to acquire accurate cache feature parameters of the memories and the banks, so that new write address information matching the current cache state can be determined for newly written data according to the latest cache feature parameters, the embodiment of the present invention also adjusts the cache feature parameters of the memory and the bank.
  • the first cache feature parameter and the second cache feature parameter both represent a cache usage amount; correspondingly,
  • the adjusting the first cache feature parameter corresponding to the memory according to the write address information, and adjusting the second cache feature parameter corresponding to the storage body includes:
  • the adjusting the first cache feature parameter corresponding to the memory according to the read address information, and adjusting the second cache feature parameter corresponding to the storage body includes:
  • Specifically, the cache usage corresponding to the first target memory may be increased according to the amount of the first data written, and the cache usage corresponding to the first target bank may be increased accordingly; further, the cache usage corresponding to the second target memory may be reduced according to the amount of the second data read out, and the cache usage corresponding to the second target bank may be reduced accordingly.
  • In this way, the data processing method in the embodiment of the present invention can adjust the first cache feature parameter corresponding to the memory and the second cache feature parameter corresponding to the bank according to the storage change of the data, thereby making the most even use of each group of memory and of the banks in each group of memory, which improves the utilization of the memory; moreover, when the memory is DDR SDRAM, the embodiment of the present invention can hide the pre-charging time and activation time of rows within the same bank of the DDR SDRAM, thereby improving the read/write efficiency of the DDR SDRAM.
  • the embodiment of the present invention further provides a data processing device.
  • the data processing device includes:
  • the first obtaining unit 31 is configured to acquire a first cache feature parameter corresponding to the at least one memory
  • the second obtaining unit 32 is configured to acquire a second cache feature parameter corresponding to the at least one bank; the at least one bank is disposed in the at least one memory;
  • the adjusting unit 33 is configured to: when it is determined that the first data is written into the at least one bank in the at least one memory, adjust the first cache feature parameter corresponding to the memory according to the write address information, and adjust the second cache feature parameter corresponding to the bank;
  • the data processing apparatus further includes:
  • the first determining unit 34 is configured to determine, according to the first cache feature parameter corresponding to the at least one memory, a first target memory;
  • the second determining unit 35 is configured to determine a first target bank according to the second cache feature parameter corresponding to the at least one bank; the first target bank is disposed in the first target memory;
  • the third determining unit 36 is configured to determine the write address information according to the address information corresponding to the first target memory and the first target bank, so as to store the first data in the first target bank in the first target memory indicated by the write address information.
  • the first cache feature parameter and the second cache feature parameter both represent a cache usage amount; correspondingly,
  • the adjusting unit 33 is further configured to: increase, according to the first target memory indicated by the write address information, the cache usage amount corresponding to the first target memory; and increase, according to the first target bank indicated by the write address information, the cache usage amount corresponding to the first target bank.
  • the data processing apparatus further includes:
  • the receiving unit 37 is configured to receive the read address information
  • the processing unit 38 is configured to acquire the second target memory indicated by the read address information and the second target bank in the second target memory.
  • the first cache feature parameter and the second cache feature parameter both represent a cache usage amount; correspondingly,
  • the adjusting unit 33 is further configured to: reduce, according to the second target memory indicated by the read address information, the cache usage amount corresponding to the second target memory; and reduce, according to the second target bank indicated by the read address information, the cache usage amount corresponding to the second target bank.
  • the first acquiring unit 31, the second acquiring unit 32, the adjusting unit 33, the first determining unit 34, the second determining unit 35, the third determining unit 36, the receiving unit 37, and the processing unit 38 may each be implemented by a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or a Field Programmable Gate Array (FPGA); the CPU, DSP, or FPGA may be built into the data processing device.
  • the embodiment of the invention further provides a computer readable storage medium, the storage medium comprising a set of instructions for executing the data processing method according to the first embodiment.
  • the foregoing device embodiments are merely illustrative.
  • the division of the units is only a logical functional division, and there may be other ways of division in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the manner in which the data processing apparatus is divided is different from that in the first embodiment.
  • In this embodiment, the memory is specifically a DDR SDRAM, and the data processing method runs in a data processing device; as shown in FIG. 5, the data processing device 51 is connected to the DDR controller 52, and the DDR controller 52 is coupled to the DDR SDRAM group 53; that is, the data processing device 51 is coupled to the DDR SDRAM group 53 via the DDR controller 52. The DDR SDRAM group includes one or more groups of DDR SDRAM, and each group of DDR SDRAM includes one or more banks.
  • In this embodiment, the read and write operations on the message data are implemented through the DDR SDRAM group, and all message data follows the processing flow of being written first and then read out; specifically, as shown in FIG. 4, the method includes:
  • Step 401 The DDR controller receives the first packet, and slices the received first packet according to requirements, and divides the first packet into one or more first fragments of the same length.
  • Step 402 The DDR controller sends the one or more first fragments to the data processing device, and triggers the data processing device to count the cache usage of each group of DDR SDRAM in the DDR SDRAM group and the cache usage of each bank in each group of DDR SDRAM;
  • Step 403 The data processing apparatus determines the first target DDR SDRAM, and the first target bank in the first target DDR SDRAM, according to the cache usage of each group of DDR SDRAM in the DDR SDRAM group and the cache usage of each bank in each group of DDR SDRAM;
  • Specifically, the data processing apparatus determines a probability curve corresponding to the cache usage according to a preset rule, and then uses the cache usage of each group of DDR SDRAM in the DDR SDRAM group, the cache usage of each bank in each group of DDR SDRAM, and the probability curve corresponding to the cache usage to determine the first target DDR SDRAM, and the first target bank in the first target DDR SDRAM.
  • FIG. 6 is a schematic diagram of a probability curve according to an embodiment of the present invention. As shown in FIG. 6, the greater the cache usage, the smaller the probability that the group of DDR SDRAM is selected; when the cache usage exceeds a certain maximum threshold, such as a maximum value (maxth), the group of DDR SDRAM will not be selected.
  • Specifically, the data processing apparatus first determines the first target DDR SDRAM according to the cache usage of each group of DDR SDRAM in the DDR SDRAM group; further, the data processing apparatus determines the first target bank according to the cache usage of each bank in the first target DDR SDRAM. At this time, the first target bank is a bank in the first target DDR SDRAM.
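  • One possible reading of the probability curve in FIG. 6 is sketched below: the selection probability falls as the cache usage grows, and drops to zero once the usage reaches maxth. The linear shape, the value of MAXTH, and the function names are assumptions; the patent only fixes the monotonic behaviour and the maxth cut-off.

```c
#include <stdint.h>

#define MAXTH 1024u  /* illustrative maximum threshold (maxth) */

/* Map a cache usage value to a selection probability in [0, 256]. */
uint32_t select_probability(uint32_t usage)
{
    if (usage >= MAXTH)
        return 0;                         /* beyond maxth: never selected */
    return 256u - (usage * 256u) / MAXTH; /* larger usage, smaller probability */
}

/* A group (or bank) stays a candidate if a random draw in [0, 255] falls below its probability. */
int is_candidate(uint32_t usage, uint32_t rnd_0_255)
{
    return rnd_0_255 < select_probability(usage);
}
```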
  • Step 404 The data processing apparatus determines, according to the first target DDR SDRAM and address information corresponding to the first target bank, write address information.
  • Step 405 The data processing apparatus sends the write address information to the DDR controller, and the one or more first fragments of the first packet are written, by the DDR controller, into the first target bank of the first target DDR SDRAM corresponding to the write address information;
  • Step 406 After the first packet is written into the DDR SDRAM group according to the write address information, the DDR controller triggers the data processing device to adjust, according to the determined first target DDR SDRAM and first target bank, the cache usage corresponding to the first target DDR SDRAM and the cache usage corresponding to the first target bank;
  • Here, the first target DDR SDRAM determined by the data processing device may specifically be an identification number corresponding to that DDR SDRAM, called the DDR SDRAM number; similarly, the determined first target bank may specifically be an identification number corresponding to that bank, called the bank number. Thus, the data processing device identifies the first target DDR SDRAM by the DDR SDRAM number, and identifies the first target bank within the first target DDR SDRAM by the bank number.
  • Step 407 When the first packet needs to be read, the data processing device receives the read address information sent by the DDR controller, parses the read address information, and obtains, according to the parsing result, the second target DDR SDRAM corresponding to the read address information and the second target bank in the second target DDR SDRAM;
  • Here, since the packet to be read is also the first packet, and the first packet was written into the DDR SDRAM group according to the write address information, the read address information is the same as the write address information; accordingly, the second target DDR SDRAM is the same as the first target DDR SDRAM, and the second target bank is the same as the first target bank.
  • Step 408 The DDR controller reads each first fragment from the DDR SDRAM group according to the read address information, combines the first fragments into a complete first packet, and outputs the first packet;
  • Specifically, the data processing device sends the second target DDR SDRAM and the second target bank that it has determined to the DDR controller, so that the DDR controller reads the first fragments from the second target bank of the second target DDR SDRAM, combines the first fragments into a complete first packet, and outputs the first packet.
  • Step 409 After determining that the first packet has been read out, the DDR controller triggers the data processing device to adjust, according to the second target DDR SDRAM corresponding to the read address information and the second target bank in the second target DDR SDRAM, the cache usage corresponding to the second target DDR SDRAM and the cache usage corresponding to the second target bank, so as to release the cache space corresponding to the read address information.
  • In this way, the data processing device can obtain the latest cache usage of each group of DDR SDRAM in the DDR SDRAM group and the latest cache usage of each bank in each group of DDR SDRAM, and, based on the latest cache usage, determine new write address information that matches the current cache usage, thus making the most even use of each group of DDR SDRAM and of the banks in each group of DDR SDRAM, which improves the cache utilization of the DDR SDRAM group and improves the read/write efficiency of the DDR SDRAM group.
  • the embodiment of the present invention further provides a data processing apparatus.
  • the data processing apparatus includes:
  • the packet processing module 71 is configured to receive one or more first fragments sent by the DDR controller, and is further configured to send the one or more first fragments to the DDR processing module and the bank processing module;
  • the packet processing module 71 is further configured to acquire the read address information.
  • the DDR processing module 72 is configured to receive the one or more first fragments sent by the packet processing module 71, to trigger itself to track and count the cache usage of each group of DDR SDRAM in the DDR SDRAM group, to determine and output, according to a preset rule, the first target DDR SDRAM corresponding to the one or more first fragments, and then to send the determined first target DDR SDRAM to the address module;
  • Specifically, the DDR processing module 72 includes a DDR statistics sub-module 721, a DDR selection probability sub-module 722, and a DDR comprehensive judgment sub-module 723; the number of DDR statistics sub-modules and DDR selection probability sub-modules corresponds to the number of groups of DDR SDRAM. Specifically,
  • the DDR statistics sub-module 721 is configured to track and count the cache usage of each group of DDR SDRAM in the DDR SDRAM group; here, in a preferred embodiment, there are as many DDR statistics sub-modules as there are groups of DDR SDRAM: each DDR statistics sub-module 721 is configured to track and count the cache usage of one group of DDR SDRAM in the DDR SDRAM group, so that the cache usage of all DDR SDRAM in the DDR SDRAM group is tracked and counted through the plurality of DDR statistics sub-modules 721.
  • the DDR statistics sub-module is further configured to: when the first packet is written into the DDR SDRAM group, adjust the cache usage of the first target DDR SDRAM corresponding to the write address information, specifically to increase the cache usage corresponding to the first target DDR SDRAM; and, when the second packet is read out of the DDR SDRAM group, adjust the cache usage of the second target DDR SDRAM corresponding to the read address information, specifically to reduce the cache usage of the second target DDR SDRAM corresponding to the read address information.
  • the DDR selection probability sub-module 722 is configured to select a probability curve and control logic according to the cache usage of each group of DDR SDRAM in the DDR SDRAM group; to determine, according to the cache usage of each group of DDR SDRAM, the probability curve and the control logic, a probability value corresponding to each group of DDR SDRAM; and to determine, according to the probability value corresponding to each group of DDR SDRAM and the control logic, whether the group of DDR SDRAM corresponding to that probability value is selected. Specifically, the larger the cache usage of a group of DDR SDRAM, the smaller the probability that it is selected. In a preferred embodiment, the DDR selection probability sub-modules 722 correspond one-to-one with the DDR statistics sub-modules 721, that is, each pair of a DDR selection probability sub-module 722 and a DDR statistics sub-module 721 corresponds to one group of DDR SDRAM, and each group of DDR SDRAM corresponds to a probability curve.
  • the DDR selection probability sub-module only functions at the slice input, and is invalid when the slice is output; that is, when the first data is written into the DDR SDRAM group, the DDR selection probability sub-module functions; When the second data is read out of the DDR SDRAM group, the DDR selection probability sub-module does not function.
  • the DDR comprehensive judgment sub-module 723 is configured to select the first target DDR SDRAM according to the information output by the DDR selection probability sub-modules indicating whether each group of DDR SDRAM is selected, together with its own judgment logic, specifically to select the DDR SDRAM number corresponding to the first target DDR SDRAM; and configured to send the first target DDR SDRAM to the bank processing module and the address module, specifically to send the DDR SDRAM number corresponding to the first target DDR SDRAM to the bank processing module and the address module;
  • Here, only one DDR comprehensive judgment sub-module 723 is provided, that is, the plurality of DDR selection probability sub-modules 722 and the plurality of DDR statistics sub-modules 721 correspond to one DDR comprehensive judgment sub-module 723.
  • Similarly, the DDR comprehensive judgment sub-module functions only when a slice is input, and does not function when a slice is output; that is, when the first data is written into the DDR SDRAM group, the DDR comprehensive judgment sub-module functions, and when the second data is read out of the DDR SDRAM group, the DDR comprehensive judgment sub-module does not function.
  • In a specific application, the currently selectable DDR SDRAM numbers are combined and processed according to a selection algorithm to determine a unique first target DDR SDRAM, and the determined first target DDR SDRAM is sent to the bank processing module and the address module; for example, the DDR SDRAM number corresponding to the unique first target DDR SDRAM is determined, and the DDR SDRAM number corresponding to the unique first target DDR SDRAM is sent to the bank processing module and the address module.
  • Here, the selection algorithm may be a round-robin (RR) scheduling algorithm, a weighted fair queuing (WFQ) algorithm, or the like.
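  • To make the arbitration concrete, a minimal round-robin (RR) sketch is given below; it assumes one candidate flag per DDR SDRAM group produced by the selection-probability stage, and remembers the last granted group so the groups are served in turn. The names and the interface are illustrative assumptions, and a WFQ scheduler could be substituted behind the same interface.

```c
#include <stdint.h>

#define N_MEM 8  /* number of DDR SDRAM groups (assumed, as in the earlier sketch) */

/* Round-robin arbitration over the groups currently marked as selectable. */
static uint32_t last_grant = N_MEM - 1;

int rr_pick_ddr(const int candidate[N_MEM], uint32_t *ddr_number)
{
    for (uint32_t i = 1; i <= N_MEM; i++) {
        uint32_t m = (last_grant + i) % N_MEM;
        if (candidate[m]) {
            last_grant  = m;
            *ddr_number = m;   /* DDR SDRAM number of the first target */
            return 1;
        }
    }
    return 0;                  /* no group currently selectable */
}
```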
  • the bank processing module 73 is configured to receive the one or more first fragments sent by the packet processing module 71, and to trigger itself to track and count the cache usage of each bank in the DDR SDRAM group; it is further configured to determine, according to the received first target DDR SDRAM and a preset rule, the first target bank corresponding to the one or more first fragments in the first target DDR SDRAM, and then to send the first target bank to the address module;
  • it is further configured to: when the second packet is read out of the DDR SDRAM group, adjust the cache usage of the second target bank corresponding to the read address information, specifically to reduce the cache usage of the second target bank corresponding to the read address information.
  • Specifically, the bank processing module 73 includes a Bank statistics sub-module 731, a Bank selection probability sub-module 732, a Bank comprehensive judgment sub-module 733, and a multiplex selection sub-module (MUX) 734. As shown in FIG. 7, each bank corresponds to one Bank statistics sub-module 731 and one Bank selection probability sub-module 732, and the Bank statistics sub-modules 731 and Bank selection probability sub-modules 732 belonging to the banks of one group of DDR SDRAM are bound into a group; all the banks in each group of DDR SDRAM correspond to one Bank comprehensive judgment sub-module, and all the Bank comprehensive judgment sub-modules correspond to one multiplex selection sub-module 734. Specifically,
  • the Bank statistics sub-module 731 is configured to track and count the cache usage of each bank in the DDR SDRAM group; it is further configured to, when the first packet is written into the DDR SDRAM group, adjust the cache usage of the first target bank corresponding to the write address information, specifically to increase the cache usage corresponding to the first target bank; and it is further configured to, when the second packet is read out of the DDR SDRAM group, adjust the cache usage of the second target bank corresponding to the read address information, specifically to reduce the cache usage of the second target bank corresponding to the read address information.
  • Here, the number of Bank statistics sub-modules and Bank selection probability sub-modules differs with the number of DDR SDRAM groups used and with the number of banks in each group of DDR SDRAM.
  • the Bank selection probability sub-module is similar to the Bank statistical sub-module; assuming that the system uses 8 sets of DDR SDRAM, each group of DDR SDRAM has 8 banks, then a total of 64 Bank statistical sub-modules are needed.
  • the 64 Bank statistical sub-modules are not completely independent, but each 8 bank statistics modules are bound into a group, corresponding to a group of DDR SDRAM.
  • each time a first fragment is written the buffer usage of the bank to which the first fragment belongs is increased by one; conversely, when the fragment is output, the usage of the bank to which the fragment belongs is subtracted by one.
  • the Bank selection probability sub-module 732 is configured to select a probability curve according to the cache usage of each bank in the DDR SDRAM group; it is further configured to determine a probability value corresponding to each bank according to the cache usage of each bank and the probability curve, and to determine, according to the probability value corresponding to each bank, whether the bank corresponding to that probability value is selected; specifically, the larger the cache usage of a bank, the smaller the probability that it is selected; here,
  • the Bank Select Probability sub-module only works when the slice is input, and is invalid when the slice is output; that is, when the first data is written into the DDR SDRAM group, the Bank Select Probability sub-module functions; when the second When the data is read out of the DDR SDRAM group, the Bank Selection Probability sub-module does not work.
  • each bank or each group of DDR SDRAM or all groups of DDR SDRAM corresponds to a probability curve.
  • the Bank comprehensive judgment sub-module 733 is configured to determine the first suspected target bank according to the information output by the Bank selection probability sub-modules indicating whether each bank is selected, together with its own judgment logic; and configured to send the determined first suspected target bank to the multiplex selection sub-module.
  • the multiplex selection sub-module 734 is configured to receive the first target DDR SDRAM and the first suspected target banks; it is further configured to determine, according to the received first suspected target banks and the first target DDR SDRAM, the first target bank in the first target DDR SDRAM, specifically to determine the bank number corresponding to the first target bank in the first target DDR SDRAM; and it is configured to send the first target bank to the address module.
  • Specifically, each Bank comprehensive judgment sub-module receives the input information of the multiple banks belonging to one group of DDR SDRAM, combines the information on which banks are currently selectable, and determines, according to a selection algorithm such as RR polling or WFQ, the currently selected bank number, that is, the bank number corresponding to the first suspected target bank. If there are multiple groups of DDR SDRAM, there is a corresponding number of bank numbers; in that case, the multiple bank numbers, that is, the bank numbers corresponding to the multiple first suspected target banks, are sent to the multiplex selection sub-module, and the multiplex selection sub-module then determines, according to the DDR SDRAM number corresponding to the first target DDR SDRAM output by the DDR processing module, the bank number corresponding to the first target bank within the first target DDR SDRAM.
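  • The final multiplexing step is small enough to show directly; the sketch below assumes one suspected target bank number per DDR SDRAM group and simply selects the one belonging to the already-chosen first target DDR SDRAM. The function name mux_pick_bank and the array layout are assumptions.

```c
#include <stdint.h>

#define N_MEM 8  /* number of DDR SDRAM groups (assumed) */

/* MUX step: pick the bank number belonging to the selected DDR SDRAM group. */
uint32_t mux_pick_bank(const uint32_t suspected_bank[N_MEM], uint32_t ddr_number)
{
    return suspected_bank[ddr_number];  /* becomes the first target bank number */
}
```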
  • the address module 74 is configured to receive the first target DDR SDRAM and the first target bank in the first target DDR SDRAM, and to determine the write address information according to the first target DDR SDRAM and the first target bank; it is also configured to send the write address information to the DDR controller.
  • the packet processing module 71, the DDR processing module 72, the bank processing module 73, and the address module 74 may each be implemented by a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or a Field Programmable Gate Array (FPGA); the CPU, DSP, or FPGA may be built into the data processing apparatus.
  • In this way, the data processing method and apparatus can make the use of the cache space as even as possible, avoiding the situation in which one or some groups of DDR SDRAM, or one or some banks within a DDR SDRAM, are over-used or under-used.
  • embodiments of the present invention can be provided as a method, system, or computer program product. Accordingly, the present invention can take the form of a hardware embodiment, a software embodiment, or a combination of software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage, etc.) including computer usable program code.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means that implement the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus, such that a series of operational steps are performed on the computer or other programmable apparatus to produce computer-implemented processing, so that the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • In summary, the embodiment of the present invention can adjust the first cache feature parameter corresponding to the memory and the second cache feature parameter corresponding to the bank according to the storage change of the data, thereby making the most even use of each group of memory and of the banks in each group of memory, which improves the utilization of the memory and improves the read/write efficiency of the memory.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

Data processing method and device, and storage medium. The method comprises the steps of: acquiring a first cache feature parameter corresponding to at least one memory (101); acquiring a second cache feature parameter corresponding to at least one memory bank, the memory bank(s) being located in the memory or memories (102); when it is determined that first data is written into the memory bank(s) in the memory or memories, adjusting the first cache feature parameter corresponding to the memory and adjusting the second cache feature parameter corresponding to the memory bank, according to information about the address into which the first data is written (103); and when it is determined that second data is read out from the memory bank(s) in the memory or memories, adjusting the first cache feature parameter corresponding to the memory and adjusting the second cache feature parameter corresponding to the memory bank, according to information about the address from which the second data is read (104).
PCT/CN2016/081615 2015-07-15 2016-05-10 Procédé et dispositif de traitement de données, et support de stockage WO2017008563A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510417343.2A CN106356088A (zh) 2015-07-15 2015-07-15 一种数据处理方法及其装置
CN201510417343.2 2015-07-15

Publications (1)

Publication Number Publication Date
WO2017008563A1 true WO2017008563A1 (fr) 2017-01-19

Family

ID=57756801

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/081615 WO2017008563A1 (fr) 2015-07-15 2016-05-10 Procédé et dispositif de traitement de données, et support de stockage

Country Status (2)

Country Link
CN (1) CN106356088A (fr)
WO (1) WO2017008563A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019095942A1 (fr) * 2017-11-17 2019-05-23 华为技术有限公司 Procédé de transmission de données et dispositif de communication

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108959105B (zh) * 2017-05-17 2023-12-22 深圳市中兴微电子技术有限公司 一种实现地址映射的方法及装置
CN111857817B (zh) * 2019-04-25 2024-02-20 比亚迪半导体股份有限公司 数据读取方法、数据读取装置及数据读取系统
CN117393013B (zh) * 2023-12-09 2024-04-09 深圳星云智联科技有限公司 统计应用中的高效ddr控制方法及相关装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6912616B2 (en) * 2002-11-12 2005-06-28 Hewlett-Packard Development Company, L.P. Mapping addresses to memory banks based on at least one mathematical relationship
CN102096562A (zh) * 2011-02-12 2011-06-15 华为技术有限公司 数据写入方法及装置
CN103425437A (zh) * 2012-05-25 2013-12-04 华为技术有限公司 初始写入地址选择方法和装置
CN103605478A (zh) * 2013-05-17 2014-02-26 华为技术有限公司 存储地址标示、配置方法和数据存取方法及系统

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009211735A (ja) * 2008-02-29 2009-09-17 Toshiba Corp 不揮発性記憶装置
JP2009259329A (ja) * 2008-04-16 2009-11-05 Toshiba Corp 半導体集積回路装置
CN102684976B (zh) * 2011-03-10 2015-07-22 中兴通讯股份有限公司 一种基于ddr sdram进行数据读写的方法、装置及系统

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6912616B2 (en) * 2002-11-12 2005-06-28 Hewlett-Packard Development Company, L.P. Mapping addresses to memory banks based on at least one mathematical relationship
CN102096562A (zh) * 2011-02-12 2011-06-15 华为技术有限公司 数据写入方法及装置
CN103425437A (zh) * 2012-05-25 2013-12-04 华为技术有限公司 初始写入地址选择方法和装置
CN103605478A (zh) * 2013-05-17 2014-02-26 华为技术有限公司 存储地址标示、配置方法和数据存取方法及系统

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019095942A1 (fr) * 2017-11-17 2019-05-23 华为技术有限公司 Procédé de transmission de données et dispositif de communication
US11297011B2 (en) 2017-11-17 2022-04-05 Huawei Technologies Co., Ltd. Data transmission method and communications device

Also Published As

Publication number Publication date
CN106356088A (zh) 2017-01-25

Similar Documents

Publication Publication Date Title
WO2017008563A1 (fr) Procédé et dispositif de traitement de données, et support de stockage
US10248350B2 (en) Queue management method and apparatus
US10135711B2 (en) Technologies for sideband performance tracing of network traffic
EP3657744B1 (fr) Traitement de message
CN112511325B (zh) 网络拥塞控制方法、节点、系统及存储介质
CN101026587B (zh) 全局交换机资源管理器
WO2018149102A1 (fr) Procédé et dispositif adaptés pour réduire la latence de transmission de données à haute priorité, et support de stockage
CN109660468B (zh) 一种端口拥塞管理方法、装置和设备
US20230004321A1 (en) Storage device throttling amount of communicated data depending on suspension frequency of operation
US11134021B2 (en) Techniques for processor queue management
CN113411262A (zh) 一种大型接收卸载功能的设置方法和装置
WO2016070668A1 (fr) Procédé, dispositif et support de stockage informatique pour mettre en œuvre une conversion de format de données
CN111181874B (zh) 一种报文处理方法、装置及存储介质
CN105335323A (zh) 一种数据突发的缓存装置和方法
TW201642140A (zh) 用於防止欠載之封包記憶體系統、方法及裝置
CN104486442A (zh) 分布式存储系统的数据传输方法、装置
US20230367735A1 (en) Data transmission method, module and apparatus, device, and storage medium
CN110297785B (zh) 一种基于fpga的金融数据流控装置和流控方法
CN108763107B (zh) 后台写盘流控方法、装置、电子设备及存储介质
WO2022110681A1 (fr) Procédé de retour et appareil de commande de retour pour informations de réponse de commande, et dispositif électronique
WO2019232925A1 (fr) Procédé et appareil de commande de flux de migration de données de point d'accès sans fil, et dispositif électronique et support de données
CN105279136A (zh) 基于多核dsp多路信号的实时并行频域分析方法与系统
CN102170401A (zh) 一种数据的处理方法和设备
US8345701B1 (en) Memory system for controlling distribution of packet data across a switch
WO2022174444A1 (fr) Procédé et appareil de transmission de flux de données, et dispositif de réseau

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16823710

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16823710

Country of ref document: EP

Kind code of ref document: A1