WO2017054603A1 - Method and apparatus for generating user traffic - Google Patents

Method and apparatus for generating user traffic (一种用户流量的生成方法及装置)

Info

Publication number
WO2017054603A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
index information
module
dram
stored
Prior art date
Application number
PCT/CN2016/097245
Other languages
English (en)
French (fr)
Inventor
Zhou Kaiyi (周开艺)
Deng Meilong (邓美龙)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to EP16850227.6A (granted as EP3349405B1)
Publication of WO2017054603A1
Priority to US15/940,993 (granted as US10700980B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0862 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/21 Flow control; Congestion control using leaky-bucket
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/90 Buffering arrangements
    • H04L 49/9005 Buffering arrangements using dynamic buffer space allocation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/50 Testing arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/90 Buffering arrangements
    • H04L 49/901 Buffering arrangements using storage descriptor, e.g. read or write pointers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/56 Provisioning of proxy services
    • H04L 67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/10 Providing a specific technical effect
    • G06F 2212/1016 Performance improvement
    • G06F 2212/1021 Hit rate improvement
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/60 Details of cache memory
    • G06F 2212/6026 Prefetching based on access pattern detection, e.g. stride based prefetch
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/90 Buffering arrangements
    • H04L 49/9042 Separate storage for different parts of the packet, e.g. header and payload

Definitions

  • the present invention relates to the field of network testing technologies, and in particular, to a method and an apparatus for generating user traffic.
  • the forwarding performance of a network device can be tested using a test device as shown in FIG. 1, where the test device is composed of a central processing unit (CPU), a traffic simulation device, the network device under test, and a traffic verification device.
  • the traffic verification device comprises: a flow control module for controlling traffic delivery parameters (such as packet sending time, number of packets sent, and packet sending interval), a static random access memory (SRAM) for storing user messages, and a traffic initiating module for initiating traffic.
  • the basic principle is that the traffic simulation device sends user traffic under the control of the CPU; the network device under test forwards the user traffic sent by the traffic simulation device to the traffic verification device, which analyzes, checks, and gathers statistics on the forwarded traffic to evaluate the forwarding performance of the network device under test. The user traffic sent by the traffic simulation device is therefore particularly important in the forwarding performance test of a network device.
  • the traffic simulation device in the test device shown in FIG. 1 usually generates user traffic through proprietary hardware based on a Field Programmable Gate Array (FPGA) under the control of the CPU; a schematic diagram is shown in FIG. 2.
  • the CPU stores the user packet configuration information in the on-chip SRAM and stores the user packet header information in SRAM (on-chip or off-chip); in the user traffic generation phase, the header information stored in the SRAM is read cyclically to generate user traffic.
  • the FPGA-based proprietary hardware approach enables ultra-high-bandwidth traffic generation and precise control of user traffic, but because the number of user messages is limited by the size of the SRAM, FPGA-based proprietary hardware cannot store a large number of user packets while generating user traffic at line rate.
  • the embodiment of the invention discloses a method and a device for generating user traffic, which can realize storage of massive user messages and generation of user traffic at line rate.
  • a first aspect of the embodiments of the present invention discloses a method for generating user traffic, where the method includes:
  • receiving a user traffic generation instruction, and performing a pre-read operation and a cache operation on the user messages stored in a dynamic random access memory (DRAM) and indicated by index information, according to the instruction and the index information pre-stored in a first on-chip SRAM of a field programmable gate array (FPGA), where the first on-chip SRAM is used to store the index information of all user messages that need to be used, and the DRAM is used to store all the user messages;
  • the method further includes:
  • All the user packets are grouped according to service type to obtain a plurality of user packet groups, and each user packet group is grouped according to access path to obtain a plurality of sub-user packet groups of each user packet group;
  • Each user message in each sub-user packet group of each user packet group is sequentially stored in the DRAM, and the index information of all the user packets is generated according to their storage locations in the DRAM;
  • the index information of all the user messages is stored in the first on-chip SRAM.
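  • The grouping-and-indexing scheme above can be sketched in a minimal Python model (illustrative only, not part of the patent: the dict-based message format, the flat byte array standing in for the DRAM, and the (offset, length) index entries are all assumptions):

```python
from collections import defaultdict

def build_dram_and_index(messages):
    """Group user messages by service type, then by access path,
    store them contiguously in a DRAM-like byte array in group order,
    and record each message's (offset, length) as its index entry."""
    groups = defaultdict(lambda: defaultdict(list))
    for msg in messages:
        # msg is assumed to be a dict: {"service": ..., "path": ..., "data": bytes}
        groups[msg["service"]][msg["path"]].append(msg["data"])

    dram = bytearray()   # stands in for the off-chip DRAM
    index = []           # stands in for the first on-chip SRAM
    for service in sorted(groups):
        for path in sorted(groups[service]):
            for data in groups[service][path]:
                index.append((len(dram), len(data)))  # storage location in DRAM
                dram += data                          # store sequentially
    return bytes(dram), index

msgs = [
    {"service": "ipv4", "path": "p01", "data": b"AAAA"},
    {"service": "ipv6", "path": "p01", "data": b"CCCCCC"},
    {"service": "ipv4", "path": "p02", "data": b"BBBBB"},
]
dram, index = build_dram_and_index(msgs)
```

Each index entry records only where its message lives in the DRAM, which is why the small first on-chip SRAM suffices to describe a much larger message store.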
  • Before generating user traffic from the user messages buffered by the cache operation, the method further includes: determining whether the number of user messages buffered by the cache operation reaches a preset number threshold;
  • When the preset number threshold is not reached, the pre-read operation and cache operation on the user messages stored in the dynamic random access memory (DRAM) and indicated by the index information pre-stored in the first on-chip static random access memory (SRAM) of the field programmable gate array (FPGA) are performed again, according to the user traffic generation instruction, until the number of user messages buffered by the cache operation reaches the preset number threshold.
  • Performing the pre-read operation and the cache operation on the user messages stored in the dynamic random access memory (DRAM) and indicated by the index information, according to the user traffic generation instruction and the index information pre-stored in the first on-chip static random access memory (SRAM) of the field programmable gate array (FPGA), includes:
  • reading the user messages indicated by the index information from the DRAM, and buffering the read user messages into a second on-chip SRAM.
  • Before the index information of all the user packets is stored in the first on-chip SRAM, the method further includes:
  • performing a zeroing operation on the first on-chip SRAM and the second on-chip SRAM.
  • a second aspect of the embodiments of the present invention discloses a device for generating user traffic, where the device includes a communication module, a processing module, and a first generation module, where:
  • the communication module is configured to receive a user traffic generation instruction
  • the processing module is configured to perform a pre-read operation on the user message stored in the DRAM and indicated by the index information according to the user traffic generation instruction and the index information pre-stored in the first on-chip SRAM of the FPGA. And the buffering operation, the first on-chip SRAM is used to store index information of all user messages that are needed, and the DRAM is used to store all the user messages;
  • the first generating module is configured to generate user traffic according to the user message cached by the cache operation.
  • the device further includes a grouping module, a storage module, and a second generating module, where:
  • the grouping module is configured to group all the user packets according to service type to obtain a plurality of user packet groups, and to group each user packet group according to access path to obtain multiple sub-user packet groups of each user packet group;
  • the storage module is configured to sequentially store each user message in each of the sub-user message groups of each of the user message groups in the DRAM;
  • the second generating module is further configured to generate index information of all the user packets according to the storage location of the all user packets in the DRAM;
  • the storage module is further configured to store index information of all the user messages in the first on-chip SRAM.
  • the device further includes a determining module, where:
  • the determining module is configured to determine, before the first generation module generates user traffic from the user messages buffered by the cache operation, whether the number of buffered user messages reaches a preset number threshold;
  • when the preset number threshold is reached, the first generation module is triggered to generate user traffic from the cached user messages; when it is not reached, the processing module is triggered to continue performing the pre-read operation and cache operation on the user messages stored in the DRAM and indicated by the index information, according to the user traffic generation instruction and the index information pre-stored in the first on-chip SRAM of the FPGA.
  • the processing module includes a read submodule and a cache submodule, where:
  • the reading submodule is configured to read, according to the user traffic generation instruction and the index information pre-stored in the first on-chip SRAM, the user packet that is stored in the DRAM and indicated by the index information ;
  • the cache submodule is configured to buffer the user message read by the read submodule into the second on-chip SRAM.
  • the device further includes a clearing module, where:
  • the clearing module is configured to perform a zeroing operation on the first on-chip SRAM and the second on-chip SRAM before the cache submodule buffers the user messages read by the read submodule into the second on-chip SRAM.
  • In the embodiments of the present invention, a user traffic generation instruction is received; a pre-read operation and a cache operation are performed on the user messages stored in the dynamic random access memory (DRAM) and indicated by the index information, according to the instruction and the index information pre-stored in the first on-chip static random access memory (SRAM) of the field programmable gate array (FPGA); and user traffic is generated from the user messages buffered by the cache operation. The first on-chip SRAM stores the index information of all user messages, and the DRAM stores all the user messages. The embodiments of the present invention can therefore store a large number of user messages in the DRAM and generate user traffic at line rate through pre-read and cache operations on the stored user messages.
  • FIG. 1 is a schematic structural diagram of a device for testing forwarding performance of a network device disclosed in the prior art
  • FIG. 2 is a schematic diagram of a principle of user traffic generation disclosed in the prior art
  • FIG. 3 is a schematic flowchart of a method for generating user traffic according to an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart diagram of another method for generating user traffic according to an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart diagram of still another method for generating user traffic according to an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram of grouping of user messages according to an embodiment of the present invention.
  • FIG. 7 is a diagram showing a correspondence relationship between the number of user packets buffered in a user packet buffer and time according to an embodiment of the present disclosure
  • FIG. 8 is a schematic structural diagram of a device for generating user traffic according to an embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of another apparatus for generating user traffic according to an embodiment of the present disclosure.
  • FIG. 10 is a schematic structural diagram of another apparatus for generating user traffic according to an embodiment of the present invention.
  • FIG. 11 is a schematic structural diagram of another apparatus for generating user traffic according to an embodiment of the present disclosure.
  • FIG. 12 is a schematic structural diagram of another apparatus for generating user traffic according to an embodiment of the present invention.
  • the embodiments of the invention disclose a method and a device for generating user traffic, which store a large number of user messages in a dynamic random access memory (DRAM) and achieve line-rate generation of user traffic through pre-read and cache operations on the stored user messages. The details are described below.
  • FIG. 3 is a schematic flowchart diagram of a method for generating user traffic according to an embodiment of the present invention. As shown in FIG. 3, the method for generating the user traffic may include the following steps:
  • the user traffic generation command is used to enable the user traffic generation, and the user traffic generation command may be manually input by the tester, or may be generated by the CPU, which is not limited by the embodiment of the present invention.
  • a pre-read operation and a cache operation are performed on the user messages stored in the dynamic random access memory (DRAM) and indicated by the index information.
  • the first on-chip SRAM of the FPGA is used to store index information of all user messages that need to be used
  • the DRAM is used to store all user messages that need to be used.
  • Specifically, the DRAM may store all the user messages in grouped form; that is, all the user messages are grouped according to preset conditions, and the grouped user messages are sequentially stored in the DRAM. Because DRAM offers a large storage space and can hold the user messages of tens of millions of users, storing all needed user packets in the DRAM supports a very large number of user messages, which ensures the scale of the simulated users while saving hardware cost.
  • Performing the pre-read operation and the cache operation on the user messages indicated by the index information pre-stored in the first on-chip SRAM, among the grouped user messages stored in the DRAM, overcomes the difficulty that the DRAM access bandwidth is uncertain: user messages can be read from the DRAM efficiently in a fixed period, thereby ensuring line-rate generation of user traffic.
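  • The pre-read-and-cache idea can be sketched as follows (a simplified Python model, assuming the flat-byte-array DRAM and (offset, length) index layout used for illustration earlier; the class and parameter names are not from the patent):

```python
from collections import deque

class Prefetcher:
    """Walks index entries cyclically and buffers the referenced
    messages in a bounded cache (a stand-in for the second on-chip SRAM)."""
    def __init__(self, dram, index, cache_depth):
        self.dram = dram
        self.index = index
        self.cache = deque()
        self.cache_depth = cache_depth
        self.pos = 0  # next index entry to pre-read

    def preread(self):
        """Fetch one message from DRAM into the cache, if space allows."""
        if len(self.cache) >= self.cache_depth:
            return False  # cache full: suspend pre-read
        offset, length = self.index[self.pos]
        self.cache.append(self.dram[offset:offset + length])
        self.pos = (self.pos + 1) % len(self.index)  # cyclic read
        return True

    def pop(self):
        """Hand one buffered message to the traffic generator."""
        return self.cache.popleft()

dram = b"AAAABBBBBCCCCCC"
index = [(0, 4), (4, 5), (9, 6)]
pf = Prefetcher(dram, index, cache_depth=2)
pf.preread()
pf.preread()
full = pf.preread()   # refused: cache depth reached
first = pf.pop()
```

Because the generator only ever reads from the small bounded cache, variable DRAM latency is hidden behind the pre-read loop, which mirrors the fixed-period read described above.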
  • generating user traffic according to the user message buffered by the cache operation may include:
  • the user traffic is generated according to the user message cached by the cache operation and the user traffic generation parameter, where the user traffic generation parameter may include a packet delivery mode, a number of packets, and a user bandwidth, which are not limited in the embodiment of the present invention.
  • Before step S301, the following operations may also be performed:
  • All the user packets are grouped according to the service type to obtain a plurality of user packet groups;
  • Each user packet group is grouped according to the access path to obtain a plurality of sub-user packet groups of the user packet group;
  • Each user message in each sub-user message group of each user message group is sequentially stored in the DRAM, and the index information of each user message is generated according to its storage location in the DRAM;
  • the index information of all user messages is stored in the first on-chip SRAM described above.
  • That is, the user messages are grouped and configured into the DRAM contiguously in group order, and the index information (i.e., the storage location information) of each user message in the DRAM is configured into the first on-chip SRAM.
  • the CPU may perform the grouping operation into user message groups, the grouping operation into sub-user message groups, the storage operation of each user message, and the generation and storage operations of the index information of each user message, which is not limited in the embodiment of the present invention.
  • For example, suppose the service types are the IPv4 service type and the IPv6 service type, so that all user packets are divided into two user packet groups, User_ipv4 and User_ipv6. Each user packet group is then divided according to access path: User_ipv4 is divided into two sub-user packet groups, User_ipv4_p01 and User_ipv4_p02, and User_ipv6 is divided into two sub-user packet groups, User_ipv6_p01 and User_ipv6_p02. Since each user packet carries a label and other related attributes, the user packets in the sub-user packet group User_ipv4_p01 of the user packet group User_ipv4 are User_ipv4_p01_label0, User_ipv4_p01_label1, ..., User_ipv4_p01_labeln-1 and User_ipv4_p01_labeln, and likewise for the user packets in the sub-user packet group User_ipv4_p02 and the sub-groups of User_ipv6.
  • FIG. 6 is a schematic diagram of this grouping.
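  • The service/path/label naming hierarchy of this example (User_ipv4_p01_label0 through User_ipv4_p01_labeln, and so on) can be enumerated with a short sketch; the function name and fixed label count are illustrative assumptions:

```python
def enumerate_message_names(services, paths, n_labels):
    """Enumerate user message names for every (service, path, label)
    combination, following the User_<service>_<path>_label<i> pattern
    used in the example."""
    names = []
    for service in services:
        for path in paths:
            for label in range(n_labels):
                names.append(f"User_{service}_{path}_label{label}")
    return names

names = enumerate_message_names(["ipv4", "ipv6"], ["p01", "p02"], n_labels=2)
```

The nesting order of the loops reproduces the storage order described above: all messages of one sub-group are contiguous, and all sub-groups of one service-type group are adjacent.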
  • After performing step S302 and before performing step S303, the following operations may also be performed:
  • Determining whether the number of user messages buffered by the cache operation reaches a preset number threshold (i.e., a certain watermark);
  • When the number of cached user messages reaches the preset number threshold, step S303 is triggered; when it does not, step S302 is performed again until the number of user messages buffered by the cache operation reaches the preset number threshold. Each time step S302 is performed, the user messages cached are different from those cached previously, and the user messages available to the cache operation in step S303 are the sum of the user messages buffered by the cache operations in all executions of step S302 before step S303.
  • When the threshold is reached, a feedback signal is generated, and user traffic is generated according to the feedback signal and the user messages buffered by the cache operation, which guarantees the stability of user traffic generation.
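  • The feedback signal described above amounts to gating traffic generation on the cache fill level; a minimal sketch (the function name and threshold value are illustrative, and the cache is modelled as a plain list):

```python
def generate_when_ready(cache, threshold):
    """Return (feedback, messages): feedback is True once the number of
    cached user messages reaches the preset threshold, at which point
    generation may consume the cached messages."""
    if len(cache) < threshold:
        return False, []      # keep pre-reading; generation not enabled
    return True, list(cache)  # feedback signal: start generating traffic

ready, msgs = generate_when_ready([b"m0", b"m1"], threshold=3)
ready2, msgs2 = generate_when_ready([b"m0", b"m1", b"m2"], threshold=3)
```

Starting generation only once the watermark is reached means short DRAM stalls during pre-read cannot starve the generator mid-stream, which is the stability property claimed above.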
  • Specifically, performing the pre-read operation and the cache operation on the user messages stored in the DRAM and indicated by the index information, according to the user traffic generation instruction and the index information pre-stored in the first on-chip static random access memory (SRAM) of the field programmable gate array (FPGA), may include:
  • Before step S301, the following operations may also be performed:
  • For example, the CPU may perform an initialization operation on the user traffic generation mechanism (such as the pre-read operation and the cache operation) and perform a zeroing operation on the first on-chip SRAM and the second on-chip SRAM, thereby ensuring the accuracy of user traffic generation.
  • In the embodiments of the present invention, a user traffic generation instruction is received; a pre-read operation and a cache operation are performed on the user messages stored in the dynamic random access memory (DRAM) and indicated by the index information, according to the instruction and the index information pre-stored in the first on-chip static random access memory (SRAM) of the field programmable gate array (FPGA); and user traffic is generated from the user messages buffered by the cache operation. The first on-chip SRAM stores the index information of all user messages, and the DRAM stores all the user messages. The embodiments of the present invention can therefore store a large number of user messages in the DRAM and generate user traffic at line rate through pre-read and cache operations on the stored user messages.
  • FIG. 4 is a schematic flowchart diagram of another method for generating user traffic according to an embodiment of the present invention.
  • the method for generating user traffic in FIG. 4 is applicable to an architecture composed of a CPU, an FPGA, and a DRAM, wherein the FPGA includes a first on-chip SRAM and a second on-chip SRAM.
  • the method for generating the user traffic may include the following steps:
  • performing an initialization operation by the CPU may include:
  • S402. Perform a grouping operation and a configuration operation on all user packets that need to be used by the CPU.
  • Performing the grouping operation and the configuration operation by the CPU on all user packets that need to be used may include:
  • All the user packets are grouped according to the service type by the CPU to obtain a plurality of user packet groups;
  • Each user packet group is grouped by the CPU according to the access path to obtain a sub-user packet group of each user packet group;
  • the user messages in each sub-user message group of each user packet group are sequentially stored in the DRAM by the CPU;
  • the index information of each user message is configured by the CPU into the first on-chip SRAM of the FPGA.
  • generating user traffic by the CPU may include:
  • a user traffic generation instruction is generated by the CPU.
  • performing a pre-read operation on the user packet may include:
  • the user message is sequentially read from the DRAM according to the index information stored in the first intra-SRAM.
  • Caching the read user messages may include:
  • The user messages read in step S404 are cached in the second on-chip SRAM of the FPGA (step S405). When the number of user messages buffered in step S405 reaches a preset number threshold (a certain watermark), a first feedback signal is generated to trigger execution of step S406, and a second feedback signal is generated to trigger suspension of step S404; otherwise, step S404 continues. In this way, overflow of the user messages buffered in the second on-chip SRAM is prevented, and the stability of user traffic generation is ensured.
  • generating user traffic may include:
  • the user messages buffered in step S405 are read according to the user traffic generation command and the first feedback signal, and user traffic is generated from the read user messages.
  • the implementation of the embodiment of the invention can realize the storage of massive user messages and the generation of user traffic at the line rate, and the stability of user traffic generation is ensured.
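  • Steps S404 to S406 describe a producer/consumer loop governed by two feedback signals: one enables the consumer (traffic generation) once enough messages are buffered, the other pauses the producer (pre-read) when the buffer is full. A simplified single-threaded Python sketch, with the threshold values and function name as illustrative assumptions:

```python
def run_pipeline(messages, start_threshold, pause_threshold, steps):
    """Simulate S404 (pre-read), S405 (cache), and S406 (generate)
    for a fixed number of steps; returns the generated traffic sequence."""
    cache, generated = [], []
    pos, generating = 0, False
    for _ in range(steps):
        # S404/S405: pre-read one message unless the pause feedback is asserted
        if len(cache) < pause_threshold:
            cache.append(messages[pos % len(messages)])  # cyclic pre-read
            pos += 1
        # first feedback: once the start watermark is hit, generation begins
        if len(cache) >= start_threshold:
            generating = True
        # S406: consume one cached message per step while generating
        if generating and cache:
            generated.append(cache.pop(0))
    return generated

out = run_pipeline([b"a", b"b", b"c"], start_threshold=2, pause_threshold=4, steps=6)
```

Once generation starts, the producer and consumer proceed in lockstep, so the cache never empties and never exceeds the pause watermark — the stability behaviour the embodiment describes.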
  • FIG. 5 is a schematic flowchart diagram of another method for generating user traffic according to an embodiment of the present invention. As shown in FIG. 5, the method for generating the user traffic may include the following steps:
  • the CPU enables user traffic generation.
  • S502. Determine whether the number of index information entries of cached user messages (feedback 0) exceeds a first preset number threshold; if not, enable continuous scheduling of user messages, in which the index information of user messages is continuously read from the first on-chip SRAM storing the index information of all user messages.
  • S503. After continuous scheduling of user packets is enabled, the index information of N user packets is continuously parsed, output, and cached in the address cache. After each round of N user packets is scheduled, the number of user packets buffered in step S507 (feedback 2) is checked: if the buffer space allows, the next round of user packets is scheduled continuously; if not, continuous scheduling of user packets is suspended.
  • S506. Determine the state of the data cache; if a user packet is cached in the data cache, it is immediately read and cached in the user packet cache.
  • S507. Determine whether the number of user packets buffered in the user packet cache reaches a second preset number threshold; if so, output feedback 2 to suspend continuous scheduling of user packets. Also determine whether the number of user packets cached in the user packet cache reaches a third preset number threshold; if so, output feedback 3 to enable generation of user traffic. Outputting feedback 2 avoids overflow of the user packet cache space.
  • Feedback 2 is output continuously, whereas feedback 3 is output only once, when the number of user packets buffered in the user packet cache first reaches the third preset number threshold after the CPU enables user traffic generation.
  • The correspondence between the number of user packets buffered in the user packet cache and time may be as shown in FIG. 7, which is disclosed in an embodiment of the present invention. As shown in FIG. 7, before the CPU enables user traffic generation, the number of user packets buffered in the user packet cache is 0. After the CPU enables traffic generation, the number of cached user packets gradually increases until it reaches the third preset number threshold (i.e., the feedback-3 watermark), at which point generation of user traffic starts. When continuous scheduling of user packets is suspended at the feedback-2 watermark, the number of user packets buffered in the user packet cache fluctuates between the steady-state watermark and the feedback-2 watermark; if user traffic generation is paused, the number of cached user packets settles between the feedback-2 watermark and the maximum cache watermark.
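  • The FIG. 7 behaviour can be reproduced by tracking the cache occupancy over time: it is 0 before enabling, ramps up to the feedback-3 watermark, and then oscillates once scheduling and generation are both active. A sketch with illustrative fill/drain rates and watermark values (none of these numbers come from the patent):

```python
def occupancy_trace(fill_rate, drain_rate, feedback3, feedback2, steps):
    """Track the user-packet-cache occupancy per time step: it ramps
    from 0, generation starts at the feedback-3 watermark, and
    scheduling pauses whenever occupancy reaches the feedback-2 watermark."""
    level, generating, trace = 0, False, []
    for _ in range(steps):
        if level < feedback2:          # feedback 2 not asserted: keep scheduling
            level += fill_rate
        if level >= feedback3:         # feedback 3: enable generation
            generating = True
        if generating:
            level = max(0, level - drain_rate)
        trace.append(level)
    return trace

trace = occupancy_trace(fill_rate=2, drain_rate=1, feedback3=6, feedback2=8, steps=8)
```

The trace first climbs monotonically, then settles into a small oscillation just below the feedback-2 watermark, matching the ramp-then-fluctuate shape described for FIG. 7.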
  • the implementation of the embodiment of the invention can realize the storage of massive user messages and the generation of user traffic at the line rate, and the stability of user traffic generation is ensured.
  • FIG. 8 is a schematic structural diagram of a device for generating user traffic according to an embodiment of the present invention.
  • the device for generating user traffic may include a communication module 801, a processing module 802, and a first generation module 803, where:
  • the communication module 801 is configured to receive a user traffic generation instruction.
  • the user traffic generation command is used to enable the user traffic generation, and the user traffic generation command may be manually input by the tester, or may be generated by the CPU, which is not limited by the embodiment of the present invention.
  • the processing module 802 is configured to perform pre-reading on the user message stored in the DRAM and indicated by the index information according to the user traffic generation instruction received by the communication module 801 and the index information pre-stored in the first on-chip SRAM of the FPGA. Operation and caching operations.
  • the first on-chip SRAM of the FPGA is used to store index information of all user messages that need to be used, and the DRAM is used to store all user messages that need to be used.
  • Specifically, the DRAM may store all the user messages in grouped form; that is, all the user messages are grouped according to preset conditions, and the grouped user messages are sequentially stored in the DRAM. Because DRAM offers a large storage space and can hold the user messages of tens of millions of users, storing all needed user packets in the DRAM supports a very large number of user messages, which ensures the scale of the simulated users while saving hardware cost.
  • the first generation module 803 is configured to generate user traffic according to the foregoing user traffic generation instruction and the user messages buffered by the cache operation of the processing module 802.
  • As an optional implementation, the device for generating user traffic may further include a grouping module 804, a storage module 805, and a second generation module 806; the structure of the device may then be as shown in FIG. 9.
  • FIG. 9 is a schematic structural diagram of another device for generating user traffic according to an embodiment of the present invention. Among them:
  • the grouping module 804 is configured to group all user messages according to service type to obtain a plurality of user message groups, and to group each user message group according to access path to obtain multiple sub-user message groups of each user message group.
  • the storage module 805 is configured to store each user packet in each sub-group of each user packet group sequentially in the DRAM;
  • the second generation module 806 is configured to generate the index information of each user packet according to the storage locations of all the user packets in the DRAM; and
  • the storage module 805 may further be configured to store the index information of all the user packets in the first on-chip SRAM.
  • Optionally, the apparatus for generating user traffic may further include a determining module 807; in this case, the structure of the apparatus may be as shown in FIG. 10. FIG. 10 is a schematic structural diagram of still another apparatus for generating user traffic according to an embodiment of the present invention, where:
  • the determining module 807 is configured to: before the first generation module 803 generates user traffic according to the user packets buffered by the processing module 802, determine whether the number of user packets buffered by the processing module 802 reaches a preset number threshold; when the preset number threshold is reached, trigger the first generation module 803 to perform the foregoing operation of generating user traffic according to the user packets buffered by the processing module 802; and when the preset number threshold is not reached, trigger the processing module 802 to continue performing, according to the user traffic generation instruction received by the communication module 801 and the index information pre-stored in the first on-chip SRAM of the FPGA, the pre-read operation and the cache operation on the user packets stored in the DRAM and indicated by the index information.
  • Specifically, when the determining module 807 determines that the number of user packets buffered by the cache operation of the processing module 802 reaches the preset number threshold, it generates a feedback signal and sends the feedback signal to the first generation module 803 to trigger the first generation module 803 to generate user traffic according to the user packets buffered by the cache operation.
  • The processing module 802 may include a reading sub-module 8021 and a caching sub-module 8022, where:
  • the reading sub-module 8021 is configured to read, according to the user traffic generation instruction received by the communication module 801 and the index information pre-stored in the first on-chip SRAM, the user packets that are stored in the DRAM and indicated by the index information; and
  • the caching sub-module 8022 is configured to buffer the user packets read by the reading sub-module 8021 into the second on-chip SRAM.
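  • As a minimal sketch of the division of labor between the two sub-modules, assuming the DRAM is modeled as a flat byte array and the second on-chip SRAM as a list (both assumptions for illustration only):

```python
# Sketch of the reading/caching sub-modules: the reading sub-module
# fetches the packet that each index entry (address, length) points to
# in DRAM; the caching sub-module appends it to the second on-chip SRAM.
def prefetch(index_entries, dram, second_sram):
    for addr, length in index_entries:
        packet = bytes(dram[addr:addr + length])  # reading sub-module 8021
        second_sram.append(packet)                # caching sub-module 8022
    return second_sram

dram = bytearray(b"AA" + b"BBBB")                 # two packets stored back-to-back
sram = prefetch([(0, 2), (2, 4)], dram, [])
```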
  • Optionally, the apparatus for generating user traffic may further include a clearing module 808; in this case, the structure of the apparatus may be as shown in FIG. 11. FIG. 11 is a schematic structural diagram of still another apparatus for generating user traffic according to an embodiment of the present invention, where:
  • the clearing module 808 is configured to initialize the user traffic generation mechanism used in this embodiment of the present invention (such as the pre-read operation and the cache operation) and perform a clear operation on the first on-chip SRAM and the second on-chip SRAM, which ensures the accuracy of user traffic generation.
  • Implementing this embodiment of the present invention enables storage of massive numbers of user packets and line-rate generation of user traffic, and ensures the stability and accuracy of user traffic generation.
  • FIG. 12 is a schematic structural diagram of another apparatus for generating user traffic according to an embodiment of the present invention.
  • The apparatus for generating user traffic may include a CPU 1201, an FPGA 1202, and a DRAM 1203.
  • The FPGA 1202 may include a user packet index information storage module 12021, a user packet storage module 12022, a read/write scheduling module 12023, a user packet cache module 12024, a user packet scheduling module 12025, a user traffic generation module 12026, and a user traffic generation control module 12027. The apparatus for generating user traffic shown in FIG. 12 works as follows:
  • Before user traffic generation is enabled, the CPU 1201 performs an initialization operation on the FPGA 1202 and groups the massive user packets.
  • The user packet storage module 12022 writes the grouped user packets sequentially into the DRAM 1203 through the read/write scheduling module 12023.
  • The user packet index information storage module 12021 stores the index information of the user packets in the DRAM 1203. The CPU 1201 enables user traffic generation, and the user packet scheduling module 12025 reads a certain number of contiguous index entries from the user packet index information storage module 12021.
  • The read/write scheduling module 12023 accesses the DRAM 1203 according to those contiguous index entries, and the DRAM 1203 outputs the user packets they indicate. The user packet cache module 12024 is configured to buffer the user packets output by the DRAM 1203 and output first state feedback information and second state feedback information according to the number of buffered user packets. When the number of buffered user packets reaches a certain watermark, the first state feedback information instructs the user packet scheduling module 12025 to suspend operation, and the second state feedback information instructs the user traffic generation control module 12027 to control, according to control parameters (such as the packet-sending mode, the number of packets to send, and the user bandwidth), the user traffic generation module 12026 to read the buffered user packets from the user packet cache module 12024 to generate user traffic.
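  • As one hedged illustration of how a control parameter such as user bandwidth could pace packet emission, the inter-packet gap can be derived from the configured bandwidth and packet length. The formula and units below are assumptions for illustration, not taken from this publication:

```python
# Pacing sketch: derive the inter-packet gap from a configured user
# bandwidth and the packet length, then schedule a configured number
# of packet emission times (seconds, relative to the first packet).
def emission_schedule(num_packets, packet_len_bytes, user_bw_bps):
    gap_s = packet_len_bytes * 8 / user_bw_bps   # time to serialize one packet
    return [round(i * gap_s, 9) for i in range(num_packets)]

# Hypothetical parameters: 125-byte packets (1000 bits) at 1 Mbit/s
# give one packet per millisecond.
times = emission_schedule(num_packets=3, packet_len_bytes=125, user_bw_bps=1_000_000)
```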
  • The user packet index information storage module 12021 may be one SRAM in the FPGA 1202, the user packet cache module 12024 may be another SRAM in the FPGA 1202, and the modules in the FPGA 1202 may be merged according to actual requirements; this is not limited in the embodiments of the present invention.
  • Implementing this embodiment of the present invention enables storage of massive numbers of user packets and line-rate generation of user traffic, and ensures the stability and accuracy of user traffic generation.
  • The modules and sub-modules in the apparatus of the embodiments of the present invention may be combined, divided, and deleted according to actual requirements.
  • The modules in the embodiments of the present invention may be implemented by a general-purpose integrated circuit, such as a CPU (Central Processing Unit), or by an ASIC (Application Specific Integrated Circuit).
  • The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).


Abstract

Embodiments of the present invention disclose a method and an apparatus for generating user traffic. The method includes: receiving a user traffic generation instruction; performing, according to the user traffic generation instruction and index information pre-stored in a first on-chip static random access memory (SRAM) of a field programmable gate array (FPGA), a pre-read operation and a cache operation on user packets that are stored in a dynamic random access memory (DRAM) and indicated by the index information; and generating user traffic according to the user packets buffered by the cache operation, where the first on-chip SRAM is used to store index information of all user packets that need to be used, and the DRAM is used to store all the user packets. Implementing the embodiments of the present invention enables storage of massive numbers of user packets and line-rate generation of user traffic.

Description

Method and Apparatus for Generating User Traffic

This application claims priority to Chinese Patent Application No. 201510644528.7, filed with the Chinese Patent Office on September 30, 2015 and entitled "Method and Apparatus for Generating User Traffic", which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

The present invention relates to the field of network testing technologies, and in particular, to a method and an apparatus for generating user traffic.

BACKGROUND

Currently, the forwarding performance of a network device can be measured using the test apparatus shown in FIG. 1. The test apparatus shown in FIG. 1 consists of a central processing unit (CPU, Central Processing Unit), a traffic simulation apparatus, a network device under test, and a traffic verification apparatus. The traffic simulation apparatus includes a traffic control module for controlling packet-sending parameters (such as the sending time, the number of packets, and the sending interval), a static random access memory (SRAM, Static Random Access Memory) for storing user packets, and a traffic initiation module for initiating traffic. The basic principle is as follows: the traffic simulation apparatus sends user traffic under the control of the CPU; the network device under test forwards the user traffic sent by the traffic simulation apparatus to the traffic verification apparatus; and the traffic verification apparatus analyzes, verifies, and collects statistics on the user traffic forwarded by the network device under test, so as to analyze the forwarding performance of the network device under test. It can be seen that the user traffic sent by the traffic simulation apparatus is particularly important in testing the forwarding performance of a network device.

In Layer 2/Layer 3 Ethernet testing, the traffic simulation apparatus in the test apparatus shown in FIG. 1 usually sends user traffic, under the control of the CPU, in a generation manner based on proprietary hardware built on a field programmable gate array (FPGA, Field Programmable Gate Array); a schematic diagram of its principle is shown in FIG. 2. Before user traffic is generated, the CPU stores user packet configuration information in an on-chip SRAM and stores user packet header information in an SRAM (on-chip or off-chip), and during the traffic generation phase the header information stored in the SRAM is read cyclically to generate user traffic. This FPGA-based proprietary-hardware generation manner can achieve ultra-high-bandwidth traffic generation and precise control of user traffic. However, because the number of user packets is limited by the size of the SRAM space, this FPGA-based proprietary-hardware generation manner can neither store massive numbers of user packets nor generate user traffic at line rate.
SUMMARY

Embodiments of the present invention disclose a method and an apparatus for generating user traffic, which can implement storage of massive numbers of user packets and line-rate generation of user traffic.

A first aspect of the embodiments of the present invention discloses a method for generating user traffic, where the method includes:

receiving a user traffic generation instruction;

performing, according to the user traffic generation instruction and index information pre-stored in a first on-chip static random access memory SRAM of a field programmable gate array FPGA, a pre-read operation and a cache operation on user packets that are stored in a dynamic random access memory DRAM and indicated by the index information, where the first on-chip SRAM is used to store index information of all user packets that need to be used, and the DRAM is used to store all the user packets; and

generating user traffic according to the user packets buffered by the cache operation.

In a first possible implementation of the first aspect of the embodiments of the present invention, the method further includes:

grouping all the user packets according to service type to obtain multiple user packet groups;

grouping each user packet group according to access path to obtain multiple sub-groups of the user packet group;

storing each user packet in each sub-group of each user packet group sequentially in the DRAM, and generating the index information of all the user packets according to the storage locations of all the user packets in the DRAM; and

storing the index information of all the user packets in the first on-chip SRAM.

With reference to the first aspect or the first possible implementation of the first aspect, in a second possible implementation of the first aspect, before the generating user traffic according to the user packets buffered by the cache operation, the method further includes:

determining whether the number of user packets buffered by the cache operation reaches a preset number threshold;

when the preset number threshold is reached, performing the operation of generating user traffic according to the user packets buffered by the cache operation; and

when the preset number threshold is not reached, performing the operation of performing, according to the user traffic generation instruction and the index information pre-stored in the first on-chip SRAM of the FPGA, the pre-read operation and the cache operation on the user packets that are stored in the DRAM and indicated by the index information, until the number of user packets buffered by the cache operation reaches the preset number threshold.

With reference to the first aspect, the first possible implementation of the first aspect, or the second possible implementation of the first aspect, in a third possible implementation of the first aspect, the performing, according to the user traffic generation instruction and the index information pre-stored in the first on-chip SRAM of the FPGA, a pre-read operation and a cache operation on the user packets that are stored in the DRAM and indicated by the index information includes:

reading, according to the user traffic generation instruction and the index information pre-stored in the first on-chip SRAM of the FPGA, the user packets that are stored in the DRAM and indicated by the index information, and buffering the read user packets into a second on-chip SRAM of the FPGA.

With reference to the third possible implementation of the first aspect, in a fourth possible implementation of the first aspect, before the storing the index information of all the user packets in the first on-chip SRAM, the method further includes:

performing a clear operation on the first on-chip SRAM and the second on-chip SRAM.
A second aspect of the embodiments of the present invention discloses an apparatus for generating user traffic, where the apparatus includes a communication module, a processing module, and a first generation module, where:

the communication module is configured to receive a user traffic generation instruction;

the processing module is configured to perform, according to the user traffic generation instruction and index information pre-stored in a first on-chip SRAM of an FPGA, a pre-read operation and a cache operation on user packets that are stored in a DRAM and indicated by the index information, where the first on-chip SRAM is used to store index information of all user packets that need to be used, and the DRAM is used to store all the user packets; and

the first generation module is configured to generate user traffic according to the user packets buffered by the cache operation.

In a first possible implementation of the second aspect of the embodiments of the present invention, the apparatus further includes a grouping module, a storage module, and a second generation module, where:

the grouping module is configured to group all the user packets according to service type to obtain multiple user packet groups, and group each user packet group according to access path to obtain multiple sub-groups of the user packet group;

the storage module is configured to store each user packet in each sub-group of each user packet group sequentially in the DRAM;

the second generation module is further configured to generate the index information of all the user packets according to the storage locations of all the user packets in the DRAM; and

the storage module is further configured to store the index information of all the user packets in the first on-chip SRAM.

With reference to the second aspect or the first possible implementation of the second aspect, in a second possible implementation of the second aspect, the apparatus further includes a determining module, where:

the determining module is configured to: before the first generation module generates user traffic according to the user packets buffered by the cache operation, determine whether the number of user packets buffered by the cache operation reaches a preset number threshold; when the preset number threshold is reached, trigger the first generation module to perform the operation of generating user traffic according to the user packets buffered by the cache operation; and when the preset number threshold is not reached, trigger the processing module to perform the operation of performing, according to the user traffic generation instruction and the index information pre-stored in the first on-chip SRAM of the FPGA, the pre-read operation and the cache operation on the user packets that are stored in the DRAM and indicated by the index information.

With reference to the second aspect, the first possible implementation of the second aspect, or the second possible implementation of the second aspect, in a third possible implementation of the second aspect, the processing module includes a reading sub-module and a caching sub-module, where:

the reading sub-module is configured to read, according to the user traffic generation instruction and the index information pre-stored in the first on-chip SRAM, the user packets that are stored in the DRAM and indicated by the index information; and

the caching sub-module is configured to buffer the user packets read by the reading sub-module into the second on-chip SRAM.

With reference to the third possible implementation of the second aspect, in a fourth possible implementation of the second aspect, the apparatus further includes a clearing module, where:

the clearing module is configured to: before the caching sub-module buffers the user packets read by the reading sub-module into the second on-chip SRAM, perform a clear operation on the first on-chip SRAM and the second on-chip SRAM.

In the embodiments of the present invention, a user traffic generation instruction is received; according to the instruction and the index information pre-stored in the first on-chip SRAM of the FPGA, a pre-read operation and a cache operation are performed on the user packets that are stored in the DRAM and indicated by the index information; and user traffic is generated according to the buffered user packets, where the first on-chip SRAM stores the index information of all user packets that need to be used, and the DRAM stores all the user packets. It can be seen that implementing the embodiments of the present invention enables storage of massive numbers of user packets by means of the DRAM and achieves line-rate generation of user traffic through the pre-read and cache operations on the stored user packets.
BRIEF DESCRIPTION OF THE DRAWINGS

To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required by the embodiments are briefly introduced below. Apparently, the accompanying drawings described below are merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.

FIG. 1 is a schematic structural diagram of a prior-art test apparatus for the forwarding performance of a network device;

FIG. 2 is a schematic diagram of a prior-art principle of user traffic generation;

FIG. 3 is a schematic flowchart of a method for generating user traffic according to an embodiment of the present invention;

FIG. 4 is a schematic flowchart of another method for generating user traffic according to an embodiment of the present invention;

FIG. 5 is a schematic flowchart of still another method for generating user traffic according to an embodiment of the present invention;

FIG. 6 is a schematic diagram of grouping of user packets according to an embodiment of the present invention;

FIG. 7 is a graph of the number of user packets buffered in a user packet cache versus time according to an embodiment of the present invention;

FIG. 8 is a schematic structural diagram of an apparatus for generating user traffic according to an embodiment of the present invention;

FIG. 9 is a schematic structural diagram of another apparatus for generating user traffic according to an embodiment of the present invention;

FIG. 10 is a schematic structural diagram of still another apparatus for generating user traffic according to an embodiment of the present invention;

FIG. 11 is a schematic structural diagram of still another apparatus for generating user traffic according to an embodiment of the present invention;

FIG. 12 is a schematic structural diagram of still another apparatus for generating user traffic according to an embodiment of the present invention.

DETAILED DESCRIPTION

The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.

The embodiments of the present invention disclose a method and an apparatus for generating user traffic, which can implement storage of massive numbers of user packets by means of a dynamic random access memory (DRAM, Dynamic Random Access Memory) and achieve line-rate generation of user traffic through pre-read and cache operations on the stored user packets. Detailed descriptions are provided below.
Referring to FIG. 3, FIG. 3 is a schematic flowchart of a method for generating user traffic according to an embodiment of the present invention. As shown in FIG. 3, the method may include the following steps.

S301: Receive a user traffic generation instruction.

In this embodiment, the user traffic generation instruction is used to enable user traffic generation, and may be entered manually by a tester or generated by a CPU; this is not limited in this embodiment of the present invention.

S302: Perform, according to the user traffic generation instruction and index information pre-stored in a first on-chip static random access memory SRAM of a field programmable gate array FPGA, a pre-read operation and a cache operation on the user packets that are stored in a dynamic random access memory DRAM and indicated by the index information.

In this embodiment, the first on-chip SRAM of the FPGA is used to store the index information of all user packets that need to be used, and the DRAM is used to store all those user packets. Optionally, the DRAM may be specifically configured to store all the user packets in grouped form; that is, all the user packets are grouped according to preset conditions, and the grouped user packets are stored sequentially in the DRAM. Because DRAM offers large storage space and can hold user packets for tens of millions of users, using DRAM to store all the user packets that need to be used enables storage of massive numbers of user packets, guarantees the scale of simulated users, and saves hardware cost.

In this embodiment, the pre-read operation and the cache operation are performed on those user packets, among the grouped user packets stored in the DRAM, that are indicated by the index information pre-stored in the first on-chip SRAM. This overcomes the difficulty of uncertain DRAM access bandwidth and allows user packets to be read efficiently from the DRAM within fixed cycles, thereby guaranteeing line-rate generation of user traffic.

S303: Generate user traffic according to the user packets buffered by the cache operation.

In this embodiment, generating user traffic according to the user packets buffered by the cache operation may include: generating user traffic according to the buffered user packets and user traffic generation parameters, where the parameters may include a packet-sending mode, a number of packets to send, a user bandwidth, and the like; this is not limited in this embodiment of the present invention.

As an optional implementation, before step S301 is performed, the following operations may further be performed:

grouping all the user packets according to service type to obtain multiple user packet groups;

grouping each user packet group according to access path to obtain multiple sub-groups of the user packet group;

storing each user packet in each sub-group of each user packet group sequentially in the DRAM, and generating the index information of each user packet according to its storage location in the DRAM; and

storing the index information of all the user packets in the first on-chip SRAM.

In this optional implementation, the user packets are grouped and configured contiguously into the DRAM in group order, and the index information of the user packets in the DRAM (that is, the storage location information) is configured into the first on-chip SRAM so that the user packets can be scheduled. The grouping into user packet groups and sub-groups, the storage of each user packet, and the generation and storage of each packet's index information may specifically be performed by the CPU; this is not limited in this embodiment of the present invention. For example, assuming the service types are IPv4 and IPv6, two user packet groups are obtained according to service type: User_ipv4 and User_ipv6. With access paths p01 and p02, the group User_ipv4 is divided into two sub-groups, User_ipv4_p01 and User_ipv4_p02, and the group User_ipv6 is divided into two sub-groups, User_ipv6_p01 and User_ipv6_p02. Because each user packet carries related attributes such as a label, the user packets in sub-group User_ipv4_p01 are User_ipv4_p01_label0, User_ipv4_p01_label1, ..., User_ipv4_p01_labeln-1, and User_ipv4_p01_labeln; those in sub-group User_ipv4_p02 are User_ipv4_p02_label0, User_ipv4_p02_label1, ..., User_ipv4_p02_labeln-1, and User_ipv4_p02_labeln; those in sub-group User_ipv6_p01 are User_ipv6_p01_label0, User_ipv6_p01_label1, ..., User_ipv6_p01_labeln-1, and User_ipv6_p01_labeln; and those in sub-group User_ipv6_p02 are User_ipv6_p02_label0, User_ipv6_p02_label1, ..., User_ipv6_p02_labeln-1, and User_ipv6_p02_labeln. For details, refer to FIG. 6, which is a schematic diagram of grouping of user packets according to an embodiment of the present invention.
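The two-level grouping in the example above (service type, then access path, with per-label packets in each sub-group) can be sketched as follows. The group and packet names follow the example; the number of labels is a hypothetical parameter:

```python
# Sketch of the two-level grouping from the example: service types x
# access paths, each sub-group holding per-label user packet names.
def build_groups(service_types, paths, num_labels):
    return {
        svc: {
            path: [f"User_{svc}_{path}_label{i}" for i in range(num_labels)]
            for path in paths
        }
        for svc in service_types
    }

groups = build_groups(["ipv4", "ipv6"], ["p01", "p02"], num_labels=2)
```

Walking the resulting structure in order (service type, then path, then label) yields the same sequential layout in which the packets would be written into the DRAM.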
As another optional implementation, after step S302 is performed and before step S303 is performed, the following operations may further be performed:

determining whether the number of user packets buffered by the cache operation reaches a preset number threshold (or a certain watermark); when it does, triggering execution of step S303; and when it does not, continuing to perform step S302 until the number of buffered user packets reaches the preset number threshold. Each time step S302 is performed, the user packets buffered by the cache operation differ from those buffered the previous time, and the user packets referred to in step S303 are the sum of the user packets buffered in all executions of step S302 before step S303.

In this other optional implementation, when the number of buffered user packets reaches a certain watermark, a feedback signal is generated, and user traffic is generated according to the feedback signal and the buffered user packets, which guarantees the stability of user traffic generation.

As still another optional implementation, the performing, according to the user traffic generation instruction and the index information pre-stored in the first on-chip SRAM of the FPGA, a pre-read operation and a cache operation on the user packets that are stored in the DRAM and indicated by the index information may include:

reading, according to the user traffic generation instruction and the index information pre-stored in the first on-chip SRAM of the FPGA, the user packets that are stored in the DRAM and indicated by the index information, and buffering the read user packets into a second on-chip SRAM.

As still another optional implementation, before step S301 is performed, an initialization operation may further be performed. Specifically, the CPU may initialize the user traffic generation mechanism used in this embodiment of the present invention (such as the pre-read operation and the cache operation) and perform a clear operation on the first on-chip SRAM and the second on-chip SRAM, which guarantees the accuracy of user traffic generation.

In this embodiment of the present invention, a user traffic generation instruction is received; according to the instruction and the index information pre-stored in the first on-chip SRAM of the FPGA, a pre-read operation and a cache operation are performed on the user packets that are stored in the DRAM and indicated by the index information; and user traffic is generated according to the buffered user packets, where the first on-chip SRAM stores the index information of all user packets that need to be used, and the DRAM stores all the user packets. It can be seen that implementing this embodiment enables storage of massive numbers of user packets by means of the DRAM and achieves line-rate generation of user traffic through the pre-read and cache operations on the stored user packets.
Referring to FIG. 4, FIG. 4 is a schematic flowchart of another method for generating user traffic according to an embodiment of the present invention. The method in FIG. 4 applies to an architecture consisting of a CPU, an FPGA, and a DRAM, where the FPGA includes a first on-chip SRAM and a second on-chip SRAM. As shown in FIG. 4, the method may include the following steps.

S401: Perform an initialization operation by the CPU.

Specifically, this may include: initializing, by the CPU, the user traffic generation mechanism in this embodiment of the present invention (such as the read and cache operations on user packets), and performing a clear operation on the first on-chip SRAM and the second on-chip SRAM of the FPGA.

S402: Perform, by the CPU, grouping and configuration operations on all the user packets that need to be used.

Specifically, this may include: grouping, by the CPU, all the user packets according to service type to obtain multiple user packet groups; grouping, by the CPU, each user packet group according to access path to obtain the sub-groups of each user packet group; storing, by the CPU, the user packets in each sub-group of each user packet group sequentially into the DRAM; obtaining, by the CPU, the location information of each user packet in each sub-group of each user packet group and generating index information; and configuring, by the CPU, the index information of each user packet into the first on-chip SRAM of the FPGA.

S403: Trigger, by the CPU, generation of user traffic. This may include generating, by the CPU, a user traffic generation instruction.

S404: Perform a pre-read operation on user packets. This may include reading user packets sequentially from the DRAM according to the index information stored in the first on-chip SRAM.

S405: Buffer the read user packets.

This may include buffering the user packets read in step S404 into the second on-chip SRAM of the FPGA, where when the number of user packets buffered in step S405 reaches a preset number threshold (a certain watermark), a first feedback signal is generated to trigger execution of step S406, and a second feedback signal is generated to trigger suspension of step S404; when the number of buffered user packets has not reached the preset number threshold, step S404 continues to be performed. This both prevents overflow of the user packets buffered in the second on-chip SRAM and guarantees the stability of user traffic generation.
S406: Generate user traffic.

This may include reading, according to the user traffic generation instruction and the first feedback signal, the user packets buffered in step S405, and generating user traffic according to the read user packets.

It can be seen that implementing this embodiment of the present invention enables storage of massive numbers of user packets and line-rate generation of user traffic, and guarantees the stability of user traffic generation.
Referring to FIG. 5, FIG. 5 is a schematic flowchart of still another method for generating user traffic according to an embodiment of the present invention. As shown in FIG. 5, the method may include the following steps.

S501: The CPU enables user traffic generation.

S502: Determine whether the number of buffered index entries of user packets (or feedback 0) exceeds a first preset number threshold; if not, enable one round of continuous scheduling of user packets, where the buffered index entries are index information of user packets read contiguously from the first on-chip SRAM that stores the index information of all user packets.

S503: After continuous scheduling of user packets is enabled, parse the index information of N user packets contiguously, and output and buffer it in an address cache, where after every round of continuous scheduling of N user packets, the number of user packets buffered in step S507 (or feedback 2) must be checked: if the cache space permits, the next round of continuous scheduling of user packets proceeds; if not, continuous scheduling of user packets is suspended.

S504: Determine, according to feedback 1, whether the DRAM can be accessed; if so, output the index information buffered in the address cache to access the DRAM.

S505: Buffer the user packets read from the DRAM in a data cache.

S506: Check the state of the data cache; if user packets are buffered in the data cache, read them immediately and buffer them into the user packet cache.

S507: Determine whether the number of user packets buffered in the user packet cache reaches a second preset number threshold; if so, output feedback 2, which is used to suspend continuous scheduling of user packets, and determine whether the number of user packets buffered in the user packet cache reaches a third preset number threshold; if so, output feedback 3, which is used to enable generation of user traffic. Outputting feedback 2 prevents overflow of the user packet cache space.

It should be noted that before the CPU disables user traffic generation, feedback 2 is output continuously, whereas feedback 3 is output only once, after the CPU enables user traffic generation and when the number of user packets buffered in the user packet cache reaches the third preset number threshold.

S508: Read the user packets in the user packet cache and generate user traffic.

S509: The CPU disables user traffic generation.

In this embodiment, the relationship between the number of user packets buffered in the user packet cache and time may be as shown in FIG. 7, which is a graph of the number of user packets buffered in a user packet cache versus time according to an embodiment of the present invention. As shown in FIG. 7, before the CPU enables user traffic generation, the number of buffered user packets is 0. After the CPU enables traffic generation, the number gradually increases until it reaches the third preset number threshold (the feedback 3 watermark), at which point user traffic generation starts. Before the feedback 2 watermark, at which continuous scheduling of user packets must be suspended, is reached, the number of buffered user packets fluctuates around the steady-state watermark and the feedback 2 watermark. When user traffic generation is disabled, the number of buffered user packets settles between the feedback 2 watermark and the maximum cache watermark.

It can be seen that implementing this embodiment of the present invention enables storage of massive numbers of user packets and line-rate generation of user traffic, and guarantees the stability of user traffic generation.
Referring to FIG. 8, FIG. 8 is a schematic structural diagram of an apparatus for generating user traffic according to an embodiment of the present invention. As shown in FIG. 8, the apparatus may include a communication module 801, a processing module 802, and a first generation module 803, where:

the communication module 801 is configured to receive a user traffic generation instruction. In this embodiment, the instruction is used to enable user traffic generation and may be entered manually by a tester or generated by a CPU; this is not limited in this embodiment of the present invention;

the processing module 802 is configured to perform, according to the user traffic generation instruction received by the communication module 801 and the index information pre-stored in the first on-chip SRAM of the FPGA, a pre-read operation and a cache operation on the user packets that are stored in the DRAM and indicated by the index information. In this embodiment, the first on-chip SRAM of the FPGA is used to store the index information of all user packets that need to be used, and the DRAM is used to store all those user packets. Optionally, the DRAM may store them in grouped form, that is, grouped according to preset conditions and stored sequentially. Because DRAM offers large storage space and can hold user packets for tens of millions of users, using DRAM to store all the user packets that need to be used enables storage of massive numbers of user packets, guarantees the scale of simulated users, and saves hardware cost; and

the first generation module 803 is configured to generate user traffic according to the user traffic generation instruction and the user packets buffered by the cache operation performed by the processing module 802.

As an optional implementation, on the basis of the structure shown in FIG. 8, the apparatus may further include a grouping module 804, a storage module 805, and a second generation module 806; in this case, the structure of the apparatus may be as shown in FIG. 9, which is a schematic structural diagram of another apparatus for generating user traffic according to an embodiment of the present invention, where:

the grouping module 804 is configured to group all the user packets according to service type to obtain multiple user packet groups, and group each user packet group according to access path to obtain multiple sub-groups of the user packet group;

the storage module 805 is configured to store each user packet in each sub-group of each user packet group sequentially in the DRAM;

the second generation module 806 is configured to generate the index information of all the user packets according to their storage locations in the DRAM; and

the storage module 805 is further configured to store the index information of all the user packets in the first on-chip SRAM.

As another optional implementation, on the basis of the structure shown in FIG. 9, the apparatus may further include a determining module 807; in this case, the structure of the apparatus may be as shown in FIG. 10, which is a schematic structural diagram of still another apparatus for generating user traffic according to an embodiment of the present invention, where:

the determining module 807 is configured to: before the first generation module 803 generates user traffic according to the user packets buffered by the cache operation of the processing module 802, determine whether the number of those buffered user packets reaches a preset number threshold; when the preset number threshold is reached, trigger the first generation module 803 to perform the operation of generating user traffic according to the buffered user packets; and when the preset number threshold is not reached, trigger the processing module 802 to continue performing, according to the user traffic generation instruction received by the communication module 801 and the index information pre-stored in the first on-chip SRAM of the FPGA, the pre-read operation and the cache operation on the user packets that are stored in the DRAM and indicated by the index information.

Specifically, when the determining module 807 determines that the number of user packets buffered by the cache operation of the processing module 802 reaches the preset number threshold, it generates a feedback signal and sends the feedback signal to the first generation module 803 to trigger the first generation module 803 to generate user traffic according to the user packets buffered by the cache operation.

Further optionally, as shown in FIG. 10, the processing module 802 may include a reading sub-module 8021 and a caching sub-module 8022, where:

the reading sub-module 8021 is configured to read, according to the user traffic generation instruction received by the communication module 801 and the index information pre-stored in the first on-chip SRAM, the user packets that are stored in the DRAM and indicated by the index information; and

the caching sub-module 8022 is configured to buffer the user packets read by the reading sub-module 8021 into the second on-chip SRAM.

As another optional implementation, on the basis of the structure shown in FIG. 10, the apparatus may further include a clearing module 808; in this case, the structure of the apparatus may be as shown in FIG. 11, which is a schematic structural diagram of still another apparatus for generating user traffic according to an embodiment of the present invention, where:

the clearing module 808 is configured to initialize the user traffic generation mechanism used in this embodiment of the present invention (such as the pre-read operation and the cache operation) and perform a clear operation on the first on-chip SRAM and the second on-chip SRAM, which guarantees the accuracy of user traffic generation.

It can be seen that implementing this embodiment of the present invention enables storage of massive numbers of user packets and line-rate generation of user traffic, and guarantees the stability and accuracy of user traffic generation.
Referring to FIG. 12, FIG. 12 is a schematic structural diagram of still another apparatus for generating user traffic according to an embodiment of the present invention. As shown in FIG. 12, the apparatus may include a CPU 1201, an FPGA 1202, and a DRAM 1203, where the FPGA 1202 may include a user packet index information storage module 12021, a user packet storage module 12022, a read/write scheduling module 12023, a user packet cache module 12024, a user packet scheduling module 12025, a user traffic generation module 12026, and a user traffic generation control module 12027. The working principle of the apparatus shown in FIG. 12 is as follows.

Before enabling user traffic generation, the CPU 1201 performs an initialization operation on the FPGA 1202 and groups the massive user packets. The user packet storage module 12022 writes the grouped user packets sequentially into the DRAM 1203 through the read/write scheduling module 12023, and the user packet index information storage module 12021 stores the index information of the user packets in the DRAM 1203. The CPU 1201 then enables user traffic generation: the user packet scheduling module 12025 reads a certain number of contiguous index entries from the user packet index information storage module 12021; the read/write scheduling module 12023 accesses the DRAM 1203 according to those contiguous index entries; and the DRAM 1203 outputs the user packets they indicate. The user packet cache module 12024 is configured to buffer the user packets output by the DRAM 1203 and output first state feedback information and second state feedback information according to the number of buffered user packets, where, when the number of buffered user packets reaches a certain watermark, the first state feedback information instructs the user packet scheduling module 12025 to suspend operation, and the second state feedback information instructs the user traffic generation control module 12027 to control, according to control parameters (such as the packet-sending mode, the number of packets to send, and the user bandwidth), the user traffic generation module 12026 to read the buffered user packets from the user packet cache module 12024 and generate user traffic.

It should be noted that the user packet index information storage module 12021 may be one SRAM in the FPGA 1202, the user packet cache module 12024 may be another SRAM in the FPGA 1202, and the modules in the FPGA 1202 may be merged, deleted, or split according to actual requirements; this is not limited in the embodiments of the present invention.

It can be seen that implementing this embodiment of the present invention enables storage of massive numbers of user packets and line-rate generation of user traffic, and guarantees the stability and accuracy of user traffic generation.

It should be noted that the descriptions of the foregoing embodiments each have their own emphases; for a part not described in detail in one embodiment, refer to the related descriptions of other embodiments. In addition, a person skilled in the art should also know that the embodiments described in this specification are all preferred embodiments, and the actions, modules, and sub-modules involved are not necessarily required by the present invention.

The order of the steps in the method embodiments of the present invention may be adjusted, and the steps may be combined or removed according to actual requirements.

The modules and sub-modules in the apparatus embodiments of the present invention may be combined, divided, or deleted according to actual requirements.

The modules in the embodiments of the present invention may be implemented by a general-purpose integrated circuit, such as a CPU (Central Processing Unit), or by an ASIC (Application Specific Integrated Circuit).

A person of ordinary skill in the art may understand that all or some of the procedures in the foregoing method embodiments may be completed by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the procedures of the foregoing method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.

The method and apparatus for generating user traffic provided in the embodiments of the present invention have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present invention, and the descriptions of the foregoing embodiments are merely intended to help understand the method and core idea of the present invention. Meanwhile, a person of ordinary skill in the art may make modifications to the specific implementations and the application scope according to the idea of the present invention. In conclusion, the content of this specification shall not be construed as a limitation on the present invention.

Claims (10)

  1. A method for generating user traffic, wherein the method comprises:
    receiving a user traffic generation instruction;
    performing, according to the user traffic generation instruction and index information pre-stored in a first on-chip static random access memory SRAM of a field programmable gate array FPGA, a pre-read operation and a cache operation on user packets that are stored in a dynamic random access memory DRAM and indicated by the index information, wherein the first on-chip SRAM is configured to store index information of all user packets that need to be used, and the DRAM is configured to store all the user packets; and
    generating user traffic according to the user packets buffered by the cache operation.
  2. The method according to claim 1, wherein the method further comprises:
    grouping all the user packets according to service type to obtain multiple user packet groups;
    grouping each user packet group according to access path to obtain multiple sub-groups of the user packet group;
    storing each user packet in each sub-group of each user packet group sequentially in the DRAM, and generating the index information of all the user packets according to storage locations of all the user packets in the DRAM; and
    storing the index information of all the user packets in the first on-chip SRAM.
  3. The method according to claim 1 or 2, wherein before the generating user traffic according to the user packets buffered by the cache operation, the method further comprises:
    determining whether the number of user packets buffered by the cache operation reaches a preset number threshold;
    when the preset number threshold is reached, performing the operation of generating user traffic according to the user packets buffered by the cache operation; and
    when the preset number threshold is not reached, performing the operation of performing, according to the user traffic generation instruction and the index information pre-stored in the first on-chip SRAM of the FPGA, the pre-read operation and the cache operation on the user packets that are stored in the DRAM and indicated by the index information, until the number of user packets buffered by the cache operation reaches the preset number threshold.
  4. The method according to any one of claims 1 to 3, wherein the performing, according to the user traffic generation instruction and the index information pre-stored in the first on-chip SRAM of the FPGA, a pre-read operation and a cache operation on the user packets that are stored in the DRAM and indicated by the index information comprises:
    reading, according to the user traffic generation instruction and the index information pre-stored in the first on-chip SRAM of the FPGA, the user packets that are stored in the DRAM and indicated by the index information, and buffering the read user packets into a second on-chip SRAM of the FPGA.
  5. The method according to claim 4, wherein before the storing the index information of all the user packets in the first on-chip SRAM, the method further comprises:
    performing a clear operation on the first on-chip SRAM and the second on-chip SRAM.
  6. An apparatus for generating user traffic, wherein the apparatus comprises a communication module, a processing module, and a first generation module, wherein:
    the communication module is configured to receive a user traffic generation instruction;
    the processing module is configured to perform, according to the user traffic generation instruction and index information pre-stored in a first on-chip SRAM of an FPGA, a pre-read operation and a cache operation on user packets that are stored in a DRAM and indicated by the index information, wherein the first on-chip SRAM is configured to store index information of all user packets that need to be used, and the DRAM is configured to store all the user packets; and
    the first generation module is configured to generate user traffic according to the user packets buffered by the cache operation.
  7. The apparatus according to claim 6, wherein the apparatus further comprises a grouping module, a storage module, and a second generation module, wherein:
    the grouping module is configured to group all the user packets according to service type to obtain multiple user packet groups, and group each user packet group according to access path to obtain multiple sub-groups of the user packet group;
    the storage module is configured to store each user packet in each sub-group of each user packet group sequentially in the DRAM;
    the second generation module is further configured to generate the index information of all the user packets according to storage locations of all the user packets in the DRAM; and
    the storage module is further configured to store the index information of all the user packets in the first on-chip SRAM.
  8. The apparatus according to claim 6 or 7, wherein the apparatus further comprises a determining module, wherein:
    the determining module is configured to: before the first generation module generates user traffic according to the user packets buffered by the cache operation, determine whether the number of user packets buffered by the cache operation reaches a preset number threshold; when the preset number threshold is reached, trigger the first generation module to perform the operation of generating user traffic according to the user packets buffered by the cache operation; and when the preset number threshold is not reached, trigger the processing module to perform the operation of performing, according to the user traffic generation instruction and the index information pre-stored in the first on-chip SRAM of the FPGA, the pre-read operation and the cache operation on the user packets that are stored in the DRAM and indicated by the index information.
  9. The apparatus according to any one of claims 6 to 8, wherein the processing module comprises a reading sub-module and a caching sub-module, wherein:
    the reading sub-module is configured to read, according to the user traffic generation instruction and the index information pre-stored in the first on-chip SRAM, the user packets that are stored in the DRAM and indicated by the index information; and
    the caching sub-module is configured to buffer the user packets read by the reading sub-module into the second on-chip SRAM.
  10. The apparatus according to claim 9, wherein the apparatus further comprises a clearing module, wherein the clearing module is configured to: before the caching sub-module buffers the user packets read by the reading sub-module into the second on-chip SRAM, perform a clear operation on the first on-chip SRAM and the second on-chip SRAM.
PCT/CN2016/097245 2015-09-30 2016-08-29 Method and apparatus for generating user traffic WO2017054603A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP16850227.6A EP3349405B1 (en) 2015-09-30 2016-08-29 Method and apparatus for generating user traffic
US15/940,993 US10700980B2 (en) 2015-09-30 2018-03-30 User traffic generation method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510644528.7 2015-09-30
CN201510644528.7A CN105207953B (zh) 2015-09-30 2015-09-30 Method and apparatus for generating user traffic

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/940,993 Continuation US10700980B2 (en) 2015-09-30 2018-03-30 User traffic generation method and apparatus

Publications (1)

Publication Number Publication Date
WO2017054603A1 true WO2017054603A1 (zh) 2017-04-06

Family

ID=54955394

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/097245 WO2017054603A1 (zh) 2015-09-30 2016-08-29 一种用户流量的生成方法及装置

Country Status (4)

Country Link
US (1) US10700980B2 (zh)
EP (1) EP3349405B1 (zh)
CN (1) CN105207953B (zh)
WO (1) WO2017054603A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
  • CN114500385A * 2021-12-23 2022-05-13 武汉微创光电股份有限公司 Method and system for implementing Gigabit Ethernet data traffic shaping by means of an FPGA

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
  • CN105207953B 2015-09-30 2019-02-05 Huawei Technologies Co., Ltd. Method and apparatus for generating user traffic

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
  • CN1487699A * 2002-09-30 2004-04-07 Huawei Technologies Co., Ltd. Traffic simulation method and apparatus for switching network testing
WO2005125098A1 (en) * 2004-06-15 2005-12-29 Alcatel Improved networks statistics processing device
  • CN103248540A * 2013-05-27 2013-08-14 University of Jinan FPGA network traffic generation system and method based on a multifractal wavelet model
  • CN104168162A * 2014-08-20 2014-11-26 University of Electronic Science and Technology of China Hardware/software co-implemented traffic generator for switch verification testing
  • CN104518899A * 2013-09-30 2015-04-15 China Telecom Corporation Limited Network routing traffic simulation method and apparatus
  • CN105207953A * 2015-09-30 2015-12-30 Huawei Technologies Co., Ltd. Method and apparatus for generating user traffic

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7174427B2 (en) * 2003-12-05 2007-02-06 Intel Corporation Device and method for handling MPLS labels
  • CN100372316C 2005-05-13 2008-02-27 Tsinghua University Traffic generation and transmission circuit assembly for a 10G network performance tester
US20080052270A1 (en) * 2006-08-23 2008-02-28 Telefonaktiebolaget Lm Ericsson (Publ) Hash table structure and search method
US7890702B2 (en) * 2007-11-26 2011-02-15 Advanced Micro Devices, Inc. Prefetch instruction extensions
US9172647B2 (en) * 2013-04-25 2015-10-27 Ixia Distributed network test system
  • CN103501209B * 2013-09-25 2017-04-19 Institute of Acoustics, Chinese Academy of Sciences Single-service offloading method and device for cooperative transmission over heterogeneous multiple networks

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
  • CN1487699A * 2002-09-30 2004-04-07 Huawei Technologies Co., Ltd. Traffic simulation method and apparatus for switching network testing
WO2005125098A1 (en) * 2004-06-15 2005-12-29 Alcatel Improved networks statistics processing device
  • CN103248540A * 2013-05-27 2013-08-14 University of Jinan FPGA network traffic generation system and method based on a multifractal wavelet model
  • CN104518899A * 2013-09-30 2015-04-15 China Telecom Corporation Limited Network routing traffic simulation method and apparatus
  • CN104168162A * 2014-08-20 2014-11-26 University of Electronic Science and Technology of China Hardware/software co-implemented traffic generator for switch verification testing
  • CN105207953A * 2015-09-30 2015-12-30 Huawei Technologies Co., Ltd. Method and apparatus for generating user traffic

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
  • CN114500385A * 2021-12-23 2022-05-13 武汉微创光电股份有限公司 Method and system for implementing Gigabit Ethernet data traffic shaping by means of an FPGA

Also Published As

Publication number Publication date
CN105207953B (zh) 2019-02-05
EP3349405B1 (en) 2020-07-15
EP3349405A4 (en) 2018-09-26
US10700980B2 (en) 2020-06-30
CN105207953A (zh) 2015-12-30
US20180227233A1 (en) 2018-08-09
EP3349405A1 (en) 2018-07-18

Similar Documents

Publication Publication Date Title
US7773519B2 (en) Method and system to manage network traffic congestion
US8831025B2 (en) Parallel processing using multi-core processor
US10026442B2 (en) Data storage mechanism using storage system determined write locations
US9407460B2 (en) Cut-through processing for slow and fast ports
US9609016B2 (en) Web redirection for content scanning
US20100146143A1 (en) System and Method for Analyzing Data Traffic
US8972513B2 (en) Content caching
US10031837B1 (en) Dynamic service debugging in a virtual environment
JP2017539163A (ja) Sshプロトコルに基づく会話解析方法及びシステム
US10623450B2 (en) Access to data on a remote device
US10530683B2 (en) High-quality adaptive bitrate video through multiple links
US10200293B2 (en) Dynamically offloading flows from a service chain
US20140258781A1 (en) Multi-Stage Application Layer Test Packet Generator For Testing Communication Networks
WO2017054603A1 (zh) 一种用户流量的生成方法及装置
WO2016029738A1 (zh) 对流数据进行处理的方法及装置
CN102395958A (zh) 一种数据包的并发处理方法及设备
US20190132252A1 (en) Communication control device and communication control method
US8914542B1 (en) Content caching
US11381630B2 (en) Transmitting data over a network in representational state transfer (REST) applications
WO2021056715A1 (zh) 一种服务器的代理监测方法及相关产品
WO2016101485A1 (zh) 一种写入tcam条目的方法及装置
WO2022062758A1 (zh) 激励报文发送方法、装置、电子设备及存储介质
US20230409506A1 (en) Data transmission method, device, network system, and storage medium
US20130318291A1 (en) Methods, systems, and computer readable media for generating test packets in a network test device using value list caching
JP2010004262A (ja) 情報受信装置および情報受信方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16850227

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2016850227

Country of ref document: EP