WO2023115978A1 - Message processing method, apparatus and electronic device - Google Patents

Message processing method, apparatus and electronic device

Info

Publication number
WO2023115978A1
Authority
WO
WIPO (PCT)
Prior art keywords
address
target
node
rep
identifier
Prior art date
Application number
PCT/CN2022/111397
Other languages
English (en)
French (fr)
Inventor
张志强
刘峰
Original Assignee
北京百度网讯科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京百度网讯科技有限公司
Publication of WO2023115978A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22 Indexing; Data structures therefor; Storage structures
    • G06F16/2228 Indexing structures
    • G06F16/2255 Hash tables
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2455 Query execution
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling

Definitions

  • The present disclosure relates to the technical field of cloud computing, and in particular to a message processing method, apparatus and electronic device.
  • Inserting a data processing unit (DPU) card into a cloud server allows the work of the network cores, cloud disk cores and management cores in the host (Host) to be offloaded to the DPU card, eliminating core occupation on the host side, which effectively improves network performance and increases the number of cores that can be sold.
  • Multiple elastic network interface cards are virtualized in the cloud server, and each elastic network interface card has a Media Access Control (MAC) address, a globally unique identifier used to determine the location of a network device; the physical function (PF) of each elastic network interface card can virtualize several virtual functions (VF) according to business needs, and each VF is also assigned a globally unique MAC address.
  • Each VF or PF is dynamically allocated a number of queues through the DPU.
  • The DPU card supports a certain number of queues; all VFs and PFs share these queues, and the software allocates them dynamically when needed.
  • When the network card of the DPU receives a message from the network, it needs to determine, according to the MAC address, to which VF or PF the message is transmitted; this is equivalent to a one-to-one mapping between MAC addresses and VFs/PFs, and the software maintains proxy ports (Represent Port, REP) in one-to-one correspondence with the PFs and VFs on the host side. After the VF or PF is determined, since one VF or one PF can be allocated multiple queues, it is still necessary to determine to which of those queues the message is transmitted.
  • At present, when a message is received, the queue query is performed as a centralized lookup in a single table that serves all queries.
  • The mapping commonly used in this table is a bitmap: the bits corresponding to the queues that belong to a certain REP are set to 1, and the bits corresponding to the queues that do not belong to that REP are set to 0. For example, with 2048 queues, 2048 bits are required to represent the assignment of each queue to that REP, so a single table entry is at least 256 bytes in size.
  • The present disclosure provides a message processing method, apparatus and electronic device.
  • In a first aspect, an embodiment of the present disclosure provides a message processing method, the method including:
  • obtaining a first message, where the first message includes a destination media access control MAC address;
  • performing hash calculation on the destination MAC address to obtain a target hash value;
  • using the target hash value, in the case that the destination MAC address is found in a pre-acquired MAC address hash table, obtaining the target proxy port REP identifier corresponding to the destination MAC address in the MAC address hash table;
  • obtaining the target queue identifier corresponding to the target REP identifier;
  • transmitting the first message to the queue corresponding to the target queue identifier.
  • In this embodiment, after the target hash value of the destination MAC address is obtained, and the destination MAC address is found in the MAC address hash table using that hash value, it is sufficient to query the MAC address hash table for the target REP identifier corresponding to the destination MAC address; the corresponding target queue identifier is then obtained according to that target REP identifier, queue mapping is realized, and the first message is transmitted to the queue corresponding to the target queue identifier. In this way, there is no need to query a large table that uses as many bits as there are queues to represent the assignment of queues to a REP.
  • Because a MAC address hash table is used, the space occupied by the table is reduced; when the destination MAC address is found in the pre-acquired MAC address hash table, the target REP identifier corresponding to the destination MAC address is obtained from the MAC address hash table, and the corresponding target queue identifier is then obtained according to the target REP identifier, which also avoids performing the destination MAC address query, the target REP identifier query and the target queue identifier query in a single table. In this way, the stability of the queue identifier query is improved, and thus the stability of the transmission of the first message is improved.
  • In a second aspect, an embodiment of the present disclosure provides a message processing apparatus, the apparatus including:
  • a first acquisition module configured to acquire a first message, where the first message includes a destination MAC address;
  • a hash processing module configured to perform hash calculation on the destination MAC address to obtain a target hash value;
  • a first identifier acquisition module configured to, using the target hash value, in the case that the destination MAC address is found in a pre-acquired MAC address hash table, obtain the target proxy port REP identifier corresponding to the destination MAC address in the MAC address hash table;
  • a second identifier acquisition module configured to acquire the target queue identifier corresponding to the target REP identifier;
  • a transmission module configured to transmit the first packet to the queue corresponding to the target queue identifier.
  • In a third aspect, an embodiment of the present disclosure further provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein
  • the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the message processing method provided in the first aspect of the present disclosure.
  • an embodiment of the present disclosure further provides a non-transitory computer-readable storage medium storing computer instructions, the computer instructions are used to enable the computer to execute the message processing method provided in the first aspect of the present disclosure.
  • an embodiment of the present disclosure provides a computer program product, including a computer program, and when the computer program is executed by a processor, implements the message processing method provided in the first aspect of the present disclosure.
  • FIG. 1 is a first schematic flowchart of a message processing method according to an embodiment of the present disclosure
  • FIG. 2 is a second schematic flowchart of a message processing method according to an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of a message processing method according to an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram of a circular linked list of an embodiment provided by the present disclosure.
  • FIG. 5 is a schematic diagram of a newly added node in a circular linked list according to an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram of deleting nodes in a circular linked list according to an embodiment of the present disclosure
  • FIG. 7 is a structural diagram of a message processing device according to an embodiment of the present disclosure.
  • Fig. 8 is a block diagram of an electronic device used to implement the message processing method of the embodiment of the present disclosure.
  • As shown in FIG. 1, the present disclosure provides a message processing method, the method including:
  • Step S101 Obtain a first packet, where the first packet includes a destination MAC address.
  • The method can be applied to a cloud server, where the cloud server includes a host (Host) and a DPU card, and the DPU card includes a field programmable gate array (FPGA), a system on chip (SOC) and a network card.
  • The first message may be received through the network card, and the message processing method in this embodiment may be executed by the FPGA.
  • It should be noted that the queues are located on the Host side, and the received first message is placed into one of the queues on the Host side; it is therefore first necessary to determine into which queue the first message is placed, that is, a queue query (queue mapping) is required.
  • Step S102 Perform hash calculation on the destination MAC address to obtain a target hash value.
  • It should be noted that the hash algorithm used here is the same as the hash algorithm used to hash the MAC addresses when the MAC address hash table is initialized in advance. There are many kinds of hash algorithms, and the specific hash algorithm used is not limited in the present disclosure.
  • Step S103 Using the target hash value, if the destination MAC address is found in the pre-acquired MAC address hash table, obtain the target proxy port REP identifier corresponding to the destination MAC address in the MAC address hash table.
  • The MAC address hash table includes a hash value sequence, a MAC address sequence and a REP identifier sequence.
  • The MAC address hash table is initialized (created) in advance from the multiple supported MAC addresses and the REP identifiers corresponding to those MAC addresses.
  • The SOC may perform the initialization of each table in the embodiments of the present disclosure.
  • That the destination MAC address is found in the MAC address hash table indicates that the destination MAC address is one of the multiple supported MAC addresses; in this case, the MAC address hash table can then be queried for the target REP identifier corresponding to the destination MAC address, so as to obtain the target REP identifier.
  • Step S104 Obtain the target queue ID corresponding to the target REP ID.
  • That is, the corresponding target queue identifier is obtained according to the target REP identifier, so as to implement queue mapping.
  • The REP identifier and the queue identifier may have a one-to-many relationship, and the above target queue identifier is one of the multiple queue identifiers corresponding to the target REP identifier.
  • Step S105 Transmit the first packet to the queue corresponding to the target queue identifier.
  • After the target queue identifier is obtained, the first message can be sent to the queue corresponding to the target queue identifier, so that the message can be read from that queue for corresponding processing. It should be noted that the queues are located on the Host; after the FPGA obtains the target queue identifier, it sends the first message into the Host queue corresponding to the target queue identifier.
  • In this embodiment, after the target hash value of the destination MAC address is obtained and the destination MAC address is found in the MAC address hash table, it is sufficient to query the MAC address hash table for the target REP identifier corresponding to the destination MAC address.
  • Then, according to the target REP identifier queried from the MAC address hash table, the corresponding target queue identifier can be obtained, queue mapping is realized, and the first message is transmitted to the queue corresponding to the target queue identifier.
  • Because a MAC address hash table is used, the space occupied by the table can be reduced, and there is no need to query a large table that uses as many bits as there are queues to represent the assignment of queues to a REP.
  • The target REP identifier corresponding to the destination MAC address is obtained from the MAC address hash table, and the corresponding target queue identifier is then obtained according to the target REP identifier, which also avoids performing the destination MAC address query, the target REP identifier query and the target queue identifier query in a single table. In this way, the stability of the queue identifier query can be improved, and thus the stability of the transmission of the first message can be improved.
  • Obtaining the target queue identifier corresponding to the target REP identifier includes: querying the target node address corresponding to the target REP identifier in a pre-acquired circular linked list node address table, and obtaining the target queue identifier from the target entry of a pre-acquired queue circular linked list;
  • wherein the circular linked list node address table includes a REP identifier sequence and a circular linked list node address sequence corresponding to the REP identifier sequence;
  • the queue circular linked list includes the addresses of M nodes and the entries of the nodes, the addresses of the M nodes include the circular linked list node address sequence, M is an integer greater than 1, any entry includes a queue identifier, the target entry is the entry of the target node, and the target node is the node corresponding to the target node address among the M nodes.
  • a message processing method is provided, as shown in FIG. 2, the method includes:
  • Step S201 Obtain a first packet, where the first packet includes a destination MAC address.
  • Step S202 Perform hash calculation on the destination MAC address to obtain a target hash value.
  • Step S203 Using the target hash value, if the destination MAC address is found in the pre-obtained MAC address hash table, obtain the target proxy port REP identifier corresponding to the destination MAC address in the MAC address hash table.
  • Step S204 Query the target node address corresponding to the target REP identifier in the pre-acquired circular linked list node address table.
  • the circular linked list node address table includes the REP identification sequence and the circular linked list node address sequence corresponding to the REP identification sequence.
  • Step S205 Obtain the target queue identifier from the target entry of the pre-acquired queue circular linked list.
  • The queue circular linked list includes the addresses of M nodes and the entries of the nodes; the addresses of the M nodes include the circular linked list node address sequence, M is an integer greater than 1, any entry includes a queue identifier, the target entry is the entry of the target node, and the target node is the node corresponding to the target node address among the M nodes.
  • Step S206 Transmit the first packet to the queue corresponding to the target queue identifier.
  • The above steps S201-S203 correspond one-to-one to the above steps S101-S103, and step S206 corresponds to step S105, which will not be repeated here.
  • The SOC can pre-initialize the queue circular linked list and the circular linked list node address table; the two tables are related. The queue circular linked list can be understood as a circular linked list formed by M nodes, each node includes the address of the node and the entry of the node, and any entry includes a queue identifier, so it can be regarded as a circular linked list of queues; the address of a node can also be understood as the address at which the entry of the node is stored.
  • The circular linked list node address table includes the circular linked list node address sequence and the corresponding REP identifier sequence.
  • Any identifier in the REP identifier sequence corresponds to one circular linked list node address in the circular linked list node address sequence, and the circular linked list node addresses corresponding to different identifiers in the REP identifier sequence are different.
  • Any circular linked list node address in the circular linked list node address sequence is the address of a node in the queue circular linked list; the sequence can include the addresses of N nodes, where N is an integer greater than 1 and less than M, and the circular linked list node addresses in the circular linked list node address sequence belong to the addresses of the M nodes.
  • In this embodiment, the target REP identifier is first queried in the MAC address hash table, the target node address corresponding to the target REP identifier is then queried in the circular linked list node address table, and the target queue identifier is then obtained from the target entry of the queue circular linked list, so as to realize queue mapping. In this way, the target REP identifier query is allocated to the MAC address hash table, the target node address query is allocated to the circular linked list node address table, and the target queue identifier query is allocated to the queue circular linked list, which realizes a pipelined query and prevents query stability from being affected by concentrating all queries in one table. This improves the stability of the queue query, and the first message is transmitted to the queried queue, improving the stability of the transmission of the first message.
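  • To make the three tables concrete, the following is a minimal, hypothetical C sketch of their entry layouts; the field names and widths are illustrative assumptions and are not specified by the patent:

```c
#include <stdint.h>
#include <stdbool.h>

/* Entry of the MAC address hash table (entries of the first and second hash
   tables are assumed here to share the same shape).                          */
typedef struct {
    uint8_t  mac[6];       /* MAC address stored in this information group            */
    uint16_t rep_id;       /* REP identifier mapped to that MAC address               */
    uint16_t next_addr;    /* next address: chains entries whose hash values collide  */
    uint16_t start_addr;   /* first address (s_addr) of the REP's circular linked list*/
    bool     valid;        /* effective indication information                        */
} mac_hash_entry;

/* Entry of the circular linked list node address table (one per REP identifier). */
typedef struct {
    uint16_t node_addr;    /* address of the circular-list node to read next          */
    bool     valid;        /* false until the first message of this REP is seen       */
} rep_node_addr_entry;

/* Entry (node) of the queue circular linked list. */
typedef struct {
    uint16_t rep_id;       /* REP identifier this queue is currently assigned to      */
    uint16_t queue_id;     /* queue identifier (Q id) on the Host side                */
    uint16_t next_addr;    /* address of the next node of the same REP's circular list*/
} queue_list_node;
```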
  • The MAC address hash table includes a first hash table and a second hash table;
  • the first hash table includes a first hash value sequence and a first information group sequence corresponding to the first hash value sequence, and any information group in the first information group sequence includes a MAC address, a REP identifier and a next address;
  • the second hash table includes a hash address sequence and a second information group sequence corresponding to the hash address sequence, and any information group in the second information group sequence includes a MAC address, a REP identifier and a next address;
  • obtaining the target proxy port REP identifier corresponding to the destination MAC address in the MAC address hash table includes any one of the following:
  • using the target hash value, in the case that the destination MAC address is found in the first hash table, obtaining the target REP identifier from the first hash table;
  • using the target hash value, in the case that the destination MAC address is not found in the first hash table, and, according to a target next address, the destination MAC address is found in the second hash table, obtaining the target REP identifier from the second hash table, where the target next address is the next address corresponding to the target hash value in the first hash table.
  • The second hash table can be understood as a hash bucket.
  • That is, the target hash value is first used to query the destination MAC address in the first hash table. If the destination MAC address is found, the target REP identifier corresponding to the destination MAC address is obtained directly from the first hash table. If the destination MAC address is not found, the target next address corresponding to the target hash value is obtained from the first hash table, the query continues in the second hash table according to that target next address, and when the destination MAC address is found in the second hash table, the target REP identifier is obtained from the second hash table.
  • In this way, the query pressure of the MAC address hash table can be shared between the first hash table and the second hash table, preventing the target REP identifier query from being concentrated in one table, realizing a pipelined query, improving query stability, and further improving the stability of the transmission of the first message.
  • Using the target hash value, in the case that the destination MAC address is not found in the first hash table, and, according to the target next address, the destination MAC address is found in the second hash table, obtaining the target REP identifier from the second hash table includes:
  • using the target next address as a current query address, and determining a current information group based on the current query address, where the current information group is the information group corresponding to a first hash address in the second information group sequence, and the first hash address is the hash address in the hash address sequence that equals the current query address;
  • in the case that the destination MAC address is not found in the current information group of the second hash table, updating the current query address to the next address in the current information group, returning to the step of determining the current information group based on the current query address and querying again, until the destination MAC address is found in the current information group of the second hash table, and obtaining the target REP identifier from that current information group.
  • In the hash tables, linking is realized through the next address; for example, multiple MAC addresses that have the same hash value are linked in sequence through their corresponding next addresses.
  • In this embodiment, if the destination MAC address is not found in the first hash table, the target next address is first used as the current query address, and the current query address is used to query the second hash table, where the information group corresponding to the current query address contains a MAC address.
  • If the MAC address in the current information group is not the destination MAC address, the current query address is updated to the next address in the current information group (the second information group at that next address also contains a MAC address), the current information group is re-determined based on the updated current query address, and the destination MAC address is queried in the re-determined current information group.
  • That is, the query is repeated: if the destination MAC address is still not found, the current query address is updated again and the query continues, until the destination MAC address is found in the current information group of the second hash table, which indicates that the destination MAC address query has succeeded.
  • At this point, the target REP identifier corresponding to the destination MAC address can be acquired from the current information group.
  • In this embodiment, the next address is used as the basis of the query: the target next address is first used as the current query address; if the destination MAC address is not found, the current query address is updated to the next address in the current information group and the query continues, with the current information group determined according to the current query address, until the destination MAC address is found in the current information group of the second hash table and the target REP identifier is obtained from that current information group. In this way, even if different MAC addresses have the same hash value, these MAC addresses are linked through next addresses and queried sequentially through the next addresses, improving query efficiency.
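  • A minimal sketch of this two-level lookup, continuing the illustrative entry layouts above; the byte-XOR hash, the table sizes and the NO_NEXT end-of-chain marker are assumptions for illustration, not details fixed by the patent:

```c
#include <string.h>

#define TABLE_SIZE 256
#define NO_NEXT    0xFFFF                        /* assumed "end of chain" marker     */

extern mac_hash_entry first_table[TABLE_SIZE];   /* indexed by the 8-bit hash value   */
extern mac_hash_entry second_table[TABLE_SIZE];  /* the hash bucket table             */

/* Simple byte-XOR hash: 48-bit MAC -> 8-bit key (one of the options mentioned). */
static uint8_t hash8(const uint8_t mac[6]) {
    uint8_t key = 0;
    for (int i = 0; i < 6; i++)
        key ^= mac[i];
    return key;
}

/* Returns true and fills *out when the destination MAC address is found. */
bool lookup_rep(const uint8_t dst_mac[6], mac_hash_entry *out) {
    const mac_hash_entry *e = &first_table[hash8(dst_mac)];
    if (memcmp(e->mac, dst_mac, 6) == 0) {       /* hit in the first hash table       */
        *out = *e;
        return true;
    }
    uint16_t cur = e->next_addr;                 /* target next address               */
    while (cur != NO_NEXT) {                     /* walk the collision chain          */
        e = &second_table[cur];
        if (memcmp(e->mac, dst_mac, 6) == 0) {   /* hit in the second hash table      */
            *out = *e;
            return true;
        }
        cur = e->next_addr;                      /* update the current query address  */
    }
    return false;                                /* unsupported MAC address           */
}
```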
  • Querying the target node address corresponding to the target REP identifier in the pre-acquired circular linked list node address table includes:
  • in the case that the effective indication information corresponding to the target REP identifier in the MAC address hash table is a first preset value, obtaining the first address corresponding to the target REP identifier from the MAC address hash table, and determining the first address as the target node address;
  • in the case that the effective indication information corresponding to the target REP identifier is a second preset value, acquiring the target node address corresponding to the target REP identifier from the circular linked list node address table.
  • At the beginning, the circular linked list node addresses in the circular linked list node address table may be empty, that is, the circular linked list node address corresponding to each REP identifier in the table may be empty.
  • If the effective indication information is the first preset value (for example, 0), it means that the received message is the first message for that REP, and the circular linked list node address corresponding to that REP identifier in the circular linked list node address table is empty. Therefore, in the case that the effective indication information corresponding to the target REP identifier in the MAC address hash table is the first preset value, the first address corresponding to the target REP identifier is first obtained from the MAC address hash table and used as the target node address.
  • If the effective indication information is the second preset value (for example, 1), it means that the received message is not the first message for that REP; the circular linked list node address corresponding to the REP identifier in the circular linked list node address table was updated when earlier messages were received, and the effective indication information was updated accordingly, so the circular linked list node address corresponding to the REP identifier can be obtained directly from the circular linked list node address table.
  • In this embodiment, the target node address is determined in different ways depending on the effective indication information, which improves the flexibility of obtaining the target node address; the target queue identifier is then obtained according to the target node address, thereby improving the flexibility of the queue query.
  • Any entry further includes a next address, and after obtaining the target queue identifier from the target entry of the pre-acquired queue circular linked list, the method further includes: using the next address in the target entry as the node address of the target REP identifier in the circular linked list node address table;
  • and, in the case that the effective indication information is the first preset value, updating the effective indication information to the second preset value.
  • That is, after the target queue identifier is obtained, the node address of the target REP identifier in the circular linked list node address table is updated, realizing a dynamic update of the node address corresponding to the target REP identifier.
  • In this way, when the next message of the same REP arrives, a new queue identifier can be found and the message can be sent to a new queue, realizing dynamic queue mapping and preventing messages of the same REP identifier from being continuously sent to the same queue, which reduces the pressure on the queues.
  • The next address in an entry of the queue circular linked list can be understood as the address of the next node of the node corresponding to that entry in the queue circular linked list.
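  • A minimal sketch of this per-REP rotation over the circular linked list, continuing the illustrative layouts above; the function and table names are assumptions, and the valid flag is kept in the node address table here, which is one of the placements the text suggests:

```c
#define MAX_REPS 256

extern rep_node_addr_entry node_addr_table[MAX_REPS]; /* circular linked list node address table */
extern queue_list_node     list_ram[];                /* queue circular linked list storage      */

/* Resolves the queue identifier for one message of the REP found by the MAC lookup,
   then advances the per-REP pointer so consecutive messages spread over its queues. */
uint16_t resolve_queue_id(const mac_hash_entry *hit) {
    rep_node_addr_entry *cache = &node_addr_table[hit->rep_id];

    /* First message of this REP: the cached node address is not yet valid, so the
       first address (s_addr) recorded in the MAC hash entry starts the walk.       */
    uint16_t node_addr = cache->valid ? cache->node_addr : hit->start_addr;
    const queue_list_node *node = &list_ram[node_addr];

    /* Remember the next node of the ring for the following message of this REP,
       and mark the cached address valid so later lookups skip the s_addr path.     */
    cache->node_addr = node->next_addr;
    cache->valid     = true;

    return node->queue_id;
}
```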
  • the queue circular linked list includes N circular linked lists, where N is an integer greater than 1, and multiple queue identifiers in the same circular linked list correspond to the same REP identifier.
  • That is, one REP identifier corresponds to one circular linked list, and the multiple queue identifiers in that circular linked list all correspond to that REP identifier.
  • After the target REP identifier is obtained and the target node address corresponding to the target REP identifier is queried in the circular linked list node address table, the target queue identifier can be queried, through the target node address, in the target entry of the circular linked list corresponding to the target REP identifier.
  • Different REP identifiers correspond to different circular linked lists, so queue identifiers can be queried in different circular linked lists, preventing queue identifier query tasks from being concentrated in a single circular linked list and improving query stability and accuracy.
  • The method further includes:
  • in the case of receiving a new-node instruction for a first circular linked list among the N circular linked lists, storing the newly added entry carried by the instruction at a free first address in the storage space of the queue circular linked list, where the instruction indicates adding a node between a first node and a second node of the first circular linked list, the first circular linked list corresponds to a first REP identifier, and the newly added entry includes the first REP identifier and a corresponding newly added queue identifier; and updating the next address in the entry of the first node to the first address, and updating the next address in the newly added entry to the second address of the second node.
  • That is, new nodes can be added to the circular linked list to update the circular linked list: the address of the newly added node is the first address, the entry of the newly added node is the newly added entry with its next address updated, and the newly added entry includes the first REP identifier and the corresponding newly added queue identifier. In this way, a queue can be added dynamically, meeting the requirement of adding queues.
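  • A minimal, self-contained sketch of such an insertion into a circular linked list stored as entries linked by next addresses, as in FIG. 5; the names and sizes are illustrative. Writing the new entry before redirecting the first node keeps the ring closed, matching the later statement that the lookup function is not disturbed while a node is added:

```c
#include <stdint.h>

#define LIST_RAM_DEPTH 2048            /* storage space able to hold all queue entries */

typedef struct {
    uint16_t rep_id;                   /* REP identifier owning this queue             */
    uint16_t queue_id;                 /* queue identifier stored in the entry         */
    uint16_t next_addr;                /* address of the next node in the same list    */
} queue_list_node;

static queue_list_node list_ram[LIST_RAM_DEPTH];

/* Insert a new entry at free address new_addr, between first_addr and its current
   successor (the "second node"); returns the address of the new node.              */
uint16_t circular_list_insert(uint16_t first_addr, uint16_t new_addr,
                              uint16_t rep_id, uint16_t queue_id) {
    uint16_t second_addr = list_ram[first_addr].next_addr;  /* second node address    */

    /* 1. Write the new entry so its next address already points at the second node:
          a lookup that races with the update still walks a closed ring.             */
    list_ram[new_addr].rep_id    = rep_id;
    list_ram[new_addr].queue_id  = queue_id;
    list_ram[new_addr].next_addr = second_addr;

    /* 2. Only then redirect the first node to the new node.                         */
    list_ram[first_addr].next_addr = new_addr;
    return new_addr;
}
```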
  • The method further includes:
  • in the case of receiving a delete instruction for a target node of a second circular linked list among the N circular linked lists, deleting the target node from the second circular linked list, and updating the next address in the entry of a third node of the second circular linked list to a third address;
  • where the third address is the next address in the entry of the target node, and the third node is the node whose next address, before the update, is the address of the target node in the second circular linked list.
  • That is, nodes can be deleted from the circular linked list to update the circular linked list. In this way, a queue can be deleted dynamically, meeting the requirement of deleting queues.
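  • A minimal, self-contained sketch of such a deletion, as in FIG. 6; the names and sizes are illustrative:

```c
#include <stdint.h>

#define LIST_RAM_DEPTH 2048

typedef struct {
    uint16_t rep_id;
    uint16_t queue_id;
    uint16_t next_addr;                /* address of the next node in the same list */
} queue_list_node;

static queue_list_node list_ram[LIST_RAM_DEPTH];

/* Remove the node at target_addr from its circular list. prev_addr is the node
   whose next address currently equals target_addr (the "third node" above).     */
void circular_list_delete(uint16_t prev_addr, uint16_t target_addr) {
    /* Point the predecessor past the target node (to the "third address"), so the
       ring stays closed; the target entry can then be returned to the free pool. */
    list_ram[prev_addr].next_addr = list_ram[target_addr].next_addr;
}
```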
  • The embodiments of the present disclosure implement queue mapping in a two-stage query manner: the first-stage query matches the mapping relationship between the message and the REP identifier, and the second-stage query looks up the queue identifier in the circular linked list corresponding to that REP identifier.
  • For the first-stage query, hash table mapping is adopted. Because the number of supported MAC addresses is fixed, each MAC address corresponds to one REP on the SOC side and to one PF or VF on the Host side, and one PF or VF can correspond to multiple queues. In the queue mapping process, the 48-bit destination MAC address needs to be mapped to a supported REP identifier. Therefore, a first storage space (RAM) on the FPGA side holds the first hash table, whose main purpose is to find the direct mapping between the destination MAC address of the received message and the REP; a second storage space on the FPGA side holds the second hash table.
  • The second hash table can be understood as a hash bucket table: if the destination MAC address is not found in the first hash table, the query continues in the second hash table. In the ideal case there is no hash conflict (the hash value corresponding to each MAC address is different) and the first query in the first hash table is matched; when there is a hash conflict (different MAC addresses hash to the same value), the next address corresponding to the target hash value of the destination MAC address in the first hash table is used to query the second hash table, and so on until the result is found, all hash conflicts being placed in the second hash table.
  • The SOC side can pre-acquire the supported MAC addresses, for example 256 MAC addresses, obtain 256 hash values through hashing, and obtain the corresponding information groups to build the first hash table, where one hash value and its corresponding information group constitute one information entry of the first hash table. If the hash values of the 256 MAC addresses are all different, the constructed first hash table includes 256 information entries, and in this case the second hash table is not constructed.
  • When hash conflicts exist, the first hash table is first constructed from the distinct hash values and their corresponding information groups, including X information entries (corresponding to X MAC addresses), where X is a positive integer less than or equal to 256.
  • The second information group of a MAC address is associated with a hash address to obtain the second hash table; if the next address in the second information group of a MAC address is not empty, that next address is the hash address of the next MAC address having the same hash value as this MAC address.
  • Broadcast messages are handled specially on the logic side: no table lookup is performed, and one copy is delivered to each PF.
  • The hash algorithm can be a byte-XOR operation, or a more complex cyclic redundancy check (CRC) algorithm can be used to spread the hash results. Because the number of supported MAC addresses is fixed, a relatively simple hash algorithm is recommended.
  • For the second-stage query, the queue identifier is selected in the form of a circular linked list. For example, a storage space with a depth able to accommodate the total number of queues can be instantiated, and the identifiers of all queues are stored in this storage space. In this way, the internal resources of the FPGA are acceptable and the space utilization is high, without wasting resources.
  • The queues are shared by all REPs, but at any moment a queue can be allocated to only one REP; in other words, two different REPs never share the same queue. The queues belonging to one REP are linked together in the form of a circular linked list in the SOC-side memory and flashed into the internal storage space of the FPGA, so that the FPGA-internal storage space holds all the queue identifiers (Q id) dynamically allocated to the REPs, improving resource utilization.
  • When the first message of a REP hits a match in the logic lookup, the first address of the circular linked list corresponding to that REP is obtained, and the queue identifier for the first message is queried.
  • The address of the linked list node is then recorded in another storage space through the circular linked list node address table, so that subsequent messages obtain the queue identifiers of that REP identifier (REP id) from its circular linked list in turn, and the messages are sent to the Host side.
  • Whether a node address is valid is indicated in the circular linked list node address table (cache table): valid indication information is set in the table, where the first preset value means invalid and the second preset value means valid. Therefore, whether the node address of a REP is valid can be judged through the valid indication information (valid) before transmitting the first message.
  • This method does not affect message forwarding performance and achieves pipelined processing.
  • The processing delay is the time needed to read the storage space, which can generally be designed to be one clock cycle.
  • When a hash conflict occurs, one extra lookup is added for querying the hash bucket, but as long as the depth of the hash bucket is limited this does not affect the overall forwarding performance. If a performance bottleneck appears here, the lookup performance can be improved by adding a pipeline for the hash-conflict RAM, or by fetching 2 or 3 hash buckets in the first-stage query, that is, by trading resources for performance. After evaluation, the currently disclosed scheme meets the performance requirements.
  • the number of MAC addresses supported is 256, and the number of queues on the Host side is 2048.
  • the internal implementation of the FPGA is shown in Figure 3: the received message enters the FPGA through the SOC side, and the logic side parses out the destination MAC address.
  • The 48-bit destination MAC address is mapped through the hash algorithm to an 8-bit lookup key (the target hash value). For example, if the target hash value is 100, then 100 is used as the lookup address into the first hash table; suppose the result shows no hit.
  • The next address (n_addr) in the entry corresponding to 100 in the first hash table points to address 5 of the second hash table, so the query continues at address 5 of the second hash table.
  • The next address (n_addr) in the entry at address 5 points to address 255, so the query continues at address 255 of the second hash table, where a match is hit, that is, the MAC address in the entry at address 255 is the destination MAC address.
  • The corresponding target REP identifier, effective indication information and first address are then obtained from that entry of the second hash table; for example, the target REP identifier is 128. If the effective indication information is the second preset value, indicating valid, the first address is not needed, and the circular linked list node address table is queried with the target REP identifier 128 to obtain the corresponding target node address.
  • The queue identifier (Q id) in the corresponding entry of the queue circular linked list (address 255 in this example) is read, and at the same time the next address (next_addr) in that entry is written into the circular linked list node address table as the address corresponding to the target REP identifier 128; at this point the effective enable bit of the valid indication information is used for the linked-list query of subsequent messages, so that a pipelined effect is achieved. If it is the first message, the address for the REP identifier in the circular linked list node address table is found to be invalid; in that case the first address (start address, s_addr) in the first hash table is used as the address for querying the circular linked list, and the effective indication information is updated to indicate that the address is valid.
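  • Expressed against the earlier illustrative sketches, the walk above would look roughly like this (the values, names and helper functions are illustrative only):

```c
/* Illustrative trace tying the sketches together: first_table[100] misses, its
   next_addr chains to second_table[5], which chains to second_table[255], whose
   MAC matches, so REP identifier 128 is read from that entry and a queue of that
   REP's circular linked list is selected.                                        */
void handle_message(const uint8_t dst_mac[6]) {
    mac_hash_entry hit;
    if (lookup_rep(dst_mac, &hit)) {                 /* two-level MAC hash lookup       */
        uint16_t queue_id = resolve_queue_id(&hit);  /* e.g. REP 128 -> current queue   */
        (void)queue_id;   /* the message would be transmitted into Host queue queue_id */
    }
    /* a miss would mean the destination MAC is not among the supported addresses */
}
```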
  • In a circular linked list, address 0 to address 3 each store one entry: the next address (n_addr) in the entry at address 0 points to address 1, the next address (n_addr) in the entry at address 1 points to address 2, the next address (n_addr) in the entry at address 2 points to address 3, and the next address (n_addr) in the entry at address 3 points to address 0, thus forming a circular linked list.
  • New nodes can be added: address 0, address 1, address 2 and address 3 form a circular linked list. If a new queue is added for the REP identifier at this time, the new entry of the newly added queue needs to be added to the storage space storing the circular linked list, for example at address 4.
  • The next address (n_addr) in the new entry points to address 0, and the next address (n_addr) in the entry at address 3 is changed to point to address 4, thereby forming a new circular linked list in which the entries at address 0, address 1, address 2, address 3 and address 4 are strung together in the form of a linked list.
  • The storage locations of address 0, address 1, address 2, address 3 and address 4 can be anywhere in the allocated storage space (RAM), as long as the next addresses (n_addr) in the entries of the linked list loop from one node to the next to form a circular linked list; the lookup function is not disturbed during the addition, and the dynamic queue-addition function is realized.
  • In this way, the number of queues reserved on the SOC side can be effectively saved, because each queue needs a corresponding memory space for sending and receiving network messages; this also frees memory space on the SOC side and increases the number of queues that can be reserved.
  • Bandwidth utilization is improved, the number of queues that the FPGA side needs to support is correspondingly reduced, and FPGA resource occupation is also released, so that more functions can be offloaded to the FPGA for acceleration; timing is optimized as resources decrease, allowing the FPGA main frequency to run higher.
  • The dynamic queue mapping that used to be done in software is offloaded to the FPGA, which avoids the software operations of parsing the destination MAC address from the message and looking up tables to calculate the key value. This improves the efficiency of sending and receiving messages in software, thereby improving overall DPU performance, improving access performance to the host's physical machines and virtual machines, and alleviating congestion under high-frequency host access.
  • The present disclosure also provides a message processing apparatus 700, which includes:
  • a first acquisition module 701 configured to acquire a first message, where the first message includes a destination media access control MAC address;
  • a hash processing module 702 configured to perform hash calculation on the destination MAC address to obtain a target hash value;
  • a first identifier acquisition module 703 configured to, using the target hash value, in the case that the destination MAC address is found in a pre-acquired MAC address hash table, obtain the target proxy port REP identifier corresponding to the destination MAC address in the MAC address hash table;
  • a second identifier acquisition module 704 configured to obtain the target queue identifier corresponding to the target REP identifier;
  • a transmission module 705 configured to transmit the first message to the queue corresponding to the target queue identifier.
  • The second identifier acquisition module includes:
  • a first search module configured to query the target node address corresponding to the target REP identifier in a pre-acquired circular linked list node address table, where the circular linked list node address table includes a REP identifier sequence and a circular linked list node address sequence corresponding to the REP identifier sequence;
  • a queue identifier acquisition module configured to obtain the target queue identifier from the target entry of a pre-acquired queue circular linked list, where the queue circular linked list includes the addresses of M nodes and the entries of the nodes, the addresses of the M nodes include the circular linked list node address sequence, M is an integer greater than 1, any entry includes a queue identifier, the target entry is the entry of the target node, and the target node is the node corresponding to the target node address among the M nodes.
  • The MAC address hash table includes a first hash table and a second hash table;
  • the first hash table includes a first hash value sequence and a first information group sequence corresponding to the first hash value sequence, and any information group in the first information group sequence includes a MAC address, a REP identifier and a next address;
  • the second hash table includes a hash address sequence and a second information group sequence corresponding to the hash address sequence, and any information group in the second information group sequence includes a MAC address, a REP identifier and a next address;
  • obtaining the target proxy port REP identifier corresponding to the destination MAC address in the MAC address hash table includes any one of the following:
  • using the target hash value, in the case that the destination MAC address is found in the first hash table, obtaining the target REP identifier from the first hash table; or, using the target hash value, in the case that the destination MAC address is not found in the first hash table, and, according to a target next address, the destination MAC address is found in the second hash table, obtaining the target REP identifier from the second hash table, where the target next address is the next address corresponding to the target hash value in the first hash table.
  • The first identifier acquisition module includes:
  • an address processing module configured to, using the target hash value, in the case that the destination MAC address is not found in the first hash table, use the target next address as the current query address;
  • a determining module configured to determine the current information group based on the current query address, where the current information group is the information group corresponding to the first hash address in the second information group sequence, and the first hash address is the hash address in the hash address sequence that equals the current query address;
  • a first update module configured to, in the case that the destination MAC address is not found in the current information group of the second hash table, update the current query address to the next address in the current information group, return to the operation of determining the current information group based on the current query address, and query again in the current information group of the second hash table, until the destination MAC address is found in the current information group of the second hash table, and obtain the target REP identifier from that current information group.
  • The queue identifier acquisition module includes:
  • an address acquisition module configured to, in the case that the effective indication information corresponding to the target REP identifier in the MAC address hash table is a first preset value, obtain the first address corresponding to the target REP identifier from the MAC address hash table and determine the first address as the target node address;
  • and, in the case that the effective indication information is a second preset value, acquire the target node address corresponding to the target REP identifier from the circular linked list node address table.
  • Any entry further includes a next address,
  • and the apparatus further includes:
  • a second update module configured to, after the queue identifier acquisition module obtains the target queue identifier from the target entry of the pre-acquired queue circular linked list, use the next address in the target entry as the node address of the target REP identifier in the circular linked list node address table;
  • and, in the case that the effective indication information is the first preset value, update the effective indication information to the second preset value.
  • the queue circular linked list includes N circular linked lists, where N is an integer greater than 1, and multiple queue identifiers in the same circular linked list correspond to the same REP identifier.
  • The apparatus further includes:
  • a storage module configured to, in the case of receiving a new-node instruction for a first circular linked list among the N circular linked lists, store the newly added entry carried by the instruction at a free first address in the storage space of the queue circular linked list,
  • where the instruction indicates adding a node between a first node and a second node of the first circular linked list, the first circular linked list corresponds to a first REP identifier, and the newly added entry includes the first REP identifier and a corresponding newly added queue identifier;
  • a third update module configured to update the next address in the entry of the first node to the first address, and update the next address in the newly added entry to the second address of the second node.
  • The apparatus further includes:
  • a deletion module configured to, in the case of receiving a delete instruction for a target node of a second circular linked list among the N circular linked lists, delete the target node from the second circular linked list and update the next address in the entry of a third node of the second circular linked list to a third address;
  • where the third address is the next address in the entry of the target node, and the third node is the node whose next address, before the update, is the address of the target node in the second circular linked list.
  • the message processing device of each of the above embodiments is a device for implementing the message processing method of each of the above embodiments, and the technical features and technical effects correspond to each other, and will not be repeated here.
  • the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.
  • the non-transitory computer-readable storage medium in the embodiments of the present disclosure stores computer instructions, and the computer instructions are used to cause a computer to execute the message processing method provided in the present disclosure.
  • the computer program product of the embodiments of the present disclosure includes a computer program, and the computer program is used to cause a computer to execute the message processing method provided by each embodiment of the present disclosure.
  • FIG. 8 shows a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure.
  • Electronic device is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers.
  • Electronic devices may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smart phones, wearable devices, and other similar computing devices.
  • the components shown herein, their connections and relationships, and their functions, are by way of example only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
  • The electronic device 800 includes a computing unit 801, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 802 or loaded from a storage unit 808 into a random access memory (RAM) 803.
  • In the RAM 803, various programs and data necessary for the operation of the electronic device 800 can also be stored.
  • the computing unit 801, ROM 802, and RAM 803 are connected to each other through a bus 804.
  • An input/output (I/O) interface 805 is also connected to the bus 804 .
  • the I/O interface 805 includes: an input unit 806, such as a keyboard, a mouse, etc.; an output unit 807, such as various types of displays, speakers, etc.; a storage unit 808, such as a magnetic disk, an optical disk etc.; and a communication unit 809, such as a network card, a modem, a wireless communication transceiver, and the like.
  • the communication unit 809 allows the electronic device 800 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
  • The computing unit 801 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, microcontroller, and the like.
  • the computing unit 801 executes various methods and processes described above, such as message processing methods.
  • the message processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 808 .
  • part or all of the computer program may be loaded and/or installed on the device 800 via the ROM 802 and/or the communication unit 809.
  • When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the message processing method described above can be performed.
  • the computing unit 801 may be configured to execute the packet processing method in any other appropriate manner (for example, by means of firmware).
  • Various implementations of the systems and techniques described above can be implemented in digital electronic circuit systems, integrated circuit systems, field programmable gate arrays (FPGA), application specific integrated circuits (ASIC), application specific standard products (ASSP), systems on chip (SOC), complex programmable logic devices (CPLD), computer hardware, firmware, software, and/or combinations thereof.
  • The programmable processor may be a special-purpose or general-purpose programmable processor that can receive data and instructions from a storage system, at least one input device and at least one output device, and transmit data and instructions to the storage system, the at least one input device and the at least one output device.
  • Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. Such program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer or other programmable data processing device, so that the program code, when executed by the processor or controller, causes the functions/actions specified in the flowcharts and/or block diagrams to be implemented.
  • The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium can be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer discs, hard drives, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • To provide interaction with a user, the systems and techniques described herein can be implemented on a computer having a display device (e.g., a cathode ray tube (CRT) or liquid crystal display (LCD)) for displaying information to the user, and a keyboard and pointing device (e.g., a mouse or trackball) by which the user can provide input to the computer.
  • Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user can be received in any form (including acoustic input, speech input, or tactile input).
  • The systems and techniques described herein can be implemented in a computing system that includes back-end components (e.g., as a data server), or a computing system that includes middleware components (e.g., an application server), or a computing system that includes front-end components (e.g., a user computer having a graphical user interface or web browser through which a user can interact with embodiments of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components.
  • The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), the Internet, and blockchain networks.
  • a computer system may include clients and servers.
  • Clients and servers are generally remote from each other and typically interact through a communication network.
  • the relationship of client and server arises by computer programs running on the respective computers and having a client-server relationship to each other.
  • The server can be a cloud server, also known as a cloud computing server or cloud host, which is a host product in the cloud computing service system that solves the defects of traditional physical hosts and virtual private server (VPS) services, such as difficult management and weak business scalability.
  • the server can also be a server of a distributed system, or a server combined with a blockchain.
  • steps may be reordered, added or deleted using the various forms of flow shown above.
  • The steps described in the present disclosure may be executed in parallel, sequentially or in a different order, as long as the desired result of the technical solution disclosed in the present disclosure can be achieved; no limitation is imposed herein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Small-Scale Networks (AREA)
  • Computer And Data Communications (AREA)

Abstract

The present disclosure provides a message processing method and apparatus, and an electronic device, and relates to cloud computing technology; the method can be applied to cloud platforms, cloud servers and the like. The specific solution is: acquiring a first message; performing a hash calculation on the destination MAC address in the first message to obtain a target hash value; where the destination MAC address is found in a pre-acquired MAC address hash table by using the target hash value, acquiring a target proxy port REP identifier corresponding to the destination MAC address in the MAC address hash table; acquiring a target queue identifier corresponding to the target REP identifier; and transmitting the first message to the queue corresponding to the target queue identifier.

Description

Message processing method and apparatus, and electronic device
CROSS-REFERENCE TO RELATED APPLICATION
The present disclosure claims priority to Chinese Patent Application No. 202111590332.6, filed in China on December 23, 2021, the entire contents of which are incorporated herein by reference.
技术领域
本公开涉及云计算技术领域,尤其涉及一种报文处理方法、装置及电子设备。
背景技术
在云服务器上插入数据处理单元(Data Processing Unit,DPU)卡,可将主机(Host)中的网络核、云盘核、管理核工作卸载到DPU卡上,消除了主机侧的核占用,起到提升网络性能的作用,并可提升可售卖核的数量。由此在云服务器中虚拟出多个弹性网卡,每个弹性网卡有一个媒体接入控制(Media Access Control,MAC)地址,全局唯一标识,它是一个用来确认网络设备位置的位址;每个弹性网卡的物理功能(Physical Function,PF)会根据业务需求可虚拟出若干个虚拟功能(Virtual Function,VF),每个VF也会被分配全局唯一MAC地址的标识。每个VF或PF通过DPU都会动态分配队列数量,DPU卡支持一定数量的队列数量,所有VF和PF会共享这些队列,当需要的时候由软件动态进行分配。当DPU的网卡接收到网络中的报文时,需要根据MAC地址确定该报文是传输到哪个VF或PF,相当于MAC地址与VF和PF存在一一映射的关系,在软件中存在代理端口(Represent Port,REP)与主机侧PF和VF一一对应,在确定哪个VF或PF后,由于一个VF或一个PF可分配多个队列,需要确定报文传输到其中哪个队列。
At present, when a message is received, the queue lookup is performed as a centralized query in a single table, and the mapping in that table is usually implemented as a bitmap: among all queues, the bits corresponding to the queues that belong to a given REP are set to 1, and the bits corresponding to the queues that do not belong to that REP are set to 0. For example, with 2048 queues, 2048 bits are needed to represent whether each queue is allocated to that REP, so a single entry of the table is at least 256 bytes.
SUMMARY
The present disclosure provides a message processing method and apparatus, and an electronic device.
In a first aspect, an embodiment of the present disclosure provides a message processing method, the method including:
acquiring a first message, where the first message includes a destination media access control (MAC) address;
performing a hash calculation on the destination MAC address to obtain a target hash value;
where the destination MAC address is found in a pre-acquired MAC address hash table by using the target hash value, acquiring a target proxy port REP identifier corresponding to the destination MAC address in the MAC address hash table;
acquiring a target queue identifier corresponding to the target REP identifier; and
transmitting the first message to a queue corresponding to the target queue identifier.
In this embodiment, after the target hash value of the destination MAC address is obtained, and where the destination MAC address is found in the MAC address hash table by using the target hash value, it suffices to look up the target REP identifier corresponding to the destination MAC address in the MAC address hash table and then acquire the corresponding target queue identifier based on that target REP identifier, thereby implementing the queue mapping and transmitting the first message to the queue corresponding to the target queue identifier. In this way, there is no need to query a space-consuming table that uses as many bits as there are queues to represent which queues are allocated to a given REP. The method of this embodiment uses a MAC address hash table, which reduces the space occupied by the table. Where the destination MAC address is found in the pre-acquired MAC address hash table, the target REP identifier corresponding to the destination MAC address is acquired from the MAC address hash table, and the corresponding target queue identifier is then acquired based on the target REP identifier, which also avoids performing the destination MAC address lookup, the target REP identifier lookup and the target queue identifier lookup all in one table. This improves the stability of the queue identifier lookup and thus the stability of transmitting the first message.
In a second aspect, an embodiment of the present disclosure provides a message processing apparatus, the apparatus including:
a first acquisition module, configured to acquire a first message, where the first message includes a destination media access control (MAC) address;
a hash processing module, configured to perform a hash calculation on the destination MAC address to obtain a target hash value;
a first identifier acquisition module, configured to, where the destination MAC address is found in a pre-acquired MAC address hash table by using the target hash value, acquire a target proxy port REP identifier corresponding to the destination MAC address in the MAC address hash table;
a second identifier acquisition module, configured to acquire a target queue identifier corresponding to the target REP identifier; and
a transmission module, configured to transmit the first message to a queue corresponding to the target queue identifier.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including:
at least one processor; and
a memory communicatively connected to the at least one processor; where
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the message processing method provided in the first aspect of the present disclosure.
In a fourth aspect, an embodiment of the present disclosure further provides a non-transitory computer-readable storage medium storing computer instructions, where the computer instructions are used to cause a computer to perform the message processing method provided in the first aspect of the present disclosure.
In a fifth aspect, an embodiment of the present disclosure provides a computer program product including a computer program, where the computer program, when executed by a processor, implements the message processing method provided in the first aspect of the present disclosure.
It should be understood that the content described in this section is not intended to identify key or important features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will become easy to understand from the following description.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings are used for a better understanding of the solution and do not constitute a limitation of the present disclosure, in which:
Fig. 1 is a first schematic flowchart of a message processing method according to an embodiment of the present disclosure;
Fig. 2 is a second schematic flowchart of a message processing method according to an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of the principle of a message processing method according to an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of a circular linked list according to an embodiment of the present disclosure;
Fig. 5 is a schematic diagram of adding a node to a circular linked list according to an embodiment of the present disclosure;
Fig. 6 is a schematic diagram of deleting a node from a circular linked list according to an embodiment of the present disclosure;
Fig. 7 is a structural diagram of a message processing apparatus according to an embodiment of the present disclosure;
Fig. 8 is a block diagram of an electronic device used to implement the message processing method of an embodiment of the present disclosure.
DETAILED DESCRIPTION
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, including various details of the embodiments of the present disclosure to facilitate understanding; they should be regarded as merely exemplary. Therefore, those of ordinary skill in the art should recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, descriptions of well-known functions and structures are omitted from the following description for clarity and conciseness.
As shown in Fig. 1, according to an embodiment of the present disclosure, a message processing method is provided, the method including:
Step S101: acquiring a first message, where the first message includes a destination media access control (MAC) address.
The method can be applied to a cloud server. The cloud server includes a host (Host) and a DPU card, and the DPU card includes a field programmable gate array (FPGA), a system on chip (SOC) and a network interface card; the first message can be received through the network interface card, and the message processing method of this embodiment can be executed by the FPGA. It should be noted that the queues are located on the Host side and a received first message is placed into one of the queues on the Host side, so it is first necessary to determine into which queue the first message is placed, that is, a queue lookup (queue mapping) needs to be performed.
Step S102: performing a hash calculation on the destination MAC address to obtain a target hash value.
The destination MAC address can be obtained by parsing the first message, and the target hash value is obtained by performing a hash calculation on the destination MAC address with a hash algorithm. It should be noted that the hash algorithm used to hash the destination MAC address is the same as the hash algorithm used to hash the MAC addresses when the MAC address hash table is initialized in advance. There are many kinds of hash algorithms, and the present disclosure does not limit which hash algorithm is used.
Step S103: where the destination MAC address is found in a pre-acquired MAC address hash table by using the target hash value, acquiring a target proxy port REP identifier corresponding to the destination MAC address in the MAC address hash table.
It should be noted that the MAC address hash table includes a hash value sequence, a MAC address sequence and a REP identifier sequence, and these three sequences correspond to one another. The MAC address hash table is initialized (created) in advance by using the multiple supported MAC addresses, the hash values of those MAC addresses, and the REP identifiers corresponding to those MAC addresses; for example, the SOC may perform the initialization of the tables in the embodiments of the present disclosure. During message processing, the created MAC address hash table can be acquired in advance, and the destination MAC address is then looked up in the pre-acquired MAC address hash table by using the target hash value. Where the destination MAC address is found in the MAC address hash table, it can be understood that the destination MAC address is one of the supported MAC addresses, so the lookup can continue in the MAC address hash table for the target REP identifier corresponding to the destination MAC address, to obtain the target REP identifier.
Step S104: acquiring a target queue identifier corresponding to the target REP identifier.
After the target REP identifier is obtained, the corresponding target queue identifier is acquired based on the target REP identifier, implementing the queue mapping. It should be noted that the relationship between a REP identifier and queue identifiers may be one-to-many, and the above target queue identifier is one of the multiple queue identifiers corresponding to the target REP identifier.
Step S105: transmitting the first message to a queue corresponding to the target queue identifier.
After the target queue identifier is determined, the message can be delivered into the queue corresponding to the target queue identifier, so that the message can be read from the queue for corresponding processing. It should be noted that the queues are located on the Host; after obtaining the target queue identifier, the FPGA delivers the first message into the queue on the Host corresponding to the target queue identifier.
In this embodiment, after the target hash value of the destination MAC address is obtained, and where the destination MAC address is found in the MAC address hash table by using the target hash value, it suffices to look up the target REP identifier corresponding to the destination MAC address in the MAC address hash table and then acquire the corresponding target queue identifier based on the target REP identifier found in the MAC address hash table, thereby implementing the queue mapping and transmitting the first message to the queue corresponding to the target queue identifier. In this way, there is no need to query a space-consuming table that uses as many bits as there are queues to represent which queues are allocated to a given REP. The method of this embodiment uses a MAC address hash table, which reduces the space occupied by the table. Where the destination MAC address is found in the pre-acquired MAC address hash table, the target REP identifier corresponding to the destination MAC address is acquired from the MAC address hash table, and the corresponding target queue identifier is then acquired based on the target REP identifier, which also avoids performing the destination MAC address lookup, the target REP identifier lookup and the target queue identifier lookup all in one table. This improves the stability of the queue identifier lookup and thus the stability of transmitting the first message.
In one embodiment, acquiring the target queue identifier corresponding to the target REP identifier includes:
looking up, in a pre-acquired circular linked list node address table, a target node address corresponding to the target REP identifier, where the circular linked list node address table includes a REP identifier sequence and a circular linked list node address sequence corresponding to the REP identifier sequence; and
acquiring the target queue identifier from a target entry of a pre-acquired queue circular linked list, where the queue circular linked list includes the addresses of M nodes and an entry for each node, the addresses of the M nodes include the circular linked list node address sequence, M is an integer greater than 1, any entry includes a queue identifier, the target entry is the entry of a target node, and the target node is the node, among the M nodes, corresponding to the target node address.
It can be understood that this embodiment provides a message processing method, as shown in Fig. 2, including:
Step S201: acquiring a first message, where the first message includes a destination media access control (MAC) address.
Step S202: performing a hash calculation on the destination MAC address to obtain a target hash value.
Step S203: where the destination MAC address is found in a pre-acquired MAC address hash table by using the target hash value, acquiring a target proxy port REP identifier corresponding to the destination MAC address in the MAC address hash table.
Step S204: looking up, in a pre-acquired circular linked list node address table, a target node address corresponding to the target REP identifier.
The circular linked list node address table includes a REP identifier sequence and a circular linked list node address sequence corresponding to the REP identifier sequence.
Step S205: acquiring the target queue identifier from a target entry of a pre-acquired queue circular linked list.
The queue circular linked list includes the addresses of M nodes and an entry for each node, the addresses of the M nodes include the circular linked list node address sequence, M is an integer greater than 1, any entry includes a queue identifier, the target entry is the entry of a target node, and the target node is the node, among the M nodes, corresponding to the target node address.
Step S206: transmitting the first message to a queue corresponding to the target queue identifier.
Steps S201-S203 correspond one-to-one to steps S101-S103, and step S206 corresponds to step S105; they are not repeated here.
The SOC can initialize the queue circular linked list and the circular linked list node address table in advance, and the two tables are associated with each other. It can be understood that the queue circular linked list is a circular linked list formed by linking M nodes; each node includes the address of the node and the entry of the node, any entry includes a queue identifier, so it can be regarded as a circular linked list of queues, and the address of a node can also be understood as the address where the entry of that node is stored. The circular linked list node address table includes a circular linked list node address sequence and a corresponding REP identifier sequence. It can be understood that any identifier in the REP identifier sequence corresponds to one circular linked list node address in the circular linked list node address sequence, the circular linked list node addresses corresponding to different identifiers in the REP identifier sequence are different, and any circular linked list node address in the circular linked list node address sequence is the address of one node in the queue circular linked list. The sequence may include the addresses of N nodes, where N is an integer greater than 1 and less than M, and the circular linked list node addresses in the circular linked list node address sequence belong to the addresses of the M nodes. The target node address mentioned above is the circular linked list node address corresponding to the target REP identifier in the circular linked list node address table. A minimal sketch of these two associated tables is given after this paragraph.
In this embodiment, the target REP identifier is first looked up in the MAC address hash table, the target node address corresponding to the target REP identifier is then looked up in the circular linked list node address table, and the target queue identifier is then acquired from the target entry of the queue circular linked list, implementing the queue mapping. In this way, the target REP identifier lookup is handled by the MAC address hash table, the target node address lookup is handled by the circular linked list node address table, and the target queue identifier lookup is handled by the queue circular linked list. This enables a pipelined lookup and avoids concentrating the lookups in a single table, which would affect lookup stability; thus the stability of the queue lookup is improved, and transmitting the first message to the queue found by the lookup improves the stability of the transmission of the first message.
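The following is a minimal C sketch of the two associated tables used by the second-level lookup; it is a software illustration rather than the disclosed FPGA logic, and the type names, field names (queue_id, next_addr), table sizes and the direct indexing of the node address table by REP identifier are all assumptions made for the example.

```c
#include <stdint.h>

#define MAX_QUEUES 2048   /* total queues shared by all REPs (assumed) */
#define MAX_REPS   256    /* number of supported REP identifiers (assumed) */

/* One node of the queue circular linked list: the queue identifier it
 * carries and the address (index) of the next node of the same REP. */
typedef struct {
    uint16_t queue_id;
    uint16_t next_addr;
} queue_node_t;

/* Storage space holding the nodes of every REP's circular linked list. */
static queue_node_t queue_ring[MAX_QUEUES];

/* Circular linked list node address table: for each REP identifier it
 * records the address of the node to be used for the next message. */
static uint16_t rep_node_addr[MAX_REPS];

/* Second-level lookup: the target node address obtained for the REP is used
 * to read the queue identifier from the queue circular linked list. */
uint16_t lookup_queue_id(uint16_t rep_id)
{
    uint16_t node_addr = rep_node_addr[rep_id];  /* target node address */
    return queue_ring[node_addr].queue_id;       /* target queue identifier */
}
```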
In one embodiment, the MAC address hash table includes a first hash table and a second hash table; the first hash table includes a first hash value sequence and a first information group sequence corresponding to the first hash value sequence, where any information group in the first information group sequence includes a MAC address, a REP identifier and a next address; and the second hash table includes a hash address sequence and a second information group sequence corresponding to the hash address sequence, where any information group in the second information group sequence includes a MAC address, a REP identifier and a next address;
where, acquiring the target proxy port REP identifier corresponding to the destination MAC address in the MAC address hash table, where the destination MAC address is found in the pre-acquired MAC address hash table by using the target hash value, includes any one of the following:
where the destination MAC address is found in the first hash table by using the target hash value, acquiring the target REP identifier in the first hash table; or
where the destination MAC address is not found in the first hash table by using the target hash value, and the destination MAC address is found in the second hash table based on a target next address, acquiring the target REP identifier from the second hash table, the target next address being the next address corresponding to the target hash value in the first hash table.
It should be noted that the second hash table can be understood as a hash bucket. The destination MAC address is first looked up in the first hash table by using the target hash value; if the destination MAC address is found, the target REP identifier corresponding to the destination MAC address is acquired directly in the first hash table. If the destination MAC address is not found, the target next address corresponding to the target hash value needs to be obtained from the first hash table, and the lookup continues in the second hash table based on the target next address; where the destination MAC address is found in the second hash table, the target REP identifier is acquired from the second hash table. In this way, the lookup load of the MAC address hash table can be split between the first hash table and the second hash table, the lookup of the target REP identifier is not concentrated in a single table, a pipelined lookup is achieved, the lookup stability is improved, and the stability of transmitting the first message is thus improved.
In one embodiment, where the destination MAC address is not found in the first hash table by using the target hash value, and the destination MAC address is found in the second hash table based on the target next address, acquiring the target REP identifier from the second hash table includes:
where the destination MAC address is not found in the first hash table by using the target hash value, taking the target next address as a current query address;
determining a current information group based on the current query address, where the current information group is the information group corresponding to a first hash address in the second information group sequence, and the first hash address is the current query address in the first hash address list; and
where the destination MAC address is not found in the current information group of the second hash table, updating the current query address to the next address in the current information group, and returning to the step of determining the current information group based on the current query address to query again, until, where the destination MAC address is found in the current information group of the second hash table, acquiring the target REP identifier from the current information group.
Since different MAC addresses may yield the same hash value after hashing, MAC addresses with the same hash value are linked by the next addresses in the hash tables; for example, multiple MAC addresses with the same hash value are linked one after another according to their corresponding next addresses. In this embodiment, where the destination MAC address is not found in the first hash table, the target next address is first taken as the current query address, the current query address is used to perform a lookup in the second hash table to obtain its corresponding current information group, and it is determined whether the MAC address in the current information group of the second hash table matches the destination MAC address. If they do not match, the destination MAC address has not been found in the current information group; at this point the current query address is updated to the next address in the current information group (the second information group located at that next address contains a MAC address), the process returns to determining the current information group based on the current query address, and the destination MAC address is looked up in the re-determined current information group, that is, the lookup is performed again. If it is still not found, the current query address is updated again and the lookup continues, until the destination MAC address is found in the current information group of the second hash table, which indicates that the destination MAC address lookup has succeeded, and the target REP identifier corresponding to the destination MAC address is acquired from that current information group.
In this embodiment, during the lookup in the second hash table, the next address is used as the basis of the lookup: the target next address is first used as the current query address for the lookup; if the destination MAC address is not found, the current query address is updated to the next address in the current information group and the lookup continues, with the current information group determined from the current query address; where the destination MAC address is found in the current information group of the second hash table, the target REP identifier is acquired from the current information group. In this way, even if different MAC addresses have the same hash value, these MAC addresses are linked through the next addresses and are looked up one after another through the next addresses, which improves the efficiency of the lookup.
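The chained lookup across the first and second hash tables can be sketched as follows. This is a simplified software model under assumed structures; the entry layout (mac, rep_id, next_addr), the table sizes and the ADDR_NULL sentinel are illustrative assumptions rather than the actual hardware tables.

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define TABLE1_SIZE 256   /* first hash table, indexed by the hash key (assumed) */
#define TABLE2_SIZE 256   /* second hash table (hash bucket) for collisions (assumed) */
#define ADDR_NULL   0xFFFF

/* One information group of either hash table: MAC address, REP identifier,
 * and the address of the next colliding entry in the second hash table. */
typedef struct {
    uint8_t  mac[6];
    uint16_t rep_id;
    uint16_t next_addr;   /* ADDR_NULL if there is no further entry */
    bool     used;
} hash_entry_t;

static hash_entry_t table1[TABLE1_SIZE];
static hash_entry_t table2[TABLE2_SIZE];

/* First-level lookup: check table1 at the hash key; on a miss, follow the
 * next addresses through table2 until the destination MAC address is found.
 * Returns true and writes the REP identifier on a hit. */
bool lookup_rep(uint8_t key, const uint8_t dst_mac[6], uint16_t *rep_id)
{
    const hash_entry_t *e = &table1[key];
    if (!e->used)
        return false;
    if (memcmp(e->mac, dst_mac, 6) == 0) {       /* hit in the first hash table */
        *rep_id = e->rep_id;
        return true;
    }
    uint16_t addr = e->next_addr;                /* walk the collision chain */
    while (addr != ADDR_NULL) {
        e = &table2[addr];
        if (memcmp(e->mac, dst_mac, 6) == 0) {
            *rep_id = e->rep_id;
            return true;
        }
        addr = e->next_addr;
    }
    return false;                                /* destination MAC not supported */
}
```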
In one embodiment, looking up, in the pre-acquired circular linked list node address table, the target node address corresponding to the target REP identifier includes:
where validity indication information corresponding to the target REP identifier in the MAC address hash table is a first preset value, acquiring a head address corresponding to the target REP identifier from the MAC address hash table, and determining the head address as the target node address; and
where the validity indication information corresponding to the target REP identifier in the MAC address hash table is a second preset value, acquiring the target node address corresponding to the target REP identifier in the circular linked list node address table.
It should be noted that, during the initialization of the circular linked list node address table, since none of the REPs corresponding to the REP identifiers has yet transmitted its first message, the circular linked list node addresses in the circular linked list node address table may be empty, that is, the circular linked list node address corresponding to each REP identifier in the table may be empty. A validity indication information sequence can be set in the MAC address hash table in one-to-one correspondence with the REP identifier sequence, that is, each REP identifier in the MAC address hash table has one corresponding piece of validity indication information, and the validity indication information of a REP identifier can be used to indicate whether a message is the first message or a non-first message for that REP. For example, if the validity indication information corresponding to a REP identifier is the first preset value (e.g. 0), it indicates that the received message is the first message for that REP; at this point the circular linked list node address corresponding to that REP identifier in the circular linked list node address table is empty, so where the validity indication information corresponding to the target REP identifier in the MAC address hash table is the first preset value, the head address corresponding to the target REP identifier needs to be acquired first from the MAC address hash table and used as the target node address. If the validity indication information is the second preset value (e.g. 1), it indicates that the received message is a non-first message for that REP; the circular linked list node address corresponding to that REP identifier in the circular linked list node address table is updated after the first message is received, and the validity indication information is also updated, so the circular linked list node address corresponding to that REP identifier can be acquired directly from the circular linked list node address table.
In this embodiment, where the validity indication information corresponding to the target REP identifier takes different values, the target node address is determined in different ways, which improves the flexibility of acquiring the target node address; the target queue identifier is then acquired based on the target node address, thereby improving the flexibility of the queue lookup.
In one embodiment, any entry further includes a next address, and after acquiring the target queue identifier from the target entry of the pre-acquired queue circular linked list, the method further includes:
taking the next address in the target entry as the node address of the target REP identifier in the circular linked list node address table; and
if the validity indication information is the first preset value, updating the validity indication information to the second preset value.
That is, each time the target queue identifier is acquired from the target entry of the queue circular linked list, the node address of the target REP identifier in the circular linked list node address table is updated, implementing a dynamic update of the node address corresponding to the target REP identifier. In this way, the next time a message is received and the queue identifier of that target REP identifier is looked up, a new queue identifier is found, so that the message can be delivered into a new queue, implementing dynamic queue mapping and preventing messages with the same REP identifier from being continuously delivered into the same queue, which relieves the pressure on the queues. It should be noted that the next address in an entry of the queue circular linked list can be understood as the address of the node following the node corresponding to that entry in the queue circular linked list.
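Extending the earlier sketch, the following hypothetical C fragment shows the read-then-advance behavior together with the first-message handling described above; the arrays rep_valid and rep_start_addr merely stand in for the validity indication information and the head address kept in the MAC address hash table, and the layout is an assumption for illustration only.

```c
#include <stdint.h>
#include <stdbool.h>

#define MAX_QUEUES 2048
#define MAX_REPS   256

typedef struct { uint16_t queue_id; uint16_t next_addr; } queue_node_t;

static queue_node_t queue_ring[MAX_QUEUES];   /* queue circular linked list nodes */
static uint16_t     rep_node_addr[MAX_REPS];  /* cached node address per REP */
static bool         rep_valid[MAX_REPS];      /* validity indication per REP */
static uint16_t     rep_start_addr[MAX_REPS]; /* head address from the MAC address hash table */

/* Map one message of the given REP to a queue: for the first message the
 * head address is used, afterwards the cached node address; in both cases
 * the cache is advanced to the next node, so consecutive messages of the
 * same REP rotate over the queues allocated to it. */
uint16_t map_to_queue(uint16_t rep_id)
{
    uint16_t node_addr = rep_valid[rep_id] ? rep_node_addr[rep_id]
                                           : rep_start_addr[rep_id];
    uint16_t queue_id = queue_ring[node_addr].queue_id;

    rep_node_addr[rep_id] = queue_ring[node_addr].next_addr;  /* dynamic update */
    rep_valid[rep_id] = true;                                 /* no longer the first message */
    return queue_id;
}
```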
In one embodiment, the queue circular linked list includes N circular linked lists, where N is an integer greater than 1, and multiple queue identifiers in the same circular linked list correspond to the same REP identifier.
That is, one REP identifier corresponds to one circular linked list, and the multiple queue identifiers in that list correspond to that REP identifier. In this way, after the target REP identifier is obtained and the target node address corresponding to the target REP identifier is looked up in the circular linked list node address table, the target queue identifier can be looked up, via the target node address, in the target entry of the circular linked list corresponding to the target REP identifier. Different REP identifiers correspond to different circular linked lists, so queue identifiers can be looked up in different circular linked lists, which avoids concentrating the queue identifier lookup task in a single circular linked list and improves lookup stability and accuracy.
In one embodiment, the method further includes:
where an addition instruction for a first circular linked list among the N circular linked lists is received, storing a new entry of the addition instruction into a free first address in the storage space of the queue circular linked list, where the addition instruction indicates adding between a first node and a second node of the first circular linked list, the first circular linked list corresponds to a first REP identifier, and the new entry includes the first REP identifier and a corresponding new queue identifier; and
updating the next address in the entry of the first node to the first address, and updating the next address in the new entry to a second address of the second node.
That is, in this embodiment, a node can be added to a circular linked list, implementing an update of the circular linked list. The address of the new node is the first address, the entry of the new node is the new entry after its next address has been updated, and the new entry includes the first REP identifier and the corresponding new queue identifier. In this way, a dynamic queue-addition function can be implemented to meet the need to add queues.
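A minimal sketch of this insertion, assuming the same illustrative node layout as above (queue_id, next_addr) and caller-supplied addresses, might look as follows; it is not the disclosed hardware implementation.

```c
#include <stdint.h>

#define MAX_QUEUES 2048

typedef struct { uint16_t queue_id; uint16_t next_addr; } queue_node_t;
static queue_node_t queue_ring[MAX_QUEUES];

/* Insert a new entry at the free address new_addr between the nodes stored
 * at first_addr and second_addr of one REP's circular linked list: the new
 * entry points to the second node, and the first node is re-pointed to it. */
void ring_insert(uint16_t first_addr, uint16_t second_addr,
                 uint16_t new_addr, uint16_t new_queue_id)
{
    queue_ring[new_addr].queue_id  = new_queue_id;
    queue_ring[new_addr].next_addr = second_addr;  /* new entry -> second node */
    queue_ring[first_addr].next_addr = new_addr;   /* first node -> new entry */
}
```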
In one embodiment, the method further includes:
where a deletion instruction for a target node of a second circular linked list among the N circular linked lists is received, deleting the target node from the second circular linked list, and updating the next address in the entry of a third node of the second circular linked list to a third address;
where the third address is the next address in the entry of the target node, and the third node is the node in the second circular linked list before the update whose next address is the address of the target node.
That is, in this embodiment, a node can be deleted from a circular linked list, implementing an update of the circular linked list. In this way, a dynamic queue-deletion function can be implemented to meet the need to delete queues.
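Deletion can be sketched in the same assumed model; the caller supplies the address of the third node (the predecessor) and of the target node to be removed.

```c
#include <stdint.h>

#define MAX_QUEUES 2048

typedef struct { uint16_t queue_id; uint16_t next_addr; } queue_node_t;
static queue_node_t queue_ring[MAX_QUEUES];

/* Remove the node stored at target_addr from a REP's circular linked list:
 * third_addr is the node whose next_addr currently points at target_addr,
 * and it is re-pointed to the node that followed the deleted one. */
void ring_delete(uint16_t third_addr, uint16_t target_addr)
{
    queue_ring[third_addr].next_addr = queue_ring[target_addr].next_addr;
    /* target_addr is now free and can be reused by a later insertion */
}
```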
The process of the above method is described below with a specific embodiment.
The embodiments of the present disclosure implement the queue mapping with a two-level lookup: the first-level lookup matches the mapping relationship between the message and the REP identifier, and the second-level lookup looks up the queue identifier in the circular linked list corresponding to that REP identifier.
First-level lookup: a hash table mapping is used. Because the number of MAC addresses that can be supported is fixed, each MAC address corresponds to one REP on the SOC side and also to one PF or VF on the Host side, and one PF or VF can correspond to multiple queues. During the queue mapping, the 48-bit destination MAC address needs to be mapped to a supported REP identifier. Therefore, the first storage space (RAM) on the FPGA side is the storage space of the first hash table, whose main purpose is to find the direct mapping relationship between the destination MAC address of the received message and a REP; the second storage space on the FPGA side is the storage space of the second hash table, which can be understood as a hash bucket table. Where the destination MAC address is not hit in the first hash table, the lookup continues in the second hash table. The ideal case is that there is no hash conflict (each MAC address has a different hash value) and a match is found on the first lookup in the first hash table; when there is a hash conflict (different MAC addresses hash to the same value), the lookup proceeds to the second hash table through the next address corresponding to the target hash value of the destination MAC address in the first hash table, and so on until the final result is found, with all hash conflicts placed in the second hash table. As shown in Fig. 3, the SOC side can acquire the supported MAC addresses in advance, for example 256 MAC addresses, hash them to obtain 256 hash values, acquire the corresponding information groups, and construct the first hash table, in which one hash value and one corresponding information group constitute one information entry of the first hash table. If the hash values of the 256 MAC addresses are all different, the constructed first hash table includes 256 information entries, and no second hash table is constructed. If some of the hash values of the 256 MAC addresses are the same, the first hash table is first constructed from the distinct hash values and their corresponding information groups, including X information entries (corresponding to X MAC addresses), where X is a positive integer less than or equal to 256; in this case the second hash table needs to be constructed by associating the second information group of each of the remaining Y (Y = 256 - X) MAC addresses with a hash address. If the next address in the second information group of a MAC address is not empty, that next address is the hash address of the next MAC address after it (a MAC address with the same hash value). Broadcast messages are treated specially on the logic side: no table lookup is performed, and a copy is made for each PF. The hash algorithm may be a byte-wise XOR, or a more complex cyclic redundancy check (CRC) algorithm may be used to spread the hash results; since the number of supported MAC addresses is fixed, a relatively simple hash algorithm is recommended.
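As an illustration of the simple byte-wise XOR option mentioned above (one possible choice only; a CRC or another algorithm could equally be used), a 48-bit MAC address can be folded into an 8-bit lookup key as follows:

```c
#include <stdint.h>

/* Fold a 48-bit MAC address into an 8-bit lookup key by XOR-ing its six
 * bytes together; a CRC could be substituted if better dispersion is needed. */
uint8_t mac_hash_key(const uint8_t mac[6])
{
    uint8_t key = 0;
    for (int i = 0; i < 6; i++)
        key ^= mac[i];
    return key;
}
```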
Second-level lookup: the queue identifier is selected by means of a circular linked list. For example, a storage space whose depth can hold all the queues may be instantiated, and the identifiers of all queues are stored in that storage space; this is acceptable in terms of the FPGA's internal resources, the space utilization is high, and no resources are wasted. These queues are shared by all REPs, but at any one moment a queue can be allocated to only one REP, that is, two different REPs will not share the same queue. In the memory on the SOC side, the queues belonging to one REP are connected by means of a circular linked list and flushed down into the FPGA's internal storage space, so that the FPGA's internal storage space can receive the queue identifiers (Q id) dynamically allocated to all REPs, improving resource utilization efficiency. When the first message inside the logic is looked up and matched for the first time, the head address of the circular linked list corresponding to the REP is obtained and the queue identifier for the first message is found; at the same time, the address of the linked list node is recorded in another storage space through the circular linked list node address table, so that subsequent messages can obtain, one after another in the circular linked list, the queue identifier of that REP identifier (REP id) and be sent to the Host side. The circular linked list node address table (cache table) should also indicate whether the node is valid; validity indication information can be set in the first hash table to indicate valid or invalid, where the first preset value indicates invalid and the second preset value indicates valid, so that the validity indication information (valid) can be used to judge whether the REP has transmitted its first message.
The above lookup process and the circular linked list solve the problem of excessive resource occupation inside the FPGA, and the method does not affect message forwarding performance: pipelined processing can be achieved, and the processing latency is the time to read the storage space, which can generally be designed to be one clock cycle. When a hash conflict occurs, querying the hash bucket adds one more lookup, which is acceptable as long as the depth of the hash bucket does not affect the overall forwarding performance. If a performance bottleneck appears here, the lookup performance can be improved by adding pipelining to the hash-conflict RAM or by having the first-level lookup fetch two or three hash buckets at once, that is, by trading resources for performance; based on current evaluation, the solution of the present disclosure can meet the performance requirements.
For example, if the number of supported MAC addresses is 256 and the number of queues on the Host side is 2048, the internal implementation of the FPGA is as shown in Fig. 3. A received message enters the FPGA through the SOC side, the logic side parses out the destination MAC address, and the 48-bit destination MAC address is mapped to an 8-bit table-lookup target hash value (key) by the hash algorithm. For example, if the target hash value is 100, then 100 is used as the lookup address of the first hash table; the result shows no hit, and the next address (n_addr) in the entry corresponding to 100 in the first hash table points to address 5 of the second hash table. The lookup continues at address 5 of the second hash table; the match again misses, but the next address (n_addr) in the entry at address 5 points to address 255. The lookup continues at address 255 of the second hash table, and this time the match hits, that is, the MAC address in the entry at address 255 is the destination MAC address. The corresponding target REP identifier, validity indication information and head address are acquired from that entry of the second hash table; for example, the target REP identifier is 128. If the validity indication information is the second preset value, indicating valid, the head address does not need to be used: the circular linked list node address table is queried with the target REP identifier 128 to obtain the corresponding target node address, for example 128 points to address 255; the circular linked list is queried with address 255 to obtain the final queue identifier (Q id), and at the same time the next address (next_addr) in the entry corresponding to address 255 in the circular linked list is written back into the circular linked list node address table as the address corresponding to the target REP identifier 128, with the validity indication information (valid) set as the enable bit for use by subsequent messages when querying the linked list; in this way a pipelined effect can be achieved. If it is the first message, the address of that REP identifier found in the circular linked list node address table is invalid, so the head address (start address) s_addr in the first hash table is used as the address for querying the circular linked list, and the validity indication information is updated so that it indicates that the address is valid.
For example, as shown in Fig. 4, addresses 0 to 3 of one circular linked list each store an entry: the next address (n_addr) in the entry at address 0 points to address 1, the next address (n_addr) in the entry at address 1 points to address 2, the next address (n_addr) in the entry at address 2 points to address 3, and the next address (n_addr) in the entry at address 3 points to address 0, thereby forming a circular linked list.
As for the dynamic update of a circular linked list, for example, when a dynamically allocated queue needs to be added to the circular linked list, a node can be added. Addresses 0, 1, 2 and 3 form a circular linked list; if a new queue identifier is now allocated for use by that REP identifier, the new entry of the new queue needs to be added to the storage space holding that circular linked list, for example, as shown in Fig. 5, stored at the free address 4. The next address (n_addr) in the new entry can be pointed at address 0, and the next address (n_addr) in the entry at address 3 is pointed at address 4, thereby forming a new circular linked list in which the entries at addresses 0, 1, 2, 3 and 4 are strung together as a linked list. In practice, addresses 0, 1, 2, 3 and 4 can be located anywhere in the allocated storage space (RAM), as long as the next address (n_addr) in each entry of the list cyclically points to the next node so as to form a circular linked list and the addition does not cause the lookup function to malfunction; this implements the dynamic queue-addition function.
When a queue belonging to a REP identifier needs to be removed, for example, as shown in Fig. 6, to delete the entry at address 3, the next address (n_addr) of the entry at address 2 is simply pointed at address 0, thereby forming the circular linked list after node removal. If what is removed is the node pointed to by the head address in the first hash table (which is the address of one node in the circular linked list corresponding to that REP identifier), the head address in the first hash table needs to be updated first so that it has a valid target after the removal, and the circular linked list node is then deleted in the same way, so that deleting a queue does not affect the normal lookup function.
The solution of the embodiments of the present disclosure can effectively reduce the number of queues on the SOC side; since each queue must apply for corresponding memory space for sending and receiving network messages, this also frees memory space on the SOC side and improves the bandwidth utilization of the reserved queues. The number of queues that the FPGA side needs to support decreases accordingly, which also frees FPGA resources, so that more functions can be offloaded to the FPGA for acceleration, and as resource usage decreases the timing is also optimized, allowing the FPGA to run at a higher main frequency. In addition, the dynamic queue mapping process previously done in software is offloaded to the FPGA, which avoids the software operations of parsing the destination MAC address out of the message and computing the table-lookup key, improving the message sending and receiving efficiency of the software and thus the overall performance of the DPU, improving access performance to the physical machines and virtual machines of the Host, and relieving congestion caused by high-frequency host access.
As shown in Fig. 7, according to an embodiment of the present disclosure, a message processing apparatus 700 is further provided, the apparatus including:
a first acquisition module 701, configured to acquire a first message, where the first message includes a destination media access control (MAC) address;
a hash processing module 702, configured to perform a hash calculation on the destination MAC address to obtain a target hash value;
a first identifier acquisition module 703, configured to, where the destination MAC address is found in a pre-acquired MAC address hash table by using the target hash value, acquire a target proxy port REP identifier corresponding to the destination MAC address in the MAC address hash table;
a second identifier acquisition module 704, configured to acquire a target queue identifier corresponding to the target REP identifier; and
a transmission module 705, configured to transmit the first message to a queue corresponding to the target queue identifier.
In one embodiment, the second identifier acquisition module includes:
a first lookup module, configured to look up, in a pre-acquired circular linked list node address table, a target node address corresponding to the target REP identifier, where the circular linked list node address table includes a REP identifier sequence and a circular linked list node address sequence corresponding to the REP identifier sequence; and
a queue identifier acquisition module, configured to acquire the target queue identifier from a target entry of a pre-acquired queue circular linked list, where the queue circular linked list includes the addresses of M nodes and an entry for each node, the addresses of the M nodes include the circular linked list node address sequence, M is an integer greater than 1, any entry includes a queue identifier, the target entry is the entry of a target node, and the target node is the node, among the M nodes, corresponding to the target node address.
In one embodiment, the MAC address hash table includes a first hash table and a second hash table; the first hash table includes a first hash value sequence and a first information group sequence corresponding to the first hash value sequence, where any information group in the first information group sequence includes a MAC address, a REP identifier and a next address; and the second hash table includes a hash address sequence and a second information group sequence corresponding to the hash address sequence, where any information group in the second information group sequence includes a MAC address, a REP identifier and a next address;
where, acquiring the target proxy port REP identifier corresponding to the destination MAC address in the MAC address hash table, where the destination MAC address is found in the pre-acquired MAC address hash table by using the target hash value, includes any one of the following:
where the destination MAC address is found in the first hash table by using the target hash value, acquiring the target REP identifier in the first hash table; or
where the destination MAC address is not found in the first hash table by using the target hash value, and the destination MAC address is found in the second hash table based on a target next address, acquiring the target REP identifier from the second hash table, the target next address being the next address corresponding to the target hash value in the first hash table.
In one embodiment, the first identifier acquisition module includes:
an address processing module, configured to, where the destination MAC address is not found in the first hash table by using the target hash value, take the target next address as a current query address;
a determination module, configured to determine a current information group based on the current query address, where the current information group is the information group corresponding to a first hash address in the second information group sequence, and the first hash address is the current query address in the first hash address list; and
a first update module, configured to, where the destination MAC address is not found in the current information group of the second hash table, update the current query address to the next address in the current information group, return to the process of determining the current information group based on the current query address, and query again in the current information group of the second hash table, until, where the destination MAC address is found in the current information group of the second hash table, acquiring the target REP identifier from the current information group.
In one embodiment, the queue identifier acquisition module includes:
an address acquisition module, configured to, where validity indication information corresponding to the target REP identifier in the MAC address hash table is a first preset value, acquire a head address corresponding to the target REP identifier from the MAC address hash table and determine the head address as the target node address; and
where the validity indication information corresponding to the target REP identifier in the MAC address hash table is a second preset value, acquire the target node address corresponding to the target REP identifier in the circular linked list node address table.
In one embodiment, any entry further includes a next address, and the apparatus further includes:
a second update module, configured to, after the queue identifier acquisition module acquires the target queue identifier from the target entry of the pre-acquired queue circular linked list, take the next address in the target entry as the node address of the target REP identifier in the circular linked list node address table; and
if the validity indication information is the first preset value, update the validity indication information to the second preset value.
In one embodiment, the queue circular linked list includes N circular linked lists, where N is an integer greater than 1, and multiple queue identifiers in the same circular linked list correspond to the same REP identifier.
In one embodiment, the apparatus further includes:
a storage module, configured to, where an addition instruction for a first circular linked list among the N circular linked lists is received, store a new entry of the addition instruction into a free first address in the storage space of the queue circular linked list, where the addition instruction indicates adding between a first node and a second node of the first circular linked list, the first circular linked list corresponds to a first REP identifier, and the new entry includes the first REP identifier and a corresponding new queue identifier; and
a third update module, configured to update the next address in the entry of the first node to the first address, and update the next address in the new entry to a second address of the second node.
In one embodiment, the apparatus further includes:
a deletion module, configured to, where a deletion instruction for a target node of a second circular linked list among the N circular linked lists is received, delete the target node from the second circular linked list, and update the next address in the entry of a third node of the second circular linked list to a third address;
where the third address is the next address in the entry of the target node, and the third node is the node in the second circular linked list before the update whose next address is the address of the target node.
The message processing apparatuses of the above embodiments are apparatuses for implementing the message processing methods of the above embodiments; their technical features and technical effects correspond, and are not repeated here.
According to embodiments of the present disclosure, the present disclosure further provides an electronic device, a readable storage medium and a computer program product.
The non-transitory computer-readable storage medium of an embodiment of the present disclosure stores computer instructions, and the computer instructions are used to cause a computer to perform the message processing method provided by the present disclosure.
The computer program product of an embodiment of the present disclosure includes a computer program, and the computer program is used to cause a computer to perform the message processing method provided by the embodiments of the present disclosure.
Fig. 8 shows a schematic block diagram of an example electronic device 800 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are merely examples, and are not intended to limit the implementations of the present disclosure described and/or claimed herein.
As shown in Fig. 8, the electronic device 800 includes a computing unit 801, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 802 or a computer program loaded from a storage unit 808 into a random access memory (RAM) 803. Various programs and data required for the operation of the device 800 can also be stored in the RAM 803. The computing unit 801, the ROM 802 and the RAM 803 are connected to one another via a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Multiple components of the electronic device 800 are connected to the I/O interface 805, including: an input unit 806 such as a keyboard or a mouse; an output unit 807 such as various types of displays or speakers; a storage unit 808 such as a magnetic disk or an optical disc; and a communication unit 809 such as a network card, a modem or a wireless communication transceiver. The communication unit 809 allows the electronic device 800 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
The computing unit 801 may be various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processors, controllers, microcontrollers, and the like. The computing unit 801 performs the various methods and processes described above, such as the message processing method. For example, in some embodiments, the message processing method may be implemented as a computer software program that is tangibly contained in a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the message processing method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the message processing method by any other suitable means (for example, by means of firmware). Various implementations of the systems and techniques described herein above may be implemented in digital electronic circuit systems, integrated circuit systems, field programmable gate arrays (FPGA), application specific integrated circuits (ASIC), application specific standard products (ASSP), systems on chip (SOC), complex programmable logic devices (CPLD), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include being implemented in one or more computer programs that can be executed and/or interpreted on a programmable system including at least one programmable processor, where the programmable processor may be a special-purpose or general-purpose programmable processor that can receive data and instructions from a storage system, at least one input device and at least one output device, and transmit data and instructions to the storage system, the at least one input device and the at least one output device.
The program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, so that, when executed by the processor or controller, the program code causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may be executed entirely on the machine, partly on the machine, partly on the machine and partly on a remote machine as a stand-alone software package, or entirely on the remote machine or server.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide interaction with a user, the systems and techniques described herein may be implemented on a computer having: a display device (for example, a cathode ray tube (CRT) or a liquid crystal display (LCD) monitor) for displaying information to the user; and a keyboard and a pointing device (for example, a mouse or a trackball) through which the user can provide input to the computer. Other kinds of devices may also be used to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (for example, visual feedback, auditory feedback or tactile feedback), and input from the user may be received in any form (including acoustic input, speech input or tactile input).
The systems and techniques described herein may be implemented in a computing system that includes back-end components (for example, as a data server), or a computing system that includes middleware components (for example, an application server), or a computing system that includes front-end components (for example, a user computer having a graphical user interface or a web browser through which a user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end components, middleware components or front-end components. The components of the system may be interconnected by any form or medium of digital data communication (for example, a communication network). Examples of communication networks include: a local area network (LAN), a wide area network (WAN), the Internet and blockchain networks.
A computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also known as a cloud computing server or cloud host, which is a host product in the cloud computing service system that solves the defects of difficult management and weak business scalability existing in traditional physical hosts and virtual private server (VPS) services. The server may also be a server of a distributed system, or a server combined with a blockchain.
It should be understood that steps may be reordered, added or deleted using the various forms of flow shown above. For example, the steps recorded in the present disclosure may be executed in parallel, sequentially or in a different order, as long as the desired result of the technical solution disclosed in the present disclosure can be achieved; no limitation is imposed herein.
The above specific implementations do not constitute a limitation on the protection scope of the present disclosure. Those skilled in the art should understand that various modifications, combinations, sub-combinations and substitutions may be made according to design requirements and other factors. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present disclosure shall be included within the protection scope of the present disclosure.

Claims (21)

  1. A message processing method, the method comprising:
    acquiring a first message, wherein the first message comprises a destination media access control (MAC) address;
    performing a hash calculation on the destination MAC address to obtain a target hash value;
    where the destination MAC address is found in a pre-acquired MAC address hash table by using the target hash value, acquiring a target proxy port REP identifier corresponding to the destination MAC address in the MAC address hash table;
    acquiring a target queue identifier corresponding to the target REP identifier; and
    transmitting the first message to a queue corresponding to the target queue identifier.
  2. The method according to claim 1, wherein acquiring the target queue identifier corresponding to the target REP identifier comprises:
    looking up, in a pre-acquired circular linked list node address table, a target node address corresponding to the target REP identifier, wherein the circular linked list node address table comprises a REP identifier sequence and a circular linked list node address sequence corresponding to the REP identifier sequence; and
    acquiring the target queue identifier from a target entry of a pre-acquired queue circular linked list, wherein the queue circular linked list comprises addresses of M nodes and an entry of each node, the addresses of the M nodes comprise the circular linked list node address sequence, M is an integer greater than 1, any entry comprises a queue identifier, the target entry is the entry of a target node, and the target node is the node, among the M nodes, corresponding to the target node address.
  3. The method according to claim 1, wherein the MAC address hash table comprises a first hash table and a second hash table, the first hash table comprises a first hash value sequence and a first information group sequence corresponding to the first hash value sequence, any information group in the first information group sequence comprises a MAC address, a REP identifier and a next address, the second hash table comprises a hash address sequence and a second information group sequence corresponding to the hash address sequence, and any information group in the second information group sequence comprises a MAC address, a REP identifier and a next address;
    wherein, where the destination MAC address is found in the pre-acquired MAC address hash table by using the target hash value, acquiring the target proxy port REP identifier corresponding to the destination MAC address in the MAC address hash table comprises any one of the following:
    where the destination MAC address is found in the first hash table by using the target hash value, acquiring the target REP identifier in the first hash table; or
    where the destination MAC address is not found in the first hash table by using the target hash value, and the destination MAC address is found in the second hash table based on a target next address, acquiring the target REP identifier from the second hash table, wherein the target next address is the next address corresponding to the target hash value in the first hash table.
  4. The method according to claim 3, wherein, where the destination MAC address is not found in the first hash table by using the target hash value, and the destination MAC address is found in the second hash table based on the target next address, acquiring the target REP identifier from the second hash table comprises:
    where the destination MAC address is not found in the first hash table by using the target hash value, taking the target next address as a current query address;
    determining a current information group based on the current query address, wherein the current information group is the information group corresponding to a first hash address in the second information group sequence, and the first hash address is the current query address in the first hash address list; and
    where the destination MAC address is not found in the current information group of the second hash table, updating the current query address to the next address in the current information group, and returning to the step of determining the current information group based on the current query address to query again, until, where the destination MAC address is found in the current information group of the second hash table, acquiring the target REP identifier from the current information group.
  5. The method according to claim 2, wherein looking up, in the pre-acquired circular linked list node address table, the target node address corresponding to the target REP identifier comprises:
    where validity indication information corresponding to the target REP identifier in the MAC address hash table is a first preset value, acquiring a head address corresponding to the target REP identifier from the MAC address hash table, and determining the head address as the target node address; and
    where the validity indication information corresponding to the target REP identifier in the MAC address hash table is a second preset value, acquiring the target node address corresponding to the target REP identifier in the circular linked list node address table.
  6. The method according to claim 5, wherein any entry further comprises a next address, and after acquiring the target queue identifier from the target entry of the pre-acquired queue circular linked list, the method further comprises:
    taking the next address in the target entry as a node address of the target REP identifier in the circular linked list node address table; and
    if the validity indication information is the first preset value, updating the validity indication information to the second preset value.
  7. The method according to claim 2, wherein the queue circular linked list comprises N circular linked lists, N is an integer greater than 1, and multiple queue identifiers in the same circular linked list correspond to the same REP identifier.
  8. The method according to claim 7, wherein the method further comprises:
    where an addition instruction for a first circular linked list among the N circular linked lists is received, storing a new entry of the addition instruction into a free first address in a storage space of the queue circular linked list, wherein the addition instruction indicates adding between a first node and a second node of the first circular linked list, the first circular linked list corresponds to a first REP identifier, and the new entry comprises the first REP identifier and a corresponding new queue identifier; and
    updating the next address in the entry of the first node to the first address, and updating the next address in the new entry to a second address of the second node.
  9. The method according to claim 7, wherein the method further comprises:
    where a deletion instruction for a target node of a second circular linked list among the N circular linked lists is received, deleting the target node from the second circular linked list, and updating the next address in the entry of a third node of the second circular linked list to a third address;
    wherein the third address is the next address in the entry of the target node, and the third node is the node in the second circular linked list before the update whose next address is the address of the target node.
  10. A message processing apparatus, the apparatus comprising:
    a first acquisition module, configured to acquire a first message, wherein the first message comprises a destination media access control (MAC) address;
    a hash processing module, configured to perform a hash calculation on the destination MAC address to obtain a target hash value;
    a first identifier acquisition module, configured to, where the destination MAC address is found in a pre-acquired MAC address hash table by using the target hash value, acquire a target proxy port REP identifier corresponding to the destination MAC address in the MAC address hash table;
    a second identifier acquisition module, configured to acquire a target queue identifier corresponding to the target REP identifier; and
    a transmission module, configured to transmit the first message to a queue corresponding to the target queue identifier.
  11. The apparatus according to claim 10, wherein the second identifier acquisition module comprises:
    a first lookup module, configured to look up, in a pre-acquired circular linked list node address table, a target node address corresponding to the target REP identifier, wherein the circular linked list node address table comprises a REP identifier sequence and a circular linked list node address sequence corresponding to the REP identifier sequence; and
    a queue identifier acquisition module, configured to acquire the target queue identifier from a target entry of a pre-acquired queue circular linked list, wherein the queue circular linked list comprises addresses of M nodes and an entry of each node, the addresses of the M nodes comprise the circular linked list node address sequence, M is an integer greater than 1, any entry comprises a queue identifier, the target entry is the entry of a target node, and the target node is the node, among the M nodes, corresponding to the target node address.
  12. The apparatus according to claim 10, wherein the MAC address hash table comprises a first hash table and a second hash table, the first hash table comprises a first hash value sequence and a first information group sequence corresponding to the first hash value sequence, any information group in the first information group sequence comprises a MAC address, a REP identifier and a next address, the second hash table comprises a hash address sequence and a second information group sequence corresponding to the hash address sequence, and any information group in the second information group sequence comprises a MAC address, a REP identifier and a next address;
    wherein, where the destination MAC address is found in the pre-acquired MAC address hash table by using the target hash value, acquiring the target proxy port REP identifier corresponding to the destination MAC address in the MAC address hash table comprises any one of the following:
    where the destination MAC address is found in the first hash table by using the target hash value, acquiring the target REP identifier in the first hash table; or
    where the destination MAC address is not found in the first hash table by using the target hash value, and the destination MAC address is found in the second hash table based on a target next address, acquiring the target REP identifier from the second hash table, wherein the target next address is the next address corresponding to the target hash value in the first hash table.
  13. The apparatus according to claim 12, wherein the first identifier acquisition module comprises:
    an address processing module, configured to, where the destination MAC address is not found in the first hash table by using the target hash value, take the target next address as a current query address;
    a determination module, configured to determine a current information group based on the current query address, wherein the current information group is the information group corresponding to a first hash address in the second information group sequence, and the first hash address is the current query address in the first hash address list; and
    a first update module, configured to, where the destination MAC address is not found in the current information group of the second hash table, update the current query address to the next address in the current information group, return to the process of determining the current information group based on the current query address, and query again in the current information group of the second hash table, until, where the destination MAC address is found in the current information group of the second hash table, acquire the target REP identifier from the current information group.
  14. The apparatus according to claim 11, wherein the queue identifier acquisition module comprises:
    an address acquisition module, configured to, where validity indication information corresponding to the target REP identifier in the MAC address hash table is a first preset value, acquire a head address corresponding to the target REP identifier from the MAC address hash table, and determine the head address as the target node address; and
    where the validity indication information corresponding to the target REP identifier in the MAC address hash table is a second preset value, acquire the target node address corresponding to the target REP identifier in the circular linked list node address table.
  15. The apparatus according to claim 14, wherein any entry further comprises a next address, and the apparatus further comprises:
    a second update module, configured to, after the queue identifier acquisition module acquires the target queue identifier from the target entry of the pre-acquired queue circular linked list, take the next address in the target entry as a node address of the target REP identifier in the circular linked list node address table; and
    if the validity indication information is the first preset value, update the validity indication information to the second preset value.
  16. The apparatus according to claim 11, wherein the queue circular linked list comprises N circular linked lists, N is an integer greater than 1, and multiple queue identifiers in the same circular linked list correspond to the same REP identifier.
  17. The apparatus according to claim 16, wherein the apparatus further comprises:
    a storage module, configured to, where an addition instruction for a first circular linked list among the N circular linked lists is received, store a new entry of the addition instruction into a free first address in a storage space of the queue circular linked list, wherein the addition instruction indicates adding between a first node and a second node of the first circular linked list, the first circular linked list corresponds to a first REP identifier, and the new entry comprises the first REP identifier and a corresponding new queue identifier; and
    a third update module, configured to update the next address in the entry of the first node to the first address, and update the next address in the new entry to a second address of the second node.
  18. The apparatus according to claim 16, wherein the apparatus further comprises:
    a deletion module, configured to, where a deletion instruction for a target node of a second circular linked list among the N circular linked lists is received, delete the target node from the second circular linked list, and update the next address in the entry of a third node of the second circular linked list to a third address;
    wherein the third address is the next address in the entry of the target node, and the third node is the node in the second circular linked list before the update whose next address is the address of the target node.
  19. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the message processing method according to any one of claims 1-9.
  20. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to perform the message processing method according to any one of claims 1-9.
  21. A computer program product, comprising a computer program, wherein the computer program, when executed by a processor, implements the message processing method according to any one of claims 1-9.
PCT/CN2022/111397 2021-12-23 2022-08-10 一种报文处理方法、装置及电子设备 WO2023115978A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111590332.6 2021-12-23
CN202111590332.6A CN114253979B (zh) 2021-12-23 2021-12-23 一种报文处理方法、装置及电子设备

Publications (1)

Publication Number Publication Date
WO2023115978A1 true WO2023115978A1 (zh) 2023-06-29

Family

ID=80797185

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/111397 WO2023115978A1 (zh) 2021-12-23 2022-08-10 一种报文处理方法、装置及电子设备

Country Status (2)

Country Link
CN (1) CN114253979B (zh)
WO (1) WO2023115978A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117411738A (zh) * 2023-12-15 2024-01-16 格创通信(浙江)有限公司 组播复制方法、装置、电子设备和存储介质

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114253979B (zh) * 2021-12-23 2023-10-03 北京百度网讯科技有限公司 一种报文处理方法、装置及电子设备
CN115334035B (zh) * 2022-07-15 2023-10-10 天翼云科技有限公司 一种报文转发方法、装置、电子设备及存储介质
CN116055397B (zh) * 2023-03-27 2023-08-18 井芯微电子技术(天津)有限公司 队列表项维护方法与装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007064648A2 (en) * 2005-11-30 2007-06-07 Tellabs Bedford, Inc. Communications distribution system
CN103117931A (zh) * 2013-02-21 2013-05-22 烽火通信科技股份有限公司 基于哈希表和tcam表的mac地址硬件学习方法及系统
CN104125166A (zh) * 2014-07-31 2014-10-29 华为技术有限公司 一种队列调度方法及计算系统
US20140359232A1 (en) * 2013-05-10 2014-12-04 Hugh W. Holbrook System and method of a shared memory hash table with notifications
CN104636185A (zh) * 2015-01-27 2015-05-20 华为技术有限公司 业务上下文管理方法、物理主机、pcie设备及迁移管理设备
CN114253979A (zh) * 2021-12-23 2022-03-29 北京百度网讯科技有限公司 一种报文处理方法、装置及电子设备

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101826107B (zh) * 2010-04-02 2015-08-05 华为技术有限公司 哈希数据处理方法和装置
CN104702506B (zh) * 2013-12-09 2018-10-19 华为技术有限公司 一种报文传输方法、网络节点及报文传输系统
CN108306806B (zh) * 2018-02-06 2021-10-29 新华三技术有限公司 一种报文转发方法及装置
CN109194559B (zh) * 2018-08-29 2021-04-30 迈普通信技术股份有限公司 组播方法及vtep设备
US20220311643A1 (en) * 2019-06-21 2022-09-29 Telefonaktiebolaget Lm Ericsson (Publ) Method and system to transmit broadcast, unknown unicast, or multicast (bum) traffic for multiple ethernet virtual private network (evpn) instances (evis)
CN111585863B (zh) * 2020-06-11 2022-03-01 国家计算机网络与信息安全管理中心 虚拟可扩展局域网报文处理设备及其数据处理方法
CN112632079B (zh) * 2020-12-30 2023-07-21 联想未来通信科技(重庆)有限公司 一种数据流标识的查询方法及装置
CN113364662B (zh) * 2021-06-30 2023-03-24 北京天融信网络安全技术有限公司 一种报文处理方法、装置、存储介质和电子设备

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007064648A2 (en) * 2005-11-30 2007-06-07 Tellabs Bedford, Inc. Communications distribution system
CN103117931A (zh) * 2013-02-21 2013-05-22 烽火通信科技股份有限公司 基于哈希表和tcam表的mac地址硬件学习方法及系统
US20140359232A1 (en) * 2013-05-10 2014-12-04 Hugh W. Holbrook System and method of a shared memory hash table with notifications
CN104125166A (zh) * 2014-07-31 2014-10-29 华为技术有限公司 一种队列调度方法及计算系统
CN104636185A (zh) * 2015-01-27 2015-05-20 华为技术有限公司 业务上下文管理方法、物理主机、pcie设备及迁移管理设备
CN114253979A (zh) * 2021-12-23 2022-03-29 北京百度网讯科技有限公司 一种报文处理方法、装置及电子设备

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117411738A (zh) * 2023-12-15 2024-01-16 格创通信(浙江)有限公司 组播复制方法、装置、电子设备和存储介质
CN117411738B (zh) * 2023-12-15 2024-03-08 格创通信(浙江)有限公司 组播复制方法、装置、电子设备和存储介质

Also Published As

Publication number Publication date
CN114253979A (zh) 2022-03-29
CN114253979B (zh) 2023-10-03

Similar Documents

Publication Publication Date Title
WO2023115978A1 (zh) 一种报文处理方法、装置及电子设备
US9104676B2 (en) Hash algorithm-based data storage method and system
WO2018099107A1 (zh) 一种哈希表管理的方法和装置、计算机存储介质
JP5265758B2 (ja) 組込みまたは無線通信システムにおけるメモリ割当てのためのシステムおよび方法
CN109981493B (zh) 一种用于配置虚拟机网络的方法和装置
US9584481B2 (en) Host providing system and communication control method
US20200186616A1 (en) Kubernetes as a distributed operating system for multitenancy/multiuser
CN109309706B (zh) 在云局域网的存储系统间共享指纹和数据块的方法和系统
US10601711B1 (en) Lens table
CN113067860B (zh) 用于同步信息的方法、装置、设备、介质和产品
JP5610227B2 (ja) 計算機及び識別子管理方法
CN104702508A (zh) 表项动态更新方法及系统
CN105634999B (zh) 一种介质访问控制地址的老化方法及装置
WO2021088357A1 (zh) 一种用于生成转发信息的方法、装置和系统
US20220269411A1 (en) Systems and methods for scalable shared memory among networked devices comprising ip addressable memory blocks
US10355994B1 (en) Lens distribution
US10795873B1 (en) Hash output manipulation
JP6233846B2 (ja) 可変長ノンスの生成
US11398904B1 (en) Key management for remote device access
TWI580217B (zh) 管理伺服器及其操作方法與伺服器系統
US20220407916A1 (en) Network load balancer, request message distribution method, program product and system
CN114615273B (zh) 基于负载均衡系统的数据发送方法、装置和设备
CN114827159B (zh) 网络请求路径优化方法、装置、设备和存储介质
CN115242733B (zh) 报文组播方法、组播网关、电子设备及存储介质
US11314414B2 (en) Methods, devices, and computer program products for storage management

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22909308

Country of ref document: EP

Kind code of ref document: A1