CN115714744A - SRv6 message storage method and device and electronic equipment - Google Patents

SRv6 message storage method and device and electronic equipment

Info

Publication number
CN115714744A
CN115714744A
Authority
CN
China
Prior art keywords: address, storage, SRv6, target, vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211402994.0A
Other languages
Chinese (zh)
Inventor
曾四鸣
段昕
罗蓬
马天祥
李卓
刘金典
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Hebei Electric Power Co Ltd
State Grid Hebei Energy Technology Service Co Ltd
Original Assignee
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Hebei Electric Power Co Ltd
State Grid Hebei Energy Technology Service Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Electric Power Research Institute of State Grid Hebei Electric Power Co Ltd, and State Grid Hebei Energy Technology Service Co Ltd
Priority to CN202211402994.0A
Publication of CN115714744A
Legal status: Pending

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a method and an apparatus for storing SRv6 packets, and an electronic device. The method comprises: acquiring an SRv6 packet to be stored; converting the destination IPv6 address in the SRv6 packet to be stored from hexadecimal into a decimal target vector; predicting a cumulative distribution function value corresponding to the target vector based on the target vector and a preset address index model, where the cumulative distribution function value represents the offset of the packet's storage address relative to the start address of the storage space; and determining a target storage address for the SRv6 packet in the storage space based on the cumulative distribution function value and a preset address mapping table, and storing the SRv6 packet at the target storage address. The invention relieves congestion when SRv6 packets are stored, reduces the space they occupy, and improves storage efficiency.

Description

SRv6 message storage method and device and electronic equipment
Technical Field
The present invention relates to the field of internet technologies, and in particular, to a method and an apparatus for storing an SRv6 packet, and an electronic device.
Background
With the emergence and application of big data, cloud computing, artificial intelligence, and the like, the internet is undergoing a new revolution. Holographic communication, intent-driven communication, and integrated space-air-ground-sea communication are no longer remote prospects, and an intelligent world in which everything is sensed and interconnected is approaching. Numerous new applications and technologies place new requirements and challenges on current IP networks. Yet in contrast to the rapid iteration of internet applications, the TCP/IP protocol suite, as the foundation of the internet, has seen no substantial change in the last 40 years, and many capabilities of IP networks urgently need enhancement. IPv6, as an improvement over IPv4, solves the address-exhaustion problem and some security problems, but the core of TCP/IP technology remains unchanged and its inherent defects remain unsolved.
Therefore, to meet the key requirements identified by research teams for data networks in typical Network 5.0 application scenarios, a novel network protocol system, NewIP, has been proposed. The SRv6 protocol (Segment Routing over IPv6) is a technical means of implementing NewIP network functions: it is compatible with IPv6 at the network layer and implements segment routing in the forwarding plane.
At present, the storage and forwarding of SRv6 packets mostly rely on algorithms based on prefix distribution characteristics. Because the scale and distribution of IPv6 addresses are highly uncertain and IPv6 addresses are updated relatively frequently, the imbalance of the search tree built by such prefix-value-based algorithms is aggravated, so the storage paths of SRv6 packets become congested, the occupied space is large, and storage efficiency is low.
Disclosure of Invention
The invention provides a method and an apparatus for storing SRv6 packets, and an electronic device, which can relieve congestion when SRv6 packets are stored, reduce the space they occupy, and improve storage efficiency.
In a first aspect, the present invention provides a method for storing an SRv6 packet, including: acquiring an SRv6 packet to be stored; converting the destination IPv6 address in the SRv6 packet to be stored from hexadecimal into a decimal target vector; predicting a cumulative distribution function value corresponding to the target vector based on the target vector and a preset address index model, where the cumulative distribution function value represents the offset of the packet's storage address relative to the start address of the storage space, and the address index model defines the mapping relationship between vector values of target vectors and cumulative distribution function values; and determining a target storage address for the SRv6 packet in the storage space based on the cumulative distribution function value and a preset address mapping table, and storing the SRv6 packet at the target storage address.
In the storage method provided by the invention, the address index model defines the mapping relationship between the vector value of a target vector and its cumulative distribution function value. From the vector value of the decimal vector converted from the destination IPv6 address of the SRv6 packet to be stored, the model determines a cumulative distribution function value that expresses the offset of the packet's storage address relative to the start address of the storage space, i.e., the packet's storage position in the storage space. A target storage address is then determined from the cumulative distribution function value and a preset address mapping table, fixing a definite position in the storage space for the SRv6 packet. Because different SRv6 packets carry different destination IPv6 addresses, their target vectors have different vector values and thus map to different target storage addresses. This makes storage addresses deterministic, relieves congestion when SRv6 packets are stored, reduces the space they occupy, and improves storage efficiency.
In a possible implementation, predicting the cumulative distribution function value corresponding to the target vector based on the target vector and the preset address index model includes: determining the vector value of the target vector; inputting the vector value into the first-layer model of the address index model to determine the classification number corresponding to the target vector; and inputting that classification number into the second-layer model of the address index model to obtain the cumulative distribution function value corresponding to the target vector.
In a possible implementation, before predicting the cumulative distribution function value corresponding to the target vector, the method further includes: acquiring training data comprising a plurality of SRv6 packets and the storage address of each packet; converting the destination IPv6 address of each SRv6 packet into a decimal vector to generate a plurality of training samples, each comprising the decimal vector of one SRv6 packet and that packet's storage address; and training an initial neural network model on the training samples to obtain the address index model.
In a possible implementation, training the initial neural network model on the training samples to obtain the address index model includes the following steps. Step one: sort the training samples by the vector value of their decimal vectors and determine each sample's ordering label. Step two: train the first-layer model of the neural network, taking the vector value of the decimal vector in each training sample as input and the sample's classification number as output, to obtain the first-layer model of the address index model. Step three: train the second-layer model of the neural network, taking the classification number of each training sample as input and the cumulative distribution function value corresponding to the storage address of the sample's SRv6 packet as output, to obtain the second-layer model of the address index model. Step four: use training samples that did not participate in training as test samples to test the address index model during training. Step five: if the test accuracy is greater than or equal to the set accuracy, end the training process and take the result as the preset address index model; if the test accuracy is below the set accuracy, repeat steps two, three, and four to retrain the neural network model.
In a possible implementation, before ending the training process and obtaining the preset address index model, the method further includes: determining the difference between the cumulative distribution function value obtained in the test and the cumulative distribution function value corresponding to the storage address of the SRv6 packet in the test sample; if the ratio of this difference to the cumulative distribution function value corresponding to the storage address in the test sample is less than a set value, determining that the test accuracy is greater than or equal to the set accuracy; otherwise, determining that the test accuracy is below the set accuracy.
In a possible implementation, determining the target storage address of the SRv6 packet in the storage space based on the cumulative distribution function value and the preset address mapping table includes: determining the offset between the target storage address and the start address of the storage space based on the cumulative distribution function value and the size of the storage space; and determining the target storage address based on the offset and the address mapping table, where the address mapping table defines the mapping relationship between offsets and storage addresses in the storage space.
In a possible implementation, the address mapping table further defines a mapping relationship between storage addresses in the storage space and next-hop addresses; after storing the SRv6 packet at the target storage address, the method further includes: determining the next-hop address based on the target storage address of the SRv6 packet and the address mapping table; and forwarding the SRv6 packet to the device corresponding to the next-hop address.
In a second aspect, an embodiment of the present invention provides an apparatus for storing an SRv6 packet, including: a communication module configured to acquire an SRv6 packet to be stored; and a processing module configured to convert the destination IPv6 address in the SRv6 packet to be stored from hexadecimal into a decimal target vector; predict a cumulative distribution function value corresponding to the target vector based on the target vector and a preset address index model, where the cumulative distribution function value represents the offset of the packet's storage address relative to the start address of the storage space and the address index model defines the mapping relationship between vector values of target vectors and cumulative distribution function values; and determine a target storage address for the SRv6 packet in the storage space based on the cumulative distribution function value and a preset address mapping table, and store the SRv6 packet at the target storage address.
In a possible implementation, the processing module is specifically configured to determine the vector value of the target vector; input the vector value into the first-layer model of the address index model to determine the classification number corresponding to the target vector; and input that classification number into the second-layer model of the address index model to obtain the cumulative distribution function value corresponding to the target vector.
In a possible implementation, the processing module is further configured to acquire training data comprising a plurality of SRv6 packets and the storage address of each packet; convert the destination IPv6 address of each SRv6 packet into a decimal vector to generate a plurality of training samples, each comprising the decimal vector of one SRv6 packet and that packet's storage address; and train an initial neural network model on the training samples to obtain the address index model.
In a possible implementation, the processing module is further configured to execute the following steps. Step one: sort the training samples by the vector value of their decimal vectors and determine each sample's ordering label. Step two: train the first-layer model of the neural network, taking the vector value of the decimal vector in each training sample as input and the sample's classification number as output, to obtain the first-layer model of the address index model. Step three: train the second-layer model of the neural network, taking the classification number of each training sample as input and the cumulative distribution function value corresponding to the storage address of the sample's SRv6 packet as output, to obtain the second-layer model of the address index model. Step four: use training samples that did not participate in training as test samples to test the address index model during training. Step five: if the test accuracy is greater than or equal to the set accuracy, end the training process and take the result as the preset address index model; if the test accuracy is below the set accuracy, repeat steps two, three, and four to retrain the neural network model.
In a possible implementation, the processing module is further configured to determine the difference between the cumulative distribution function value obtained in the test and the cumulative distribution function value corresponding to the storage address of the SRv6 packet in the test sample; if the ratio of this difference to the cumulative distribution function value corresponding to the storage address in the test sample is less than a set value, determine that the test accuracy is greater than or equal to the set accuracy; otherwise, determine that the test accuracy is below the set accuracy.
In a possible implementation, the processing module is specifically configured to determine the offset between the target storage address and the start address of the storage space based on the cumulative distribution function value and the size of the storage space, and to determine the target storage address based on the offset and an address mapping table, where the address mapping table defines the mapping relationship between offsets and storage addresses in the storage space.
In a possible implementation, the address mapping table further defines a mapping relationship between storage addresses in the storage space and next-hop addresses; the processing module is further configured to determine the next-hop address based on the target storage address of the SRv6 packet and the address mapping table; and the communication module is further configured to forward the SRv6 packet to the device corresponding to the next-hop address.
In a third aspect, an embodiment of the present invention provides an electronic device comprising a memory and a processor, where the memory stores a computer program and the processor is configured to call and execute the stored computer program to perform the steps of the method according to the first aspect or any possible implementation thereof.
In a fourth aspect, the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to the first aspect or any possible implementation thereof.
For the technical effects of any implementation of the second to fourth aspects, reference may be made to the technical effects of the corresponding implementation of the first aspect; details are not repeated here.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a method for storing an SRv6 packet according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of another method for storing an SRv6 packet according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of another method for storing an SRv6 packet according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of another method for storing an SRv6 packet according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a storage device for an SRv6 packet according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In the description of the present invention, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. "And/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. Further, "a plurality" means two or more. The terms "first", "second", and the like are used to distinguish between objects and do not denote any particular order or importance.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present relevant concepts in a concrete fashion for ease of understanding.
Furthermore, the terms "including" and "having," and any variations thereof, as referred to in the description of the present application, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or modules is not limited to only those steps or modules recited, but may alternatively include other steps or modules not recited, or may alternatively include other steps or modules inherent to such process, method, article, or apparatus.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following description is made by way of specific embodiments with reference to the accompanying drawings.
As described in the background, the storage paths of SRv6 packets are congested, the occupied space is large, and storage efficiency is low.
To solve this technical problem, as shown in fig. 1, an embodiment of the present invention provides a method for storing an SRv6 packet, executed by a storage device for SRv6 packets. The storage method comprises steps S101 to S104.
S101, acquiring an SRv6 packet to be stored.
In some embodiments, the SRv6 packet to be stored may be an SRv6 packet received by the storage device.
S102, converting the destination IPv6 address in the SRv6 packet to be stored from hexadecimal into a decimal target vector.
As a possible implementation, as shown in fig. 2, the storage device may convert the hexadecimal destination IPv6 address into a decimal target vector.
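As one way to make this conversion concrete, the hexadecimal-to-decimal step can be sketched as follows. This is a minimal illustration, not the patented implementation; the function name, the default of 16 dimensions, and the equal-width split are my own assumptions.

```python
import ipaddress

def ipv6_to_decimal_vector(addr, dims=16):
    """Split a 128-bit IPv6 address into `dims` equal-width fields (dims a
    power of two, matching the 2^n-dimension design discussed later) and
    return each field as a decimal integer."""
    value = int(ipaddress.IPv6Address(addr))  # full 128-bit integer
    bits = 128 // dims                        # width of each vector element
    mask = (1 << bits) - 1
    return [(value >> (bits * (dims - 1 - i))) & mask for i in range(dims)]
```

For example, with 8 dimensions each element corresponds to one 16-bit group of the address: `ipv6_to_decimal_vector("2001:db8::1", 8)` yields `[8193, 3512, 0, 0, 0, 0, 0, 1]`.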
S103, predicting the cumulative distribution function value corresponding to the target vector based on the target vector and a preset address index model.
In this embodiment of the present application, the cumulative distribution function value represents the offset between the storage address of the SRv6 packet and the start address of the storage space.
In this embodiment of the present application, the address index model defines the mapping relationship between vector values of target vectors and cumulative distribution function values.
As a possible implementation, the storage device may determine the vector value of the target vector; input the vector value into the first-layer model of the address index model to determine the classification number corresponding to the target vector; and input that classification number into the second-layer model of the address index model to obtain the cumulative distribution function value corresponding to the target vector.
In some embodiments, the vector value of the target vector may be the sum of the elements of the target vector; alternatively, it may be the value of the first element of the target vector.
For example, as shown in fig. 2, the storage device inputs the vector value of the target vector into the first-stage model of the index unit to obtain the classification number corresponding to the target vector; selects a sub-model in the second-stage model of the index unit based on that classification number; and predicts the cumulative distribution function value corresponding to the target vector with the selected sub-model.
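The two-stage prediction above can be sketched as follows. This is a hedged illustration: the two trained models are stood in for by plain callables, the sum-of-elements vector value is one of the options named in the text, and all names are my own rather than the patent's.

```python
def predict_cdf(target_vector, stage1_model, stage2_models):
    """Two-layer learned index: the first-layer model maps the vector value
    to a classification number, which selects the second-layer sub-model
    that predicts the cumulative distribution function (CDF) value."""
    vector_value = sum(target_vector)            # one vector-value choice per the text
    class_id = stage1_model(vector_value)        # first layer -> classification number
    cdf = stage2_models[class_id](vector_value)  # selected sub-model -> CDF value
    return min(max(cdf, 0.0), 1.0)               # clamp to the valid CDF range
```

With placeholder models, say `stage1_model = lambda v: 0 if v < 1000 else 1` and `stage2_models = [lambda v: v / 1000.0, lambda v: 1.0]`, the vector `[100, 200]` has vector value 300, falls into class 0, and yields a CDF value of 0.3.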
S104, determining the target storage address of the SRv6 packet in the storage space based on the cumulative distribution function value and a preset address mapping table, and storing the SRv6 packet at the target storage address.
In some embodiments, the address mapping table defines the mapping relationship between offsets and storage addresses in the storage space.
For example, the address mapping table may include a plurality of slots, each corresponding to a segment of storage addresses in the storage space; for instance, the address mapping table may include 16 slots.
As a possible implementation, the storage device may determine the offset between the target storage address and the start address of the storage space based on the cumulative distribution function value and the size of the storage space, and then determine the target storage address based on the offset and the address mapping table.
For example, the storage device may locate the 9th slot of the address mapping table based on the cumulative distribution function value, determine the offset corresponding to the 9th slot based on the size of each slot, and thereby determine the target storage address in the storage space.
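The slot arithmetic in this example can be sketched as follows, under the assumption of equal-sized slots; the function name and the slot-size parameter are mine, not the patent's.

```python
def cdf_to_storage_address(cdf, base_addr, num_slots, slot_size):
    """Map a predicted CDF value in [0, 1] to a slot of the address mapping
    table, then to a concrete address via offset = slot index * slot size."""
    slot = min(int(cdf * num_slots), num_slots - 1)  # clamp cdf == 1.0 into range
    offset = slot * slot_size
    return slot, base_addr + offset
```

With the 16 slots mentioned above and a slot size of 0x100 bytes, a predicted CDF value of 0.6 lands in slot 9 (since int(0.6 * 16) = 9), giving the address base + 0x900.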
In the storage method provided by the invention, the address index model defines the mapping relationship between the vector value of a target vector and its cumulative distribution function value. From the vector value of the decimal vector converted from the destination IPv6 address of the SRv6 packet to be stored, the model determines a cumulative distribution function value that expresses the offset of the packet's storage address relative to the start address of the storage space, i.e., the packet's storage position in the storage space. A target storage address is then determined from the cumulative distribution function value and a preset address mapping table, fixing a definite position in the storage space for the SRv6 packet. Because different SRv6 packets carry different destination IPv6 addresses, their target vectors have different vector values and thus map to different target storage addresses. This makes storage addresses deterministic, relieves congestion when SRv6 packets are stored, reduces the space they occupy, and improves storage efficiency.
It should be noted that the storage unit processes the 128-bit IPv6 address into a neural-network input vector of moderate length. For each input IPv6 address, the colon-separated hexadecimal groups are converted into a decimal representation to obtain the input vector. Considering the classification accuracy of the neural network, and in order to make the data in each dimension as uniform as possible for a 128-bit IPv6 address, the input vector is designed with 2^n (1 ≤ n ≤ 7) dimensions. The storage device learns the distribution of IPv6 addresses in memory, and the structure of the neural network model it uses determines both indexing efficiency and storage consumption. A feed-forward neural network with a small structure is therefore chosen for fast computation and is built into a two-layer tower structure to handle data volumes on the order of millions. The first layer of the tower is a single feed-forward network that partitions the million-scale data into 1,000 classes. The second layer comprises 1,000 feed-forward networks, one for each first-layer class, each learning the in-memory distribution of the data in its class. The storage device also deploys an enhanced Bitmap structure that maps predicted cumulative distribution function values to memory to obtain the offset of the actual index address. The Bitmap is divided evenly into several parts, and each slot records the order in which IPv6 addresses were inserted into its part, enabling dynamic memory allocation by the storage unit. The mapping position is obtained by multiplying the predicted cumulative distribution function value by the total number of slots in the Bitmap.
Finally, the actual on-chip storage address can be accessed according to the address offset recorded in the slot.
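The enhanced Bitmap described above (evenly divided slots, each recording insertion order to give an intra-slot offset) can be sketched like this. The class name and the Python-list representation are my own; the actual on-chip structure would of course be far more compact.

```python
class SlotBitmap:
    """Sketch of the enhanced Bitmap: the table is divided evenly into
    slots; each slot records the order in which addresses were inserted,
    and that order serves as the address offset within the slot."""
    def __init__(self, num_slots):
        self.slots = [[] for _ in range(num_slots)]

    def insert(self, cdf, ipv6_value):
        # mapping position = predicted CDF value * total number of slots
        slot = min(int(cdf * len(self.slots)), len(self.slots) - 1)
        self.slots[slot].append(ipv6_value)      # insertion order is recorded
        return slot, len(self.slots[slot]) - 1   # (slot index, intra-slot offset)
```

Two addresses whose predicted CDF values fall in the same slot receive consecutive intra-slot offsets, which is what allows dynamic allocation within each part.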
It should be noted that the learned address index model under the SRv6 protocol in a NewIP network, together with its training method, was deployed and tested in software on a small workstation configured with an Intel Xeon E5-1650 v2 (3.50 GHz) and 24 GB of DDR3 SDRAM. To reflect the data volume of a real routing-table index, the experiment used one hundred million routing-table index entries as the training set and two million entirely new entries as the test set. The results show that, at a misjudgment probability of 1%, the mapping model consumes only 26.14 MB of storage, 6.6% of that of a conventional hash table, and can be deployed directly in high-speed on-chip memory. Its lookup speed is far higher than that of traditional hash functions such as MD5 and CityHash256, and of search-tree schemes such as the Patricia trie. The mapping model is therefore feasible in practical applications: the learned address index model based on the SRv6 protocol designed in this invention improves storage efficiency while preserving data-retrieval speed, and has good overall performance.
Optionally, as shown in fig. 3, the method for storing an SRv6 packet provided by the embodiment of the present invention further includes steps S201 to S203 before step S103.
S201, obtaining training data.
In the embodiment of the present application, the training data includes a plurality of SRv6 packets and a storage address of each SRv6 packet.
S202, converting the destination IPv6 addresses in the SRv6 packets into decimal vectors to generate a plurality of training samples.
Each training sample comprises the decimal vector corresponding to one SRv6 packet and that packet's storage address.
As a possible implementation, as shown in fig. 4, the storage device may generate decimal vectors from a plurality of destination IPv6 addresses and then generate a plurality of training samples to form the training set.
S203, training an initial neural network model on the plurality of training samples to obtain the address index model.
As a possible implementation, the storage device may train the model through the following steps to obtain the address index model.
Step one: sort the training samples based on the vector values of the decimal vectors, and determine a sorting label for each training sample.
Step two: train the first-layer model of the new neural network model, taking the vector value of the decimal vector in a training sample as input and the classification number of the training sample as output, to obtain the first-layer model of the address index model.
Step three: train the second-layer model of the new neural network model, taking the classification number of a training sample as input and the cumulative distribution function value corresponding to the storage address of the SRv6 message in the training sample as output, to obtain the second-layer model of the address index model.
Step four: test the address index model during training, using training samples that did not participate in training as test samples.
Step five: if the test precision is greater than or equal to the set precision, end the training process of the address index model to obtain the preset address index model; if the test precision is less than the set precision, repeat steps two, three, and four to retrain the new neural network model.
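The five steps above can be sketched as a two-stage learned index. In the sketch below, plain least-squares linear fits stand in for the two layers of BP neural networks described in the text, purely to keep the example short; the toy key set and ten-class split are illustrative assumptions.

```python
def linfit(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx if sxx else 0.0
    return a, my - a * mx

def train_two_stage_index(keys, n_classes=10):
    """Minimal two-stage learned-index sketch (linear fits stand in for
    the BP neural networks described in the text).  Stage 1 maps a key
    to a class number; that class's stage-2 model maps the key to an
    approximate CDF value in [0, 1]."""
    keys = sorted(keys)
    n = len(keys)
    cdf = [(i + 1) / n for i in range(n)]          # empirical CDF = rank / n
    classes = [min(int(c * n_classes), n_classes - 1) for c in cdf]

    # Step two: fit the stage-1 model (key -> class number).
    stage1 = linfit(keys, classes)

    # Step three: fit one stage-2 model per class (key -> CDF value).
    stage2 = {}
    for c in range(n_classes):
        pts = [(k, v) for k, v, cl in zip(keys, cdf, classes) if cl == c]
        if len(pts) >= 2:
            stage2[c] = linfit([p[0] for p in pts], [p[1] for p in pts])

    def predict(key):
        a1, b1 = stage1
        c = min(max(round(a1 * key + b1), 0), n_classes - 1)
        a2, b2 = stage2.get(c, (0.0, (c + 0.5) / n_classes))
        return min(max(a2 * key + b2, 0.0), 1.0)

    return predict

# Hypothetical toy keys standing in for decimal IPv6 vector values.
predict_cdf = train_two_stage_index(range(0, 1000, 3))
```

The design point mirrors the text: stage 1 only routes a key to a class, and that class's stage-2 model predicts the CDF value that is later scaled by the slot number of the mapping table.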
As a possible implementation manner, before step five, the storage device may further determine the difference between the cumulative distribution function value obtained through the test and the cumulative distribution function value corresponding to the storage address of the SRv6 message in the test sample; if the ratio of this difference to the cumulative distribution function value corresponding to the storage address of the SRv6 message in the test sample is less than or equal to a set value, the test precision is determined to be greater than or equal to the set precision; otherwise, the test precision is determined to be less than the set precision.
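A sketch of this acceptance check, assuming the intended criterion is that a small relative deviation of the tested CDF value from the ground-truth CDF value means the precision requirement is met (the function name and default tolerance are illustrative):

```python
def precision_acceptable(cdf_pred: float, cdf_true: float,
                         tolerance: float = 0.01) -> bool:
    """Return True when the relative deviation of the predicted CDF
    value from the ground-truth CDF value is within the tolerance,
    i.e. the test precision meets the set precision (an assumed
    reading of the criterion in the text)."""
    if cdf_true == 0:
        return cdf_pred == 0
    return abs(cdf_pred - cdf_true) / cdf_true <= tolerance

# A prediction of 0.604 against ground truth 0.6 deviates by about 0.67%.
ok = precision_acceptable(0.604, 0.6)
```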
In this way, the embodiment of the present invention can train an address index model from the training data, thereby enabling prediction of the storage address of an SRv6 message.
It should be noted that the present invention also provides a training method for the learning-type address index model under the SRv6 protocol in the NewIP network. The storage device sorts the IPv6 addresses in the training set by vector value and labels the training set with the resulting rank numbers. The sorted training-set data are then divided evenly into 1,000 classes by rank, and each class is labeled with its class number. The first-layer neural network is trained with the class-number labels, and the resulting model outputs a classification value for the multi-modal data. The trained first-layer network then reclassifies the training set, and the 1,000 classification results serve as the training sets of the second-layer neural networks, each class corresponding to one second-layer network. The rank value of an IPv6 address serves as the label from which the second-layer training learns the distribution of the data in memory. The fully trained network can output the cumulative distribution function value of unknown data within the memory; multiplying this value by the total slot number of the mapping table yields the relative position of the data in the table, from which the actual storage position is obtained via the offset address.
It should be noted that the address mapping table is also used to define a mapping relationship between a storage address in the storage space and a next hop address.
Optionally, after step S104, the method for storing an SRv6 packet according to the embodiment of the present invention further includes steps S301 to S302:
S301, determining a next hop address based on the target storage address of the SRv6 message and an address mapping table.
S302, forwarding the SRv6 message to the equipment corresponding to the next hop address.
Therefore, the embodiment of the invention can forward the SRv6 message based on the address mapping table, has higher storage speed and improves the forwarding speed of the SRv6 message.
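Steps S301 to S302 amount to a table lookup followed by a send. A minimal sketch, assuming a dictionary-shaped address mapping table and an injected `send` callback (all names here are illustrative, not from the source):

```python
def forward_srv6(packet: bytes, storage_addr: int,
                 mapping_table: dict, send) -> str:
    """S301/S302 sketch: look up the next hop recorded for the packet's
    target storage address, then forward the packet to that device."""
    next_hop = mapping_table[storage_addr]  # S301: storage address -> next hop
    send(packet, next_hop)                  # S302: forward to that device
    return next_hop

sent = []
table = {0x2002: "2001:db8::fe"}  # hypothetical mapping entry
hop = forward_srv6(b"srv6-packet", 0x2002, table,
                   lambda p, nh: sent.append((p, nh)))
```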
For example, in the embodiment of the present invention, an example of address mapping using the trained index model is shown in fig. 2, in which the arrowed lines indicate the process of obtaining an index address from actual index data. For an input node, namely an IPv6 address beginning CDCD:910A:2222:…, the address is first converted into a decimal vector. The vector is input into the index unit, the first-layer neural network BPNN1.0 calculates a classification value of 99, and the input vector is then passed to the second-layer neural network BPNN2.99 to calculate a CDF value. Assuming the CDF value calculated by BPNN2.99 is 0.6, the mapping position of the IPv6 address in the Bitmap is 0.6 × 15 (the slot number) = 9. Since position 9 lies in the second part of the Bitmap tile, where the recorded offset is 2, the actual address finally indexed equals the base address corresponding to the second part plus the address offset 2.
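The arithmetic in this example can be reproduced with a short sketch; the `(first_slot, base_address, offset)` partition records below are a hypothetical layout, since the text does not detail how the Bitmap parts and their base addresses are stored.

```python
def slot_from_cdf(cdf_value: float, total_slots: int) -> int:
    """Relative position in the mapping table: the CDF value scaled by
    the total slot number (the figure's example: 0.6 * 15 -> slot 9)."""
    return int(cdf_value * total_slots)

def actual_address(slot: int, partitions) -> int:
    """Resolve a Bitmap slot to a physical address.

    `partitions` is a hypothetical list of (first_slot, base_address,
    offset) records ordered by first_slot; the real layout of the
    Bitmap parts is not detailed in the text."""
    for first_slot, base, offset in reversed(partitions):
        if slot >= first_slot:
            return base + offset
    raise ValueError("slot precedes the first partition")

# Reproducing the worked example: slot 9 falls in the second part,
# whose recorded offset is 2, so the final address is its base plus 2.
partitions = [(0, 0x1000, 0), (8, 0x2000, 2)]  # hypothetical layout
slot = slot_from_cdf(0.6, 15)                  # 9, as in the text
addr = actual_address(slot, partitions)
```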
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not limit the implementation of the embodiments of the present invention in any way.
The following are apparatus embodiments of the present invention; for details not described herein, reference may be made to the corresponding method embodiments above.
Fig. 5 is a schematic structural diagram illustrating a storage apparatus for an SRv6 packet according to an embodiment of the present invention. The SRv6 packet storage apparatus 400 includes a communication module 401 and a processing module 402.
The communication module 401 is configured to obtain an SRv6 packet to be stored.
A processing module 402, configured to perform base conversion on the target IPv6 address in the SRv6 message to be stored to obtain a decimal target vector; predict a cumulative distribution function value corresponding to the target vector based on the target vector and a preset address index model, where the cumulative distribution function value is used to represent the offset between the storage address of the SRv6 message and the initial address of the storage space, and the address index model is used to define the mapping relation between the cumulative distribution function value and the vector value of the target vector; and determine a target storage address of the SRv6 message in the storage space based on the cumulative distribution function value and a preset address mapping table, and store the SRv6 message to the target storage address.
In a possible implementation, the processing module 402 is specifically configured to determine a vector value of the target vector; inputting the vector value of the target vector into a first-layer model of an address index model, and determining a classification number corresponding to the target vector; and inputting the classification number corresponding to the target vector into a second-layer model of the address index model to obtain a cumulative distribution function value corresponding to the target vector.
In a possible implementation manner, the processing module 402 is further configured to obtain training data, where the training data includes a plurality of SRv6 messages and a storage address of each SRv6 message; converting target IPv6 addresses in the SRv6 messages into decimal vectors to generate a plurality of training samples, wherein each training sample comprises a decimal vector corresponding to one SRv6 message and a storage address of the SRv6 message; and training the new neural network model based on a plurality of training samples to obtain an address index model.
In a possible implementation manner, the processing module 402 is further configured to execute the following steps. Step one: sort the training samples based on the vector values of the decimal vectors, and determine a sorting label for each training sample. Step two: train the first-layer model of the new neural network model, taking the vector value of the decimal vector in a training sample as input and the classification number of the training sample as output, to obtain the first-layer model of the address index model. Step three: train the second-layer model of the new neural network model, taking the classification number of a training sample as input and the cumulative distribution function value corresponding to the storage address of the SRv6 message in the training sample as output, to obtain the second-layer model of the address index model. Step four: test the address index model during training, using training samples that did not participate in training as test samples. Step five: if the test precision is greater than or equal to the set precision, end the training process of the address index model to obtain the preset address index model; if the test precision is less than the set precision, repeat steps two, three, and four to retrain the new neural network model.
In a possible implementation manner, the processing module 402 is further configured to determine the difference between the cumulative distribution function value obtained through the test and the cumulative distribution function value corresponding to the storage address of the SRv6 packet in the test sample; if the ratio of the difference to the cumulative distribution function value corresponding to the storage address of the SRv6 packet in the test sample is less than or equal to a set value, determine that the test precision is greater than or equal to the set precision; otherwise, determine that the test precision is less than the set precision.
In a possible implementation manner, the processing module 402 is specifically configured to determine, based on the cumulative distribution function value and the size of the storage space, an offset between a target storage address and a start address of the storage space; and determining a target storage address based on the offset and an address mapping table, wherein the address mapping table is used for defining the mapping relation between the offset and the storage address in the storage space.
In a possible implementation manner, the address mapping table is further configured to define a mapping relationship between a storage address in the storage space and a next hop address; the processing module 402 is further configured to determine a next hop address based on a target storage address of the SRv6 packet and an address mapping table; and the communication module is also used for forwarding the SRv6 message to the equipment corresponding to the next hop address.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 6, the electronic device 500 of this embodiment includes: a processor 501, a memory 502, and a computer program 503 stored in the memory 502 and executable on the processor 501. When executing the computer program 503, the processor 501 implements the steps in the above method embodiments, such as steps S101 to S104 shown in fig. 1. Alternatively, when executing the computer program 503, the processor 501 implements the functions of the modules/units in the above apparatus embodiments, for example, the functions of the communication module 401 and the processing module 402 shown in fig. 5.
Illustratively, the computer program 503 may be partitioned into one or more modules/units that are stored in the memory 502 and executed by the processor 501 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 503 in the electronic device 500. For example, the computer program 503 may be divided into a communication module 401 and a processing module 402 shown in fig. 5.
The processor 501 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 502 may be an internal storage unit of the electronic device 500, such as a hard disk or memory of the electronic device 500. The memory 502 may also be an external storage device of the electronic device 500, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the electronic device 500. Further, the memory 502 may include both an internal storage unit and an external storage device of the electronic device 500. The memory 502 is used to store the computer program and other programs and data required by the terminal, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the description of each embodiment has its own emphasis, and reference may be made to the related description of other embodiments for parts that are not described or recited in any embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal and method may be implemented in other ways. For example, the above-described apparatus/terminal embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated module/unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the methods of the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A storage method of an SRv6 message is characterized by comprising the following steps:
acquiring an SRv6 message to be stored; and performing base conversion on the target IPv6 address in the SRv6 message to be stored to obtain a decimal target vector;
predicting a cumulative distribution function value corresponding to the target vector based on the target vector and a preset address index model; wherein the cumulative distribution function value is used for representing the offset between the storage address of the SRv6 message and the initial address of the storage space, and the address index model is used for defining the mapping relation between the cumulative distribution function value and the vector value of the target vector;
and determining a target storage address of the SRv6 message in the storage space based on the cumulative distribution function value and a preset address mapping table, and storing the SRv6 message to the target storage address.
2. The method according to claim 1, wherein the predicting a cumulative distribution function value corresponding to the target vector based on the target vector and a preset address index model comprises:
determining a vector value of the target vector;
inputting a vector value of a target vector into a first-layer model of the address index model, and determining a classification number corresponding to the target vector;
and inputting the classification number corresponding to the target vector into a second-layer model of the address index model to obtain a cumulative distribution function value corresponding to the target vector.
3. The method according to claim 1, wherein before predicting a cumulative distribution function value corresponding to the target vector based on the target vector and a preset address index model, the method further comprises:
acquiring training data, wherein the training data comprises a plurality of SRv6 messages and a storage address of each SRv6 message;
converting the target IPv6 addresses in the SRv6 messages into decimal vectors to generate a plurality of training samples, wherein each training sample comprises the decimal vector corresponding to one SRv6 message and the storage address of the SRv6 message;
and training the new neural network model based on the plurality of training samples to obtain an address index model.
4. The method for storing the SRv6 packet according to claim 3, wherein the training the new neural network model based on the training samples to obtain an address index model comprises:
step one: sorting the training samples based on the vector values of the decimal vectors, and determining a sorting label for each training sample;
step two: training a first layer model of the new neural network model by taking the vector value of the decimal vector in the training sample as input and the classification number of the training sample as output to obtain a first layer model of the address index model;
step three: training a second-layer model of the new neural network model by taking the classification number of the training sample as input and taking the cumulative distribution function value corresponding to the storage address of the SRv6 message in the training sample as output, to obtain the second-layer model of the address index model;
step four: taking a training sample which does not participate in training as a test sample, and testing an address index model in the training process;
step five: if the test precision is greater than or equal to the set precision, ending the training process of the address index model to obtain the preset address index model; and if the test precision is less than the set precision, repeating step two, step three, and step four to retrain the new neural network model.
5. The method for storing the SRv6 packet according to claim 4, wherein if the test precision is greater than or equal to the set precision, before the training process of the address index model is ended to obtain the preset address index model, the method further comprises:
determining a difference value between the cumulative distribution function value obtained through the test and the cumulative distribution function value corresponding to the storage address of the SRv6 message in the test sample;
if the ratio of the difference value to the cumulative distribution function value corresponding to the storage address of the SRv6 message in the test sample is less than or equal to a set value, determining that the test precision is greater than or equal to the set precision;
otherwise, determining that the test precision is less than the set precision.
6. The method according to claim 1, wherein the determining a target storage address of the SRv6 packet in the storage space based on the cumulative distribution function value and a preset address mapping table comprises:
determining the offset between a target storage address and the initial address of the storage space based on the accumulated distribution function value and the size of the storage space;
and determining the target storage address based on the offset and the address mapping table, wherein the address mapping table is used for defining the mapping relation between the offset and the storage address in the storage space.
7. The SRv6 packet storage method according to claim 1, wherein the address mapping table is further configured to define a mapping relationship between a storage address in the storage space and a next hop address;
after the storing the SRv6 packet to the target storage address, the method further includes:
determining a next hop address based on the target storage address of the SRv6 message and the address mapping table;
and forwarding the SRv6 message to the equipment corresponding to the next hop address.
8. A storage device for SRv6 messages, comprising:
the communication module is used for acquiring an SRv6 message to be stored;
the processing module is used for performing base conversion on the target IPv6 address in the SRv6 message to be stored to obtain a decimal target vector; predicting a cumulative distribution function value corresponding to the target vector based on the target vector and a preset address index model, wherein the cumulative distribution function value is used for representing the offset between the storage address of the SRv6 message and the initial address of the storage space, and the address index model is used for defining the mapping relation between the cumulative distribution function value and the vector value of the target vector; and determining a target storage address of the SRv6 message in the storage space based on the cumulative distribution function value and a preset address mapping table, and storing the SRv6 message to the target storage address.
9. An electronic device, comprising a memory storing a computer program and a processor for invoking and executing the computer program stored in the memory to perform the method of any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202211402994.0A 2022-11-09 2022-11-09 SRv6 message storage method and device and electronic equipment Pending CN115714744A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211402994.0A CN115714744A (en) 2022-11-09 2022-11-09 SRv6 message storage method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN115714744A true CN115714744A (en) 2023-02-24

Family

ID=85232880




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination