CN110138661A - Neural-network-based content store for named data networking - Google Patents


Info

Publication number
CN110138661A
CN110138661A (application CN201910260746.9A)
Authority
CN
China
Prior art keywords
data
node
name
skip list
prefix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910260746.9A
Other languages
Chinese (zh)
Inventor
李卓
刘开华
周美丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University
Priority: CN201910260746.9A
Publication: CN110138661A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00: Arrangements for detecting or preventing errors in the information received
    • H04L 1/12: Arrangements for detecting or preventing errors in the information received by using return channel
    • H04L 1/16: Arrangements in which the return channel carries supervisory signals, e.g. repetition request signals
    • H04L 1/1607: Details of the supervisory signal
    • H04L 1/1614: Details of the supervisory signal using bitmaps
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/74: Address processing for routing
    • H04L 45/742: Route cache; Operation thereof
    • H04L 45/745: Address table lookup; Address filtering
    • H04L 49/00: Packet switching elements
    • H04L 49/30: Peripheral units, e.g. input or output ports

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention provides a neural-network-based content store for named data networking, comprising an on-chip storage unit and an off-chip storage unit. The on-chip storage unit uses high-speed memory and deploys a neural network model that maps data names uniformly, together with a dynamic bitmap (D-bitmap) that maps data packets sharing the same name prefix into the same bucket. The off-chip storage unit uses slower memory and holds multiple dynamic memories, one per slot of the D-bitmap's dynamic index unit; each stores the skip-list information of a name prefix and guides the next lookup of a Data packet with that prefix in the double skip list, improving data retrieval speed. A double skip list structure is additionally deployed to store the storage-location information of each data packet in the learned-bitmap content store, and each double-skip-list node carries a single first-in-first-out (FIFO) pointer and double least-recently-used (LRU) pointers.

Description

Neural-network-based content store for named data networking
Technical field
The invention belongs to the field of high-performance router architecture design, and specifically concerns the design of a novel storage structure for the Content Store on the forwarding plane of named data networking, together with its algorithms.
Background art
With the explosive growth of the Internet and the continuous emergence of new technologies and computing models, the Internet's role is shifting from a "communication channel" to a "data processing platform". To meet future traffic demands such as content orientation, personalization, ultra-high mobility, near-zero latency, and ultra-high traffic density, and to fundamentally resolve the many problems brought about by the current IP-based Internet architecture, named data networking (NDN) emerged as a content-oriented architecture with in-network caching as a defining feature.
By addressing data by name, named data networking not only enables a content-oriented communication model; by deploying caches at routing nodes it also shortens the response time for users accessing cached data, achieves genuine content sharing, greatly reduces network load, and effectively improves data transmission rates. It is therefore regarded as one of the most promising directions for the future Internet architecture.
However, named data networking still faces a series of urgent problems and challenges [1], especially on the routing data plane, where line-speed processing in the Content Store must be supported [2]. Routing-table entries in named data networking are typically composed of digits and characters, and names are variable-length, unbounded strings, so the Content Store must provide data storage on the order of millions of entries. In addition, as a temporary content cache with limited capacity, the Content Store must compress stored data efficiently to reduce storage consumption, and must perform timely cache replacement to free space for newly inserted data packets. Furthermore, packet names are opaque to the transport network: on the forwarding plane, applications in named data networking may adopt different naming schemes according to their own needs, provided they comply with a uniform naming policy, and the Content Store handles two kinds of packets, Interest packets and Data packets, whose processing differs. To complete content forwarding, the Content Store must therefore quickly support different name-based data retrieval algorithms under all kinds of naming schemes [2].
Bibliography:
[1] L. Zhang et al., "Named Data Networking," ACM SIGCOMM Computer Communication Review, vol. 44, no. 3, pp. 66-73, 2014.
[2] Z. Li, Y. Xu, B. Zhang, L. Yan, and K. Liu, "Packet Forwarding in Named Data Networking: Requirements and Survey of Solutions," IEEE Communications Surveys & Tutorials, DOI: 10.1109/COMST.2018.2880444, 2018.
Summary of the invention
The object of the present invention is to provide a novel storage structure, the learned-bitmap content store (Learned bitmap Content Store, LBM-CS), which improves retrieval speed while guaranteeing storage efficiency and simultaneously supports data-cache replacement policies and both the all-sub-name-matching and exact-name-matching retrieval algorithms for named data. The invention achieves this by combining an efficient neural-network-based dynamic index structure, the learned bitmap (Learned bitmap, LBM), with a double skip list (Double skip list, DSL). The technical solution is as follows:
A neural-network-based content store for named data networking comprises an on-chip storage unit and an off-chip storage unit. The on-chip storage unit uses high-speed memory and deploys a neural network model that maps data names uniformly, together with a dynamic bitmap (D-bitmap) that maps data packets sharing the same name prefix into the same bucket. The off-chip storage unit uses slower memory and holds multiple dynamic memories, one per slot of the D-bitmap's dynamic index unit; each stores the skip-list information of a name prefix and guides the next lookup of a Data packet with that prefix in the double skip list, improving data retrieval speed. A double skip list structure is additionally deployed to store the storage-location information of each data packet in the learned-bitmap content store, and each double-skip-list node carries a single FIFO pointer and double LRU pointers.
The neural network model realizes the uniform mapping of data names as follows:
First, training samples are collected: a large number of uniform resource locators (URLs), whose format resembles names in named data networking, serve as sample data. Second, the cumulative distribution function value F(x) of each sample is computed as its label. Then a back-propagation neural network is trained, yielding a model that reflects the distribution of the indexed data. Finally, the name string of a data name is fed to the trained model as input, which outputs a real value between 0 and 1; multiplying this value by the total slot count of the dynamic bitmap gives the mapping label, realizing the uniform mapping of data names.
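The mapping process above can be sketched in Python. An empirical CDF over the training URLs stands in for the trained back-propagation network, and the 8-byte key encoding, class names, and slot count are illustrative assumptions, not part of the invention:

```python
from bisect import bisect_right

def name_to_key(name: str) -> int:
    # Interpret the first 8 bytes of the name string as a big-endian
    # integer, so lexicographic order roughly matches numeric order.
    return int.from_bytes(name.encode("utf-8")[:8].ljust(8, b"\0"), "big")

class LearnedMapper:
    def __init__(self, training_names, num_slots):
        self.keys = sorted(name_to_key(n) for n in training_names)
        self.num_slots = num_slots

    def cdf(self, name: str) -> float:
        # Empirical CDF in [0, 1]: fraction of training keys <= this key
        # (a stand-in for the trained model F(x)).
        return bisect_right(self.keys, name_to_key(name)) / len(self.keys)

    def mapping_label(self, name: str) -> int:
        # Real value in [0, 1] scaled by the D-bitmap slot total.
        return min(int(self.cdf(name) * self.num_slots), self.num_slots - 1)

urls = ["/com/example/a", "/com/example/b", "/org/ndn/data", "/org/ndn/video"]
mapper = LearnedMapper(urls, num_slots=1024)
label = mapper.mapping_label("/org/ndn/data")
```

Because the scaling is by the empirical CDF, names drawn from the training distribution spread roughly evenly over the 1024 slots.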
The double skip list data structure is designed as follows:
The double skip list uses a multilayer structure in which each layer is a doubly linked list, and skip-list nodes are arranged in increasing order of name-prefix ID. Nodes are linked by a single FIFO pointer and double LRU pointers; each node stores its ID, a pointer to the previous node (prev), and a pointer to the next node (next).
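As a minimal sketch of the node layout just described (field names such as fifo_next and lru_prev are assumptions for illustration):

```python
class DSLNode:
    """One double-skip-list node: per-layer prev/next pointers, a single
    FIFO pointer, and double LRU pointers, as described above."""
    def __init__(self, node_id: int, levels: int):
        self.id = node_id
        self.prev = [None] * levels   # backward pointer per layer
        self.next = [None] * levels   # forward pointer per layer
        self.fifo_next = None         # single FIFO pointer
        self.lru_prev = None          # double LRU pointers
        self.lru_next = None

# Link three bottom-layer nodes in increasing ID order.
a, b, c = DSLNode(6, 4), DSLNode(9, 2), DSLNode(25, 3)
a.next[0], b.prev[0] = b, a
b.next[0], c.prev[0] = c, b
```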
Description of the drawings
Fig. 1 is a schematic diagram of the system structure of the learned-bitmap content store, the novel storage structure of the present invention.
Fig. 2 is a flow chart of the Interest packet lookup operation in the learned-bitmap content store storage structure.
Fig. 3 is a flow chart of the Data packet lookup operation in the learned-bitmap content store storage structure.
Fig. 4 is a flow chart of the Data packet insertion operation in the learned-bitmap content store storage structure.
Fig. 5 is a flow chart of the Data packet deletion operation in the learned-bitmap content store storage structure.
Fig. 6 is a flow chart of the data name lookup operation in the double skip list.
Fig. 7 is a diagram of the double skip list data structure in the learned-bitmap content store storage structure.
Fig. 8 is a diagram of a double skip list node structure in the learned-bitmap content store storage structure.
Fig. 9 is a schematic diagram of the dynamic memory structure in the learned-bitmap content store storage structure.
Specific embodiments
The design of the learned-bitmap content store, the novel storage structure for the NDN forwarding plane in the present invention, is shown in Fig. 1. The structure consists of one on-chip storage unit and one off-chip storage unit. The on-chip storage unit is a high-speed memory on which a neural network model (NN Model) is deployed to map data names uniformly, together with a dynamic bitmap (Dynamic-bitmap, D-bitmap) that maps data packets sharing the same name prefix into the same bucket. The off-chip storage unit is a slower memory holding multiple dynamic memories (Packet Store), one per slot of the D-bitmap's dynamic index unit, which store the skip-list information of each name prefix. A double skip list structure is additionally deployed to store the storage-location information of each data packet in the Content Store, and each double-skip-list node carries a single first-in-first-out (FIFO) pointer and double least-recently-used (LRU) pointers. Through the efficient cooperation of these data structures, the designed learned-bitmap content store supports data retrieval, insertion, and deletion.
1. The steps for retrieving an Interest packet with the all-sub-name-matching algorithm in the learned-bitmap content store storage structure designed for the NDN forwarding plane are as follows, for each Interest packet retrieved:
Step 1: input the Interest packet's name prefix and ID: the name prefix and ID of the Interest packet enter the learned-bitmap content store storage structure.
Step 2: all-sub-name matching: all sub-names containing the name prefix are matched in the learned bitmap, and the best-matching name prefix is selected.
Step 3: compute the mapping label: the best-matching name prefix is run through the neural network, yielding an index mapping value between 0 and 1; multiplying this value by the total slot count of the dynamic bitmap gives the mapping label at which the Interest packet is mapped onto the bitmap.
Step 4: compute the base address and offset address: the mapping label divided by the number of slots per bucket, rounded down, gives the serial number of the bucket holding the name prefix, which is the base address; the serial number at which the name prefix entered that bucket is its offset address.
Step 5: determine whether the best-matching name prefix exists in the dynamic bitmap: if the value in the slot indicated by the offset address is 0, the best-matching name prefix is absent from the bitmap, and step 10 is executed; otherwise the best-matching name prefix exists, and step 6 is executed.
Step 6: access the dynamic memory: the dynamic memory is accessed through the base address and offset address obtained for the best-matching name prefix from the learned-bitmap mapping.
Step 7: search for the Data packet in the double skip list: select the optimal starting node and search direction, then search the double skip list for the Data packet.
7-1: input the name prefix and ID of data x: the name prefix and ID of data x are input to the double skip list; the ID of data x is denoted id1. Assume this is the first lookup in the double skip list corresponding to this name prefix.
7-2: search from the head node: following the lookup procedure of a conventional skip list, search from the top layer of the head node of the double skip list, record the nodes on the access path in the dynamic memory, and continue with step 8.
7-3: input the name prefix and ID of data y: the name prefix and ID of data y, which shares its prefix with data x, are input to the double skip list; the ID of data y is denoted id2. Assume this is not the first lookup in the double skip list corresponding to this name prefix.
7-4: select the optimal starting node and search direction: if id1 is greater than id2, select as the optimal node, from the forward turn-back nodes recorded in the dynamic memory, the node whose ID differs least from id2; otherwise select it from the backward turn-back nodes.
7-5: search the double skip list: search from the optimal starting node in the chosen direction, updating the turn-back node information recorded in the dynamic memory as the search proceeds.
Step 8: determine whether a best-matching Data packet exists in the double skip list: search the double skip list for the Data packet corresponding to the ID; if a node with the same ID is found, a best-matching Data packet exists, and step 9 is executed; otherwise step 10 is executed.
Step 9: output the best-matching Data packet: the learned-bitmap content store outputs the best-matching Data packet, and step 11 is executed.
Step 10: report that no best-matching Data packet exists: the learned-bitmap content store reports that no best-matching Data packet exists, and step 11 is executed.
Step 11: retrieval of the Interest packet in the learned-bitmap content store structure ends.
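Steps 4-5 of the procedure above, the base/offset address computation and the D-bitmap presence check, can be sketched as follows; representing each bucket as a Python dict mapping prefixes to their entry order is an illustrative simplification:

```python
class DBitmap:
    """Sketch of the D-bitmap addressing described above: the base
    address is the mapping label divided by the slots per bucket, and
    the offset address is the order in which the prefix entered its
    bucket (names and the dict-per-bucket layout are assumptions)."""
    def __init__(self, num_buckets: int, slots_per_bucket: int):
        self.spb = slots_per_bucket
        self.buckets = [dict() for _ in range(num_buckets)]  # prefix -> offset

    def base(self, mapping_label: int) -> int:
        return mapping_label // self.spb      # bucket serial number

    def insert(self, mapping_label: int, prefix: str):
        bucket = self.buckets[self.base(mapping_label)]
        if prefix not in bucket:
            bucket[prefix] = len(bucket)      # entry order = offset address
        return self.base(mapping_label), bucket[prefix]

    def lookup(self, mapping_label: int, prefix: str):
        bucket = self.buckets[self.base(mapping_label)]
        if prefix in bucket:                  # slot non-zero: prefix present
            return self.base(mapping_label), bucket[prefix]
        return None                           # slot value 0: prefix absent

bm = DBitmap(num_buckets=128, slots_per_bucket=8)
bm.insert(mapping_label=1023, prefix="/org/ndn")
```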
2. The steps for retrieving a Data packet with the exact-name-matching algorithm in the learned-bitmap content store storage structure designed for the NDN forwarding plane are as follows, for each Data packet retrieved:
Step 1: input the Data packet's name prefix and ID: the name prefix and ID of the Data packet enter the learned-bitmap content store storage structure.
Step 2: exact name matching: the name prefix is matched exactly in the learned bitmap through a direct mapping operation.
Step 3: compute the mapping label: the name prefix is run through the neural network, yielding an index mapping value between 0 and 1; multiplying this value by the total slot count of the dynamic bitmap gives the mapping label at which the Data packet is mapped onto the bitmap.
Step 4: compute the base address and offset address: the mapping label divided by the number of slots per bucket, rounded down, gives the serial number of the bucket holding the name prefix, which is the base address; the serial number at which the name prefix entered that bucket is its offset address.
Step 5: determine whether the name prefix exists in the dynamic bitmap: if the value in the slot indicated by the offset address is 0, the name prefix is absent from the bitmap, and step 10 is executed; otherwise the name prefix exists, and step 6 is executed.
Step 6: access the dynamic memory: the dynamic memory is accessed through the base address and offset address obtained for the name prefix from the learned-bitmap mapping.
Step 7: search for the Data packet in the double skip list: select the optimal starting node and search direction, then search the double skip list for the Data packet.
7-1: input the name prefix and ID of data x: the name prefix and ID of data x are input to the double skip list; the ID of data x is denoted id1. Assume this is the first lookup in the double skip list corresponding to this name prefix.
7-2: search from the head node: following the lookup procedure of a conventional skip list, search from the top layer of the head node of the double skip list, record the nodes on the access path in the dynamic memory, and continue with step 8.
7-3: input the name prefix and ID of data y: the name prefix and ID of data y, which shares its prefix with data x, are input to the double skip list; the ID of data y is denoted id2. Assume this is not the first lookup in the double skip list corresponding to this name prefix.
7-4: select the optimal starting node and search direction: if id1 is greater than id2, select as the optimal node, from the forward turn-back nodes recorded in the dynamic memory, the node whose ID differs least from id2; otherwise select it from the backward turn-back nodes.
7-5: search the double skip list: search from the optimal starting node in the chosen direction, updating the turn-back node information recorded in the dynamic memory as the search proceeds.
Step 8: determine whether the Data packet exists in the double skip list: search the double skip list for the Data packet corresponding to the ID; if a node with the same ID is found, the Data packet exists, and step 9 is executed; otherwise step 10 is executed.
Step 9: output the Data packet: the learned-bitmap content store outputs the Data packet, and step 11 is executed.
Step 10: report that the Data packet does not exist: the learned-bitmap content store reports that the Data packet does not exist, and step 11 is executed.
Step 11: retrieval of the Data packet in the learned-bitmap content store structure ends.
3. The steps for inserting a Data packet into the learned-bitmap content store storage structure designed for the NDN forwarding plane are as follows, for each Data packet inserted:
Step 1: input the Data packet's name prefix and ID: the name prefix and ID of the Data packet enter the learned-bitmap content store storage structure.
Step 2: exact name matching: the name prefix is matched exactly in the learned bitmap through a direct mapping operation.
Step 3: compute the mapping label: the name prefix is run through the neural network, yielding an index mapping value between 0 and 1; multiplying this value by the total slot count of the dynamic bitmap gives the mapping label at which the Data packet is mapped onto the bitmap.
Step 4: compute the base address and offset address: the mapping label divided by the number of slots per bucket, rounded down, gives the serial number of the bucket holding the name prefix, which is the base address; the serial number at which the name prefix entered that bucket is its offset address.
Step 5: determine whether the name prefix exists in the dynamic bitmap: if the value in the slot indicated by the offset address is not 0, the name prefix exists in the bitmap, and step 6 is executed; otherwise the name prefix is absent, and step 10 is executed.
Step 6: access the dynamic memory: the dynamic memory is accessed through the base address and offset address obtained for the name prefix from the learned-bitmap mapping.
Step 7: search for the Data packet in the double skip list: select the optimal starting node and search direction, then search the double skip list for the Data packet.
7-1: input the name prefix and ID of data x: the name prefix and ID of data x are input to the double skip list; the ID of data x is denoted id1. Assume this is the first lookup in the double skip list corresponding to this name prefix.
7-2: search from the head node: following the lookup procedure of a conventional skip list, search from the top layer of the head node of the double skip list, record the nodes on the access path in the dynamic memory, and continue with step 8.
7-3: input the name prefix and ID of data y: the name prefix and ID of data y, which shares its prefix with data x, are input to the double skip list; the ID of data y is denoted id2. Assume this is not the first lookup in the double skip list corresponding to this name prefix.
7-4: select the optimal starting node and search direction: if id1 is greater than id2, select as the optimal node, from the forward turn-back nodes recorded in the dynamic memory, the node whose ID differs least from id2; otherwise select it from the backward turn-back nodes.
7-5: search the double skip list: search from the optimal starting node in the chosen direction, updating the turn-back node information recorded in the dynamic memory as the search proceeds.
Step 8: determine whether the Data packet exists in the double skip list: search the double skip list for the Data packet corresponding to the ID; if no node with the same ID is found, the Data packet is absent, and step 9 is executed; otherwise step 11 is executed.
Step 9: insert into the double skip list: the Data packet is inserted into the double skip list in increasing ID order, the name prefix is recorded in the dynamic memory, and step 11 is executed.
Step 10: create a double skip list: a new double skip list is created for the name prefix, the Data packet is inserted into it, the name prefix is recorded in the dynamic memory, and step 11 is executed.
Step 11: insertion of the Data packet into the learned-bitmap content store structure ends.
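The insertion decision in steps 8-10 above can be sketched with a sorted list of IDs standing in for the per-prefix double skip list (class and method names are assumptions):

```python
from bisect import insort

class PrefixStore:
    """Sketch of insertion steps 8-10: one list per name prefix, kept in
    increasing ID order; a sorted Python list is an illustrative proxy
    for the double skip list."""
    def __init__(self):
        self.lists = {}   # name prefix -> sorted IDs

    def insert(self, prefix: str, data_id: int) -> bool:
        ids = self.lists.get(prefix)
        if ids is None:                 # step 10: create a new list for the prefix
            self.lists[prefix] = [data_id]
            return True
        if data_id in ids:              # step 8 -> 11: already stored, nothing to do
            return False
        insort(ids, data_id)            # step 9: insert in increasing ID order
        return True

store = PrefixStore()
for i in (7, 3, 25):
    store.insert("/A/B/C", i)
```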
4. The steps for deleting a Data packet from the learned-bitmap content store storage structure designed for the NDN forwarding plane are as follows, for each Data packet deleted:
Step 1: input the Data packet's name prefix and ID: the name prefix and ID of the Data packet enter the learned-bitmap content store storage structure.
Step 2: exact name matching: the name prefix is matched exactly in the learned bitmap through a direct mapping operation.
Step 3: compute the mapping label: the name prefix is run through the neural network, yielding an index mapping value between 0 and 1; multiplying this value by the total slot count of the dynamic bitmap gives the mapping label at which the Data packet is mapped onto the bitmap.
Step 4: compute the base address and offset address: the mapping label divided by the number of slots per bucket, rounded down, gives the serial number of the bucket holding the name prefix, which is the base address; the serial number at which the name prefix entered that bucket is its offset address.
Step 5: determine whether the name prefix exists in the dynamic bitmap: if the value in the slot indicated by the offset address is 0, the name prefix is absent from the bitmap, and step 10 is executed; otherwise the name prefix exists, and step 6 is executed.
Step 6: access the dynamic memory: the dynamic memory is accessed through the base address and offset address obtained for the name prefix from the learned-bitmap mapping.
Step 7: search for the Data packet in the double skip list: select the optimal starting node and search direction, then search the double skip list for the Data packet.
7-1: input the name prefix and ID of data x: the name prefix and ID of data x are input to the double skip list; the ID of data x is denoted id1. Assume this is the first lookup in the double skip list corresponding to this name prefix.
7-2: search from the head node: following the lookup procedure of a conventional skip list, search from the top layer of the head node of the double skip list, record the nodes on the access path in the dynamic memory, and continue with step 8.
7-3: input the name prefix and ID of data y: the name prefix and ID of data y, which shares its prefix with data x, are input to the double skip list; the ID of data y is denoted id2.
7-4: select the optimal starting node and search direction: if id1 is greater than id2, select as the optimal node, from the forward turn-back nodes recorded in the dynamic memory, the node whose ID differs least from id2; otherwise select it from the backward turn-back nodes.
7-5: search the double skip list: search from the optimal starting node in the chosen direction, updating the turn-back node information recorded in the dynamic memory as the search proceeds.
Step 8: determine whether the Data packet exists in the double skip list: search the double skip list for the Data packet corresponding to the ID; if no node with the same ID is found, the Data packet is absent, and step 10 is executed; otherwise step 9 is executed.
Step 9: delete the Data packet's node, and continue with step 10.
Step 10: deletion of the Data packet from the learned-bitmap content store structure ends.
5. The steps for the bidirectional lookup of name data in the learned-bitmap content store storage structure designed for the NDN forwarding plane are as follows, for each name datum looked up:
Step 1: input the name prefix and ID of data x: the name prefix and ID of data x are input to the double skip list; the ID of data x is denoted id1. Assume this is the first lookup in the double skip list corresponding to this name prefix.
Step 2: search from the head node: following the lookup procedure of a conventional skip list, search from the top layer of the head node of the double skip list until the node with id1 is found, and return the actual storage address of data x.
Step 3: record the turn-back node information: the forward and backward turn-back nodes of every layer encountered during the id1 search are recorded in the dynamic memory of the corresponding name prefix.
Step 4: input the name prefix and ID of data y: the name prefix and ID of data y, which shares its prefix with data x, are input to the double skip list; the ID of data y is denoted id2.
Step 5: determine whether the IDs are equal: if id1 = id2, data y equals data x, and step 10 is executed; otherwise step 6 is executed.
Step 6: select the optimal starting node and search direction: if id1 is greater than id2, select as the optimal node, from the forward turn-back nodes recorded in the dynamic memory, the node whose ID differs least from id2; otherwise select it from the backward turn-back nodes.
Step 7: search the double skip list: search from the optimal starting node in the chosen direction, updating the turn-back node information recorded in the dynamic memory as the search proceeds.
Step 8: determine whether the Data packet exists: if a node with an equal ID exists in the double skip list, the Data packet exists, and step 9 is executed; otherwise the Data packet does not exist, and step 10 is executed.
Step 9: output the actual storage address: the storage address of the Data packet's actual content is returned, and step 11 is executed.
Step 10: report that the Data packet does not exist, and continue with step 11.
Step 11: the lookup of the data name in the double skip list structure ends.
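The guided bidirectional search of steps 4-8 can be sketched as follows; a sorted list replaces the double skip list and the dynamic memory is reduced to the index of the most recently found node, which is a deliberate simplification:

```python
class GuidedList:
    """Sketch of the bidirectional lookup: a repeat lookup for the same
    prefix starts from the most recently found position (standing in
    for the recorded turn-back nodes) and scans forward or backward
    depending on whether the new ID is larger or smaller."""
    def __init__(self, ids):
        self.ids = sorted(ids)
        self.recent = 0               # index of the last node found

    def find(self, target_id: int) -> bool:
        i = self.recent
        if self.ids[i] < target_id:               # new ID larger: search forward
            while i < len(self.ids) and self.ids[i] < target_id:
                i += 1
        else:                                      # new ID smaller: search backward
            while i > 0 and self.ids[i] > target_id:
                i -= 1
        found = i < len(self.ids) and self.ids[i] == target_id
        if found:
            self.recent = i                        # update the dynamic-memory record
        return found

dsl = GuidedList([3, 7, 9, 25])
first = dsl.find(25)     # initial lookup scans from the front
second = dsl.find(7)     # repeat lookup: 7 < 25, so it scans backward
```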
The present invention achieves fast, compressed processing of data. Its key points are the use of a neural-network mapping in the Content Store to improve storage efficiency and to support different name-based data retrieval algorithms, the design of a special double-skip-list data structure that supports fast cache replacement, and a dynamic-memory data structure that can guide data retrieval. The specific designs and embodiments are as follows:
(1) Design of the double-skip-list data structure supporting fast cache replacement
The double skip list designed in the present invention improves on the conventional skip list and is therefore similar to it: it also uses a multilayer structure, each layer consisting of a doubly linked list, with skip-list nodes arranged in increasing order of name-prefix ID; the specific structure is shown in Fig. 7. Nodes are linked by a single FIFO pointer and double LRU pointers, and each node holds an ID, a pointer to the previous node (prev), a pointer to the next node (next), and the node address. Taking the node with ID 6 in the double skip list of Fig. 7 as an example, the node structure is shown in Fig. 8.
(2) Design of a dynamic memory data structure for guiding data retrieval
The dynamic memory structure in the present invention records the turn-back node information before and after a node during a two-way skip list search, so as to guide the next lookup of the same data name prefix. A dynamic memory record contains: the name prefix, the forward nodes (prev_nodes), the forward node addresses (prev_node_addr), the backward nodes (next_nodes), the backward node addresses (next_node_addr), and the current node (recent_node). The name prefix is extracted from the packet name <name prefix, ID>; the forward and backward nodes are the key node pairs at which the search turns back during a lookup, and the forward and backward node addresses are their node addresses; the current node is the node found most recently. Its structural diagram is shown in Fig. 9. Suppose the node being looked up is h and X is a node in the two-way skip list: if X < h ≤ the ID of X's next-hop node, then X is recorded as a forward node and the ID of X's next-hop node as a backward node; the forward node addresses are the addresses of the corresponding X nodes, the backward node addresses are the addresses of the corresponding next-hop nodes, and the current node is h. The process is illustrated below with the data name /A/B/C/7 arriving for the first time at the two-way skip list data structure shown in Fig. 7.
The extracted ID is 7, and the access path is head-6[4]-NIL[4]-6[3]-25[3]-6[2]-9[2]-6[1]-7. The name prefix recorded in the dynamic memory is then /A/B/C, the backward nodes are <NIL, 25, 9, 7> with corresponding backward node addresses <0xB, 0x9, 0x4, 0x3>, the forward nodes are <6, 6, 6, 6> with corresponding forward node addresses <0x2, 0x2, 0x2, 0x2>, and the current node is 7.
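The dynamic memory record left by this first lookup of /A/B/C/7 can be written out directly. The dict layout below is a sketch (an assumption, not the patent's storage format); the values are taken from the example above, with `None` standing in for NIL.

```python
# Dynamic memory record produced by the first lookup of /A/B/C/7
# against the Fig. 7 skip list (values from the worked example).
record = {
    "name_prefix":    "/A/B/C",
    "prev_nodes":     [6, 6, 6, 6],          # forward (turn-back) nodes per level
    "prev_node_addr": [0x2, 0x2, 0x2, 0x2],
    "next_nodes":     [None, 25, 9, 7],      # backward nodes; None stands for NIL
    "next_node_addr": [0xB, 0x9, 0x4, 0x3],
    "recent_node":    7,                     # the node just found
}

# A later packet with the same prefix reuses this record to pick its start node.
print(record["next_nodes"])   # prints: [None, 25, 9, 7]
```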
(3) Data name prefixes obtain mapping labels through efficient neural network retrieval
Efficient retrieval of data name prefixes by the neural network model hinges on training, from samples, a neural network model that meets the requirements. First, samples are collected to train the neural network: in the present invention, a large number of uniform resource locators (Uniform Resource Locator, URL), whose format is similar to named data network names, serve as sample data. Using these samples together with a back-propagation neural network, a cumulative distribution function that reflects the distribution of the index data is learned. By a mathematical theorem, if x obeys an arbitrary distribution and is fed into its own cumulative distribution function F(x), then the transformed value y = F(x) obeys U(0,1). Therefore, the name prefix string of a sample is treated as a numeric value and fed into its cumulative distribution function, yielding a mapping value between 0 and 1; multiplying this value by the total slot number of the modified bitmap gives a uniformly mapped label.
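The mapping step itself (label = F(x) × total slot number) can be illustrated with an empirical CDF standing in for the trained back-propagation network, which the patent trains to approximate exactly this function. Everything below is a sketch: the sample URLs, the byte-encoding of names, and the slot count of 16 are assumptions.

```python
import bisect

def name_to_number(name: str) -> int:
    """Treat a name prefix string as a numeric value (big-endian bytes)."""
    return int.from_bytes(name.encode(), "big")

# Sample URLs stand in for the training set; the trained neural network
# would approximate the empirical CDF F(x) computed from them.
samples = sorted(name_to_number(u) for u in [
    "/com/example/a", "/com/example/b", "/org/demo/x",
    "/org/demo/y", "/net/test/1", "/net/test/2",
])

def F(x: int) -> float:
    """Empirical cumulative distribution function of the sample keys."""
    return bisect.bisect_right(samples, x) / len(samples)

NUM_SLOTS = 16  # total slot number of the modified bitmap (assumed)

def mapping_label(name: str) -> int:
    """y = F(x) is ~U(0,1); scaling by the slot count gives the label."""
    return min(int(F(name_to_number(name)) * NUM_SLOTS), NUM_SLOTS - 1)

labels = [mapping_label(u) for u in ["/com/example/a", "/org/demo/x", "/net/test/2"]]
print(labels)   # every label falls in [0, NUM_SLOTS)
```

Because F is monotone, keys drawn from the sample distribution spread roughly uniformly over the slots, which is the point of the U(0,1) transform cited in the text.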
(4) Fast two-way search in the two-way skip list
Using the node information in the dynamic memory (name prefix, forward nodes, forward node addresses, backward nodes, backward node addresses, and current node), the search for data with the same name prefix is guided as follows. For a data item j with the same name prefix, first compare the ID of j with the ID of the current node. If ID < current node ID, select from the forward nodes, by ID number, the turn-back node closest to data j as the best start node; if ID > current node ID, select from the backward nodes, by ID number, the turn-back node closest to data j as the best start node. If the ID of the selected best start node is less than the ID of j, search backward from that node; otherwise, search forward. During the search, the node information recorded above is updated in time. The process is illustrated with the data names /A/B/19, /A/B/26 and /A/B/21 arriving at the two-way skip list structure shown in Fig. 4, with the dynamic memory holding the records shown in Fig. 9.
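The selection of the best start node from the recorded turn-back nodes can be sketched as follows. It reuses the record layout assumed earlier (list index i corresponds to skip list level 4 − i, top to bottom) and is an illustration, not the patent's exact procedure.

```python
def best_start(record, j_id):
    """Pick the recorded turn-back node closest to j_id, plus search direction."""
    nil = float("inf")  # NIL treated as infinitely large
    if j_id < record["recent_node"]:
        candidates = record["prev_nodes"]   # j smaller: choose among forward nodes
    else:
        candidates = record["next_nodes"]   # j larger: choose among backward nodes
    ids = [nil if c is None else c for c in candidates]
    # index of the candidate whose ID is closest to j_id
    level = min(range(len(ids)), key=lambda i: abs(ids[i] - j_id))
    start = ids[level]
    # start node ID < j: search backward (larger IDs); otherwise search forward
    direction = "backward" if start < j_id else "forward"
    return start, level, direction

# Record left by the /A/B/C/7 lookup of the running example:
record = {"recent_node": 7,
          "prev_nodes": [6, 6, 6, 6],
          "next_nodes": [None, 25, 9, 7]}

# /A/B/19: 19 > 7, so backward nodes; (19-9) > (25-19), so 25 is the start node,
# and 25 > 19 means the search proceeds forward, matching the worked example.
print(best_start(record, 19))   # prints: (25, 1, 'forward')
```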
When looking up /A/B/19, the extracted name ID is 19. Since 19 > 7, the best node and level are selected from the backward nodes. As 9 < 19 < 25 and (19 - 9) > (25 - 19), the search begins from 25[3], i.e. i = 3, and since 19 < 25, the search proceeds forward. The path is: 25[3]-6[3]-25[2]-17[2]-25[1]-21[1]-19. The node information in the dynamic memory is updated as follows: the backward nodes are <NIL, 25, 25, 21> with corresponding backward node addresses <0xB, 0x9, 0x9, 0x8>, the forward nodes are <6, 6, 17, 19> with corresponding forward node addresses <0x2, 0x2, 0x6, 0x7>, and the current node is 19.
When looking up /A/B/26, the extracted name ID is 26, and 19 < 26, so the best node and level are selected from the backward nodes. As 25 < 26 < NIL and (26 - 25) < (NIL - 26), the search begins from 25[3], i.e. i = 3, and since 26 > 25, the search proceeds backward. The path is: 25[3]-NIL[3]-25[2]-NIL[2]-25[1]-26. The node information in the dynamic memory is updated as follows: the backward nodes are <NIL, NIL, NIL, 26> with corresponding backward node addresses <0xB, 0xB, 0xB, 0xA>, the forward nodes are <6, 25, 25, 25> with corresponding forward node addresses <0x2, 0x9, 0x9, 0x9>, and the current node is 26.
When looking up /A/B/21, the extracted name ID is 21, and 26 > 21, so the best node and level are selected from the forward nodes. As 6 < 21 < 25 and (25 - 21) < (21 - 6), the search begins from 25[3], i.e. i = 3, and since 25 > 21, the search proceeds forward. The path is: 25[3]-6[3]-25[2]-17[2]-25[1]-21. The node information in the dynamic memory is updated as follows: the backward nodes are <NIL, 25, 25, 25> with corresponding backward node addresses <0xB, 0x9, 0x9, 0x9>, the forward nodes are <6, 6, 17, 21> with corresponding forward node addresses <0x2, 0x2, 0x6, 0x8>, and the current node is 21.

Claims (1)

1. A named data network content store based on a neural network, comprising: an on-chip storage unit and an off-chip storage unit. The on-chip storage unit uses high-speed memory and deploys a neural network model on chip to realize the uniform mapping of data names, and deploys a modified bitmap (D-bitmap) to map Data packets containing the same name prefix into the same bucket. The off-chip storage unit uses slow memory; deployed on it are multiple dynamic memories, each corresponding to a slot of the modified bitmap dynamic index unit, to store the skip list information of each name prefix and guide the next lookup in the two-way skip list of Data packets with the same name prefix, thereby improving data retrieval speed. In addition, a two-way skip list structure is deployed to store the storage location information of Data packets in the learned-bitmap content store, and each two-way skip list node carries a first-in-first-out queue (FIFO) single pointer and a least-recently-used (LRU) double pointer.
The neural network model realizes the uniform mapping of data names as follows:
First, samples are collected to train the neural network; a large number of uniform resource locators (URL), similar in format to named data network names, serve as sample data. Second, the cumulative distribution function value F(x) of each sample is computed as its label. Then a back-propagation neural network is trained to obtain a neural network model that reflects the distribution of the index data. Finally, the data name string is used as input to the trained neural network model, yielding a real value between 0 and 1; multiplying this value by the total slot number of the modified bitmap gives the mapping label, which realizes the uniform mapping of the data name.
The two-way skip list data structure is designed as follows:
The two-way skip list uses a multilayer structure in which every layer consists of a doubly linked list, and the skip list nodes are arranged in increasing order of the ID number of the name prefix. Skip list nodes are connected by an LRU double pointer and a FIFO single pointer, and each node stores the ID, a pointer to the forward node (prev), and a pointer to the backward node (next).
CN201910260746.9A 2019-04-02 2019-04-02 Name data network content storage pool neural network based Pending CN110138661A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910260746.9A CN110138661A (en) 2019-04-02 2019-04-02 Name data network content storage pool neural network based

Publications (1)

Publication Number Publication Date
CN110138661A true CN110138661A (en) 2019-08-16

Family

ID=67568997

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910260746.9A Pending CN110138661A (en) 2019-04-02 2019-04-02 Name data network content storage pool neural network based

Country Status (1)

Country Link
CN (1) CN110138661A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110851658A (en) * 2019-10-12 2020-02-28 天津大学 Tree index data structure, content storage pool, router and tree index method
WO2023028833A1 (en) * 2021-08-31 2023-03-09 深圳市大疆创新科技有限公司 Data processing method and apparatus, device, program, and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101364237A (en) * 2008-09-05 2009-02-11 成都市华为赛门铁克科技有限公司 Multi-keyword matching method and device
CN104410655A (en) * 2014-09-26 2015-03-11 清华大学 Named mechanism based storage system and method of distributed network
CN107832343A (en) * 2017-10-13 2018-03-23 天津大学 A kind of method of MBF data directories structure based on bitmap to data quick-searching
CN107908357A (en) * 2017-10-13 2018-04-13 天津大学 Name data network Forwarding plane PIT storage organizations and its data retrieval method
US20180278522A1 (en) * 2017-03-24 2018-09-27 Cisco Technology, Inc. System and method to facilitate content forwarding using bit index explicit replication (bier) in an information-centric networking (icn) environment
CN109271390A (en) * 2018-09-30 2019-01-25 天津大学 Index data structure based on neural network and data retrieval method thereof


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190816