CN108092803B - Method for realizing network element level parallelization service function in network function virtualization environment


Info

Publication number
CN108092803B
Authority
CN
China
Prior art keywords
service function
message
network element
queue
service
Prior art date
Legal status
Active
Application number
CN201711292739.4A
Other languages
Chinese (zh)
Other versions
CN108092803A (en)
Inventor
东方
师晓敏
罗军舟
汪立鹤
王睿
李玉萍
Current Assignee
Southeast University
China Information Consulting and Designing Institute Co Ltd
Original Assignee
Southeast University
China Information Consulting and Designing Institute Co Ltd
Priority date
Application filed by Southeast University, China Information Consulting and Designing Institute Co Ltd filed Critical Southeast University
Priority to CN201711292739.4A priority Critical patent/CN108092803B/en
Publication of CN108092803A publication Critical patent/CN108092803A/en
Application granted granted Critical
Publication of CN108092803B publication Critical patent/CN108092803B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/12Discovery or management of network topologies
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45595Network integration; Enabling network access in virtual machine instances

Abstract

The invention discloses a method for realizing a network element level parallelization service function in a network function virtualization environment, comprising the following steps: traverse the service function chain and run a service function decomposition analysis algorithm that decomposes each service function into basic message processing units called network elements, then analyze and synthesize the network element characteristics within each service function to determine and store each service function's operations on messages and its operation domain information. Using the obtained operation and operation domain information, run a parallelizability judgment algorithm to determine the service function combinations in the chain that can be optimized in parallel, then execute a parallel optimization algorithm to parallelize those combinations. Finally, combine the non-parallelizable service functions with the parallel-optimized ones in the order of the original service functions to create a new service function chain. The new chain effectively shortens the original chain, improves the parallelism among service functions, and significantly reduces the delay overhead of messages traversing the chain.

Description

Method for realizing network element level parallelization service function in network function virtualization environment
Technical Field
The invention relates to the field of Network Function Virtualization (NFV) and Service Function Chaining (SFC), in particular to a method for realizing a network element-level parallelization service function in a network function virtualization environment.
Background
In a data center, many network functions covering layers 4 to 7 are typically deployed in the form of physical special-purpose equipment or virtual machines, chiefly firewalls, network address translators (NATs), and the like. Traffic in a data center, from generation through transmission to termination, must be processed by a variety of service functions. At present these network functions are mainly implemented on special-purpose equipment, which leads to poor scalability, poor flexibility, long upgrade cycles, and high purchase and operating costs. To solve these problems, a new concept, Network Function Virtualization (NFV), emerged. NFV uses virtualization technology to provide new ways of designing, deploying, and managing network services, addressing many problems of traditional proprietary devices. The main idea of NFV is to separate physical network devices from the functions running on them. This means that a Network Function (NF) can run as an ordinary software instance on Internet Service Provider (ISP) equipment, allowing many network functions to be consolidated onto commodity servers, switches, and storage located in data centers, distributed network nodes, or end-user facilities. In this way a given service can be decomposed into a set of virtual network functions implemented in software running on one or more commercially available physical servers. Furthermore, Virtual Network Function (VNF) instances can be relocated and instantiated at different network locations without purchasing and installing new hardware. NFV therefore offers great flexibility, can further open network functions and services to users and other services, and can deploy or support new network services faster and more cheaply.
To achieve these advantages, NFV differs from today's traditional proprietary physical devices in the following ways:
(1) Decoupling of software and hardware. Since a network element is no longer an integrated entity of proprietary hardware and proprietary software, the two can evolve independently, with separate development schedules and separate maintenance.
(2) Flexible network function deployment. The separation of software from hardware facilitates the reallocation and sharing of infrastructure resources so that hardware and software can perform different functions at different times. This helps service providers deploy new network services more quickly on the same physical platform. Thus, a service component can be instantiated on any NFV-enabled device in the network and its connections can be set in a flexible way.
(3) Dynamic scaling. Decomposing network functions into instantiable software service components makes it possible to scale actual VNF performance more dynamically and at finer granularity.
The concept of the Service Function Chain (SFC) likewise developed from the emergence of NFV technology. An SFC defines and instantiates a set of chained, ordered Service Functions (SFs, equivalent to NFs in NFV) and is responsible for distributing traffic of specific services/applications among the SFs (data plane) and for their control and monitoring (control plane). SFC is a supporting technology that flexibly manages specific service/application flows, provides solutions for flow classification, and enforces appropriate policies along the flow route according to service requirements and the availability status of the network.
Recently, on top of using the network function virtualization technology NFV to achieve flexible VNF deployment, elastic scaling, and cost reduction, SFC has further optimized the flexibility and extensibility of SFC network transmission using Software Defined Networking (SDN). In essence, NFV realizes the transition from traditional proprietary SF devices to Virtual Network Functions (VNFs), providing more efficient and effective SF deployment and orchestration; SDN decouples the control and data planes and introduces programming abstractions applicable to SFC for dynamic control of the SFC topology and of flow routing across SFs.
Disclosure of Invention
Aiming at the defects of the prior art, the invention discloses a method for realizing a network element level parallelization service function in a network function virtualization environment.
The method comprises the following steps:
step 1, acquiring and traversing a service function chain, running a service function decomposition analysis algorithm, decomposing each service function into basic message processing units called network elements, analyzing and integrating network element characteristics in each service function, determining and storing the operation of each service function on a message and operation domain information;
step 2, according to the operation and operation domain information of each service function obtained in the step 1, running a parallelizable judgment algorithm and determining a service function combination capable of performing parallel optimization in a service function chain;
step 3, executing a parallel optimization algorithm on the service function queue which can be paralleled and obtained in the step 2, generating a new service function, and adding a new service function chain queue; adding the non-parallelizable service function into a new service function chain queue;
step 4, combining the new service functions generated in step 3 with the non-parallelizable service functions, arranging them in the original service function order, and outputting the new, parallel-optimized service function chain.
The step 1 comprises the following steps:
step 1-1, acquiring a service function chain input by an administrator;
step 1-2, reading the first service function in the service function chain and creating a hash map HashMap&lt;key, values&gt; named SFHM (Service Function HashMap), in which each service function serves as a unique key whose values store that function's operation and operation domain information, comprising a write operation domain and a read operation domain;
step 1-3, judging whether the next service function is in the service function chain, if so, entering step 1-4; otherwise, entering the step 1-5;
step 1-4, operating a service function decomposition analysis algorithm:
first decompose the current service function into network elements, then analyze and synthesize, according to rules, each network element's operations on messages and its operation domain information, so as to determine the service function's operations on messages and its operation domains. In the decomposition process, basic message processing units are combined as network elements according to the operations each part of the service function performs on messages, including acquiring, outputting, reading, writing (modifying or adding/deleting message fields), and discarding messages, thereby forming a directed acyclic graph that implements the specific service function.
Analysis and synthesis: this process considers the operation and operation domain information of every network element composing the service function to determine the operations and operation domains the service function may perform. A network element's operation and operation domain information depends on its characteristics and on user rules; for example, a firewall (FW) classifies (reads) messages according to user-set rules and header information, so the message fields it reads are determined by the user settings. In the synthesis, all network elements other than the message acquisition, message output, and message discard network elements follow these rules: when network elements perform read operations, take the union of the read operation domains of all network elements of the service function; when network elements perform write operations, take the union of the write operation domains of all network elements of the service function. A service function decomposition analysis algorithm implementing these rules uses each network element's operation and operation domain information to determine the service function's possible operations on messages and operation domain information, and stores them into the SFHM;
step 1-5, creating a parallelizable service function Queue PQ (Parallel Queue) and a New service function Chain queue NC (New Chain), where PQ stores the current parallelizable service function combination and NC stores the new, parallel-optimized service function chain; reading the first service function in the chain together with its operation domain information, adding it to PQ, and marking it as both queue head and queue tail; setting the End flag of the service function chain to 0, and entering step 2.
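As an illustrative sketch of the synthesis rule in steps 1-2 through 1-4, the SFHM can be modeled as a plain dictionary whose values are the unions of the read and write operation domains of a service function's network elements (the acquisition, output, and discard elements contributing none). All function names, field labels, and data shapes below are hypothetical, not taken from the patent.

```python
def synthesize(elements):
    """Union the read/write operation domains of a service function's
    network elements, per the rules of step 1-4. `elements` is a list of
    dicts like {"op": "read", "fields": {"ip.src"}} (hypothetical shape)."""
    read_domain, write_domain = set(), set()
    for e in elements:
        if e["op"] == "read":
            read_domain |= e["fields"]    # union of all read domains
        elif e["op"] == "write":
            write_domain |= e["fields"]   # union of all write domains
        # acquire/output/discard elements add no operation domain
    return {"read": read_domain, "write": write_domain}

# SFHM: each service function (key) maps to its operation-domain info (values)
sfhm = {
    "FW": synthesize([
        {"op": "read", "fields": {"ip.src", "ip.dst", "tcp.dport"}},
        {"op": "discard", "fields": set()},
    ]),
    "NAT": synthesize([
        {"op": "read", "fields": {"ip.src"}},
        {"op": "write", "fields": {"ip.src", "tcp.sport"}},
    ]),
}
```

Here the hypothetical FW entry ends up with a read domain of three header fields and an empty write domain, while the NAT entry records writes to `ip.src` and `tcp.sport`.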
The step 2 comprises the following steps:
step 2-1, judging whether the parallelizable service function queue PQ is empty; if it is empty, entering step 2-2; otherwise, entering step 2-3;
step 2-2, adding the currently read service function into a queue PQ of the parallel service function, and pointing the head and the tail of the queue to the current service function;
step 2-3, reading the next service function in the service function chain and the operation and operation domain information thereof;
step 2-4, running the parallelizability judgment algorithm, whose main content is as follows: the currently read service function can be parallelized with all service functions in the parallelizable queue PQ if and only if, leaving message discarding aside, the message operation domains of these service functions have no intersection, or, for every shared message field (i.e., where the operation domains intersect), the operations taken in PQ order (the current service function being last) are read-then-write or write-then-write. Judge whether the currently read service function can be parallelized with all service functions in PQ; if so, enter step 2-5; otherwise, enter step 3;
step 2-5, adding the current service function into a queue PQ of the service function which can be paralleled, marking the queue as the tail, and entering step 4;
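The judgment of step 2-4 can be sketched as a pairwise check between the candidate service function and every member of PQ, using the SFHM entries. This is a hypothetical illustration under the interpretation that only a write by an earlier function followed by a read of the same field by a later one blocks parallelism, while disjoint domains and write-then-write orders (resolved later by merge priority) are allowed.

```python
def can_parallelize(earlier, later):
    """earlier/later: {"read": set, "write": set}, taken in PQ order.
    Blocked only when a field written by `earlier` is read by `later`
    (a true data dependency); everything else is parallelizable."""
    return not (earlier["write"] & later["read"])

def joins_pq(pq, candidate):
    """Step 2-4: the candidate joins PQ only if it can be parallelized
    with every service function already in the queue."""
    return all(can_parallelize(sf, candidate) for sf in pq)

fw  = {"read": {"ip.dst"}, "write": set()}
nat = {"read": set(), "write": {"ip.dst"}}
print(joins_pq([fw], nat))  # read-then-write on ip.dst: allowed
print(joins_pq([nat], fw))  # write-then-read on ip.dst: blocked
```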
the step 3 comprises the following steps:
step 3-1, judging whether the queue head and queue tail of the current parallelizable service function queue PQ point to the same Service Function (SF), i.e., whether PQ contains only one service function; if so, this service function cannot be parallelized with its adjacent service functions, so it is stored in the new service function chain queue NC, and step 3-2 is entered; otherwise, entering step 3-3;
step 3-2, emptying the current queue PQ with the service function capable of being paralleled, and entering step 4, wherein the head queue and the tail queue of the queue are empty;
step 3-3, executing the parallel optimization algorithm: first create a get-message-from-network-card network element that acquires messages from the network card; connect its input to the network card and initialize its output to null;
step 3-4, create a copy-and-distribute network element that copies each message and distributes a copy to each parallel service function; connect its input to the output of the get-message-from-network-card network element created in step 3-3, and initialize its output to null;
step 3-5, create a message merging network element that merges the message copies that have passed through all the parallel service functions into one final output message; initialize both its input and its output to null;
step 3-6, acquire the service function at the head of the current parallelizable service function queue PQ, and set its operation Priority Identification (PID) to the queue length; the PID decides the merging priority when two or more message copies differ in the same message field;
step 3-7, delete the current service function's get-message-from-network-card network element, and connect the network element that was originally its output to an output of the copy-and-distribute network element;
step 3-8, judge whether the input of this service function's send-message-to-network-card network element is connected to a queue network element; if so, enter step 3-9, otherwise enter step 3-10;
step 3-9, replace the send-message-to-network-card network element of step 3-8 with the queue network element that was connected to its input, using that queue as an output queue network element; in addition to keeping the queue's original inputs, add the inputs of the former send-to-network-card network element, and enter step 3-11;
step 3-10, replace this service function's send-message-to-network-card network element with an output queue network element whose input is the input of the former send-to-network-card network element;
step 3-11, replace the discard-message network element in the service function with a mark-discard network element, which marks the message as discarded instead of simply dropping it; its input is that of the original discard-message network element, and its output network element is the output queue network element;
step 3-12, setting the network element at the output end of each output queue network element created in the step 3-9 and the step 3-10 as the message merging network element created in the step 3-5, and connecting;
step 3-13, judge whether the current service function is at the tail of the current parallelizable service function queue PQ; if so, enter step 3-14, otherwise enter step 3-7;
step 3-14, create a new send-message-to-network-card network element whose output is connected to the network card for outputting the final message and whose input is connected to the message merging network element; at this point the parallel optimization algorithm has finished executing, and all service functions in PQ have been parallelized into one new service function;
step 3-15, adding the newly generated service function in the step 3-14 into a new service function chain queue NC, and executing the step 4;
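Steps 3-5, 3-6, and 3-11 together imply a merge step in which a copy marked discarded vetoes the message and conflicting writes are resolved by the operation priority PID. The sketch below assumes each copy is a `(pid, written_fields, discarded)` triple and that the higher-PID copy wins a field conflict; the patent assigns PID values by queue position but gives no code, so these representation choices are illustrative.

```python
def merge_copies(copies):
    """Merge the per-service-function copies of one message (step 3-5).
    copies: list of (pid, written_fields_dict, discarded_flag).
    A discard mark on any copy (step 3-11) discards the whole message;
    otherwise writes are applied in ascending PID order, so the copy
    with the highest PID wins when two copies wrote the same field."""
    if any(discarded for _, _, discarded in copies):
        return None  # the merged message is dropped
    merged = {}
    for _, fields, _ in sorted(copies, key=lambda c: c[0]):
        merged.update(fields)  # higher-PID writes overwrite lower-PID ones
    return merged
```

For example, `merge_copies([(1, {"ip.src": "10.0.0.1"}, False), (2, {"ip.src": "10.0.0.2"}, False)])` keeps the higher-priority value `"10.0.0.2"`.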
step 4 comprises the following steps:
step 4-1, when the value of the service function chain End identifier End is 1, entering step 4-4; otherwise, entering step 4-2;
step 4-2, judging whether the currently read service function is positioned at the tail of the original service function chain, if so, indicating that the original service function chain is judged to be finished, and entering step 4-3; otherwise, entering step 2-1;
step 4-3, set the End identifier to 1, indicating that traversal of the original service function chain has finished, and return to step 3;
and 4-4, arranging and combining according to the original service function sequence, and outputting a new service function chain queue NC, namely the service function chain after parallel optimization.
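The loop across steps 2 through 4 amounts to a greedy grouping pass over the original chain: extend PQ while the incoming service function is parallelizable with everything in it, otherwise flush PQ into NC (merged into one new service function when it holds more than one member). A compact, hypothetical sketch of that control flow:

```python
def build_new_chain(chain, can_parallelize):
    """chain: service functions in original order; can_parallelize(a, b)
    is the pairwise test of step 2-4 (a precedes b). Returns the new
    chain NC, emitting each parallelized group as a tuple."""
    nc, pq = [], []
    for sf in chain:
        if all(can_parallelize(prev, sf) for prev in pq):
            pq.append(sf)                 # step 2-5: join PQ
        else:                             # step 3: flush PQ into NC
            nc.append(pq[0] if len(pq) == 1 else tuple(pq))
            pq = [sf]                     # restart PQ with current SF
    if pq:                                # End = 1: flush the final group
        nc.append(pq[0] if len(pq) == 1 else tuple(pq))
    return nc

# toy relation: only "a" and "b" may run in parallel
rel = lambda x, y: {x, y} == {"a", "b"}
print(build_new_chain(["a", "b", "c"], rel))  # → [('a', 'b'), 'c']
```

The new chain preserves the original order while shortening its length, as the method intends; the grouping criterion itself is supplied by the parallelizability judgment of step 2.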
However, a conventional chain-connected SFC grows steadily longer as the number of services/applications increases, so the delay overhead incurred by messages passing through such an SFC also rises sharply; the SFC then cannot meet strict heterogeneous Quality of Service (QoS) requirements and Service-Level Agreements (SLAs), leaving SFC development in a predicament. Current research addresses this along two main directions:
(1) Eliminating redundant operations between network functions;
(2) the parallelism among network functions is realized to the maximum extent.
In direction (1), researchers found that there are often many redundant operations between NFs; for example, every service function must acquire messages from the network card and send messages back to it. Eliminating such repeated processing between NFs can therefore improve serial NF processing efficiency. However, although reducing the repeated operations lowers the delay overhead of messages traversing the SFC to some extent, the SFC still processes serially, failing to fully exploit the parallelism between message operations, and when the SFC is long a performance bottleneck remains.
In direction (2), researchers found that 53.8% of the NFs currently used in data centers can be parallelized, so they proposed NF-level parallelism. They focus on parallel optimization among multiple VNFs in a single server, implementing one NF that generates message copies and one NF that merges the multiple copies. Although this implementation achieves lower latency than direction (1), it requires copying entire messages for parallel processing, and the redundant per-message operations across multiple VNFs also introduce some delay overhead.
In summary, although an SFC composed of VNFs implemented with NFV technology has advantages such as flexible deployment and strong extensibility, its large delay overhead can prevent it from satisfying QoS requirements and SLAs. Current research mainly reduces redundant operations in the serial chain or optimizes directly in a parallel manner, and each line of work has its own shortcomings; nevertheless, considering the development trend of the computing field and the large delay reduction brought by parallelization, the parallel approach will inevitably become mainstream in the future.
Beneficial effects:
by the method for realizing the network element level parallelization service function in the network function virtualization environment, the length of an original service function chain is effectively shortened, the parallelism of each service function is improved, and the delay overhead is obviously reduced.
Compared with the prior art, the invention has the following advantages:
1. according to the method, the service functions are decomposed into the network element combinations through the service function decomposition analysis algorithm, and the operation and operation domain information of the service functions are analyzed in a finer granularity, so that the parallelism among the service functions is determined, and reference is provided for the parallelism analysis of the service functions;
2. network element level service function parallelization replaces service function level parallelization, avoiding redundant operations between service functions and the network card's message handling when crossing virtual machines, and offering a new approach for future research on service function parallelization in NFV;
3. the service function decomposition analysis algorithm, the parallelizable judgment algorithm and the parallel optimization algorithm are simple and effective, high in accuracy and low in complexity, and can be suitable for a long service function chain environment;
4. due to the modular design, the coupling degree of each component of the system is low, and each module interacts with each other through global shared variables, so that new requirements and expansion can be met.
The invention provides a service function chain realizing network element level parallelization of service functions in a network function virtualization environment. It decomposes each service function into a combination of network elements through the service function decomposition analysis algorithm and determines each network function's operations on messages and operation domain information by analyzing the network element characteristics within it; it then runs the parallelizability judgment algorithm to determine the service function combinations in the chain that can be optimized in parallel, executes the parallel optimization algorithm to generate new service functions, combines them with the non-parallelizable service functions, and outputs the new, parallel-optimized service function chain, thereby shortening the original chain and reducing delay overhead.
The method of the invention decomposes each service function, through the service function decomposition analysis algorithm, into a directed acyclic graph formed by combined network elements; it analyzes and synthesizes the network elements' operation and operation domain information to determine the service function's operations on messages and operation domains, runs the parallelizability judgment algorithm to determine the parallelizable service function combinations, parallelizes them through the parallel optimization algorithm, combines them with the non-parallelizable service functions, and generates a new, parallel-optimized service function chain, effectively shortening the original chain and markedly reducing the delay of messages passing through it.
Drawings
The foregoing and other advantages of the invention will become more apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
Figure 1 is a diagram of a service function network element level parallelization system architecture.
Figure 2 is a flow chart of service function network element level parallelization.
FIG. 3 is a flow chart of a parallel optimization algorithm.
Figure 4 shows an example of a service function network element.
Fig. 5 is an example of network element level parallelism for the service functions of fig. 4.
Fig. 6 is a schematic diagram of a main implementation process of a parallelization system at a service function network element level.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
The invention discloses a method for realizing a network element level parallelization service function in a network function virtualization environment. A controller is first implemented on a server; it acquires the service function chain information input by an administrator, parallelizes it, and outputs a new, parallel-optimized service function chain. The controller mainly comprises four modules: a service function decomposition analysis module, a service function parallelizability analysis module, a parallelizable service function parallel optimization module, and a new service function chain generation module. The specific implementation is as follows:
in the service function decomposition analysis module, the service function chain input by the administrator is first acquired and traversed, and the service function decomposition analysis algorithm is run to decompose each service function into basic message processing units called network elements. The network element characteristics within each service function are then analyzed and synthesized by the following rules: when network elements perform read operations, take the union of the read operation domains of all network elements of the service function; when network elements perform write operations, take the union of the write operation domains of all network elements. Each service function's operations on messages and operation domain information are determined by these rules and stored into HashMap&lt;key, values&gt;, where each service function is a key and its operations and operation domains are the values.
The execution steps are as follows:
step 1, acquiring a service function chain input by an administrator, and entering step 2;
step 2, reading a first service function in the service function chain, creating HashMap < key, values > named as SFHM for storing operation and operation domain information (values including a write operation domain and a read operation domain) of each service function (key), and entering step 3;
step 3, judging whether the next service function is in the service function chain, if so, entering step 4; otherwise, entering step 5;
step 4, running the service function decomposition analysis algorithm: firstly decomposing the current service function into a directed acyclic graph composed of basic message processing units called network elements; then analyzing and synthesizing, according to the above rules, the message operation and operation domain of each network element obtained from the decomposition, determining the message operation and operation domain information of the service function, storing it into the SFHM, and returning to step 3.
Step 5, creating a parallelizable service function queue PQ and a new service function chain queue NC; reading the first service function in the service function chain and its operation domain information, and adding it into the PQ; setting the End flag of the service function chain to 0, and calling the service function parallelizable analysis module.
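The decomposition-analysis pass described above (build the SFHM by unioning each service function's per-element read and write domains) can be sketched as follows. The data structures — a service function modelled as a list of (operation, field-set) network elements, and field names such as `ip_dst` — are illustrative assumptions, not the patented implementation itself.

```python
# Sketch of SFHM construction: a service function is a list of
# (operation, field-set) network elements, and its read/write operation
# domains are the unions over its elements, per the stated rules.

def analyze_service_function(elements):
    read_domain, write_domain = set(), set()
    for op, fields in elements:
        if op == "read":
            read_domain |= fields      # union the read operation domains
        elif op == "write":
            write_domain |= fields     # union the write operation domains
    return {"read": read_domain, "write": write_domain}

def build_sfhm(chain):
    """chain: list of (service-function name, element list) in chain order."""
    return {name: analyze_service_function(elems) for name, elems in chain}

# hypothetical service functions (acquire/discard/output elements carry no fields)
firewall = [("acquire", set()), ("read", {"ip_dst", "tcp_dport"}),
            ("discard", set()), ("output", set())]
nat = [("acquire", set()), ("read", {"ip_src"}),
       ("write", {"ip_src", "tcp_sport"}), ("output", set())]
sfhm = build_sfhm([("firewall", firewall), ("nat", nat)])
```

The resulting dictionary plays the role of the SFHM: each service function name maps to its synthesized read and write operation domains.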
In the service function parallelizable analysis module, a parallelizable decision algorithm is run according to the operation and operation domain information of each service function obtained in the service function decomposition analysis module, determining the combinations of service functions in the service function chain that can be optimized in parallel. The main basis of the parallelizable decision algorithm is as follows: consecutive service functions in the service function chain can be parallelized if and only if, leaving message discarding aside, their message operation domains have no intersection, or the operations that the service functions perform, in chain order, on any shared message domain (i.e., where the domains intersect) are read followed by read, read followed by write, or write followed by write (never a write followed by a read).
The execution steps are as follows:
step 1, judging whether PQ is empty; if it is empty, entering step 2; otherwise, entering step 3;
step 2, adding the currently read service function into PQ, and entering step 3;
step 3, reading the next service function in the service function chain and the operation and operation domain information thereof, and entering step 4;
step 4, running the parallelizable decision algorithm to judge whether the currently read service function can be parallelized with all service functions in the PQ; if so, entering step 5; otherwise, calling the parallelizable service function parallelization optimization module;
step 5, adding the current service function into the PQ, and calling the new service function chain generation module;
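A minimal sketch of the parallelizable decision used above, under the reconstruction that the only forbidden overlap is an earlier write followed by a later read of the same field; read-read, read-then-write and write-then-write overlaps remain parallelizable (the last resolved at merge time by the PID priority). The SFHM entries `fw`, `nat`, `mon` are hypothetical examples.

```python
# Sketch of the parallelizable decision: a candidate joins PQ only if it
# has no write-then-read hazard against any service function already there.

def can_parallelize(earlier, later):
    """earlier/later: operation-domain entries as stored in the SFHM."""
    return not (earlier["write"] & later["read"])

def fits_queue(pq, candidate):
    """candidate may join PQ only if it parallelizes with every member."""
    return all(can_parallelize(sf, candidate) for sf in pq)

# hypothetical SFHM entries
fw  = {"read": {"ip_dst", "tcp_dport"}, "write": set()}
nat = {"read": {"ip_src"}, "write": {"ip_src", "tcp_sport"}}
mon = {"read": {"ip_src"}, "write": set()}
```

For example, a firewall that only reads never blocks parallelization, whereas a NAT that rewrites `ip_src` cannot be run in parallel with a later monitor that reads it.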
In the parallelizable service function parallelization optimization module, the parallelizable service function combination obtained from the service function parallelizable analysis module is taken as input; the parallel optimization algorithm is executed, a new parallelized service function is output, and it is added into the new service function chain queue. A service function that cannot be parallelized is added into the new service function chain queue as-is.
The execution steps are as follows:
step 1, judging whether the current PQ contains only one service function; if so, handling it as a non-parallelizable service function and entering step 2; otherwise, running the parallel optimization algorithm, starting at step 3;
step 2, since the service function cannot be parallelized, storing it into the NC, emptying the PQ, and calling the new service function chain generation module;
step 3, starting the parallel optimization algorithm by creating a new acquire-message-from-network-card network element, which realizes acquiring messages from the network card; connecting its input end with the network card from which messages are acquired, temporarily setting its output end to be empty, and entering step 4;
step 4, creating a copy distribution network element, which realizes copying the message and distributing one message copy to each parallel service function; connecting its input end with the output end of the acquire-message-from-network-card network element created in step 3, temporarily setting its output end to be empty, and entering step 5;
step 5, creating a message merging network element, which realizes merging the message copies that have passed through the individual service functions into the final output message; temporarily setting both its input end and its output end to be empty, and entering step 6;
step 6, acquiring the head service function of the PQ, and setting the operation priority PID of the current service function to the queue length; the PID is used to arbitrate, during merging, among multiple message copies whose data differ in the same message domain; entering step 7;
step 7, deleting the current service function's acquire-message-from-network-card network element, connecting its original output network element to an output end of the copy distribution network element, and entering step 8;
step 8, judging whether the input end of the service function's send-message-to-network-card network element is connected with a queue network element; if so, entering step 9; otherwise, entering step 10;
step 9, using that queue as the output queue network element in place of the send-message-to-network-card network element; on the basis of keeping the queue's original inputs, adding the inputs that previously fed the send-message-to-network-card network element, and entering step 11;
step 10, replacing the service function's send-message-to-network-card network element with an output queue network element, the inputs of the send-message-to-network-card network element becoming the inputs of the output queue network element, and entering step 11;
step 11, replacing the message-discarding network element in the service function with a set-discard network element, which realizes marking the message as discarded instead of simply dropping it; its input is the original discard input and its output network element is the output queue network element; entering step 12;
step 12, setting the output network element of the output queue network element to the message merging network element and connecting them; entering step 13;
step 13, judging whether the current service function is at the tail of the PQ; if so, entering step 14; otherwise, returning to step 7;
step 14, creating a new send-message-to-network-card network element, whose output end is connected with the network card for outputting the final message and whose input end is connected with the output end of the message merging network element; at this point the parallel optimization algorithm has finished executing, and all service functions in the PQ have been parallelized into one new service function; entering step 15;
step 15, adding the new service function into the NC, and calling the new service function chain generation module;
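The graph rewrite performed by the steps above can be summarized in the following sketch, which wraps the service functions of PQ between a copy distribution element and a message merging element. The dict-of-successors graph encoding, and the assumption that each service function after the PQ head receives the next lower PID (the text only fixes the head's PID at the queue length), are illustrative.

```python
# Sketch of the parallel-optimization rewrite: build the wrapper
# FromNIC -> CopyDistribute -> per-branch SF -> output queue -> Merge -> ToNIC.

def parallelize(pq):
    graph = {"FromNIC": ["CopyDistribute"], "CopyDistribute": [],
             "Merge": ["ToNIC"], "ToNIC": []}
    n = len(pq)
    for i, sf in enumerate(pq):
        pid = n - i                             # head of PQ gets PID == queue length
        branch, queue = f"{sf}[PID={pid}]", f"Queue_{sf}"
        graph["CopyDistribute"].append(branch)  # one message copy per branch
        graph[branch] = [queue]                 # SF output (and tagged discards) feed its queue
        graph[queue] = ["Merge"]                # all output queues feed the merging element
    return graph

g = parallelize(["FW", "NAT"])
```

Running it on a two-function queue shows each branch carrying its priority tag and all branches converging on the merging element before the single final send-to-NIC element.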
In the new service function chain generation module, the new service functions generated after parallelization in the parallelizable service function parallelization optimization module and the small number of service functions that could not be parallelized are arranged and combined according to the original service function order, and the parallelism-optimized new service function chain is finally output.
The execution steps are as follows:
step 1, when the value of the service function chain End flag is judged to be 1, entering step 4; otherwise, entering step 2.
Step 2, judging whether the currently read service function is at the tail of the original service function chain; if so, indicating that the traversal of the original service function chain is finished, and entering step 3; otherwise, calling the service function parallelizable analysis module;
step 3, setting an End identifier End to be 1, indicating that the original service function chain is judged to be finished, and calling a parallelizable service function parallelization optimization module;
step 4, arranging and combining according to the original service function order, and outputting the new service function chain queue, namely the parallelism-optimized service function chain.
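Putting the four modules together, the controller's overall loop can be sketched as follows: grow the parallelizable queue PQ greedily along the chain and flush it into the new chain NC whenever the next service function conflicts. The conflict test assumes the write-then-read hazard is the only blocker, and merged groups are represented here simply as tuples; both are illustrative simplifications.

```python
# Sketch of the controller pipeline (modules 2-4), assuming the SFHM has
# already been built by the decomposition analysis module.

def can_parallelize(earlier, later):
    # assumed rule: only an earlier write followed by a later read blocks
    return not (earlier["write"] & later["read"])

def optimize_chain(chain, sfhm):
    nc, pq = [], []                      # NC: new chain, PQ: parallelizable queue
    for sf in chain:
        if pq and not all(can_parallelize(sfhm[p], sfhm[sf]) for p in pq):
            nc.append(tuple(pq) if len(pq) > 1 else pq[0])  # flush PQ into NC
            pq = []
        pq.append(sf)
    if pq:                               # flush the final group
        nc.append(tuple(pq) if len(pq) > 1 else pq[0])
    return nc

sfhm = {"fw":  {"read": {"ip_dst"}, "write": set()},
        "nat": {"read": {"ip_src"}, "write": {"ip_src"}},
        "mon": {"read": {"ip_src"}, "write": set()}}
new_chain = optimize_chain(["fw", "nat", "mon"], sfhm)  # [("fw", "nat"), "mon"]
```

Here the firewall and NAT merge into one parallel service function, while the monitor, which reads the field the NAT writes, stays sequential after them — preserving the original chain semantics.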
Embodiments
Since the invention realizes network element-level parallelization of service functions, the service functions must be decomposable into network elements, so a decomposable, modular implementation is the most convenient. For this reason, the service functions are implemented with Click, the open-source software used by most VNFs today. Specifically, the Click modular router is an extensible software architecture for building flexible and configurable network functions, which runs on x86-architecture generic servers. A service function implemented on Click is assembled from packet processing modules called Click elements. A single Click element implements a simple message processing function such as message classification, queuing, scheduling, or network device interaction, while multiple Click elements can be combined to implement advanced routing and packet processing functions and thus network functions. Users can also add custom Click elements using the API provided by Click. Essentially, a service function is a directed acyclic graph of Click elements, in which the vertices are Click elements and messages travel along the graph edges. Accordingly, the invention maps only the Click elements performing the key message operations of reading, writing (modifying and adding/deleting message fields), discarding, etc. to the network elements defined in the invention, and the service function is correspondingly the directed acyclic graph formed by these network elements.
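As a concrete illustration of such an element graph, a firewall like the one used as the running example might be written as the following Click configuration; the interface names (eth0/eth1) and the single classification pattern are placeholders, not the patent's actual configuration.

```click
// acquire -> classify (read) -> forward matching traffic / discard the rest
from :: FromDevice(eth0);
cl   :: Classifier(12/0800, -);   // offset/value pattern: IPv4 frames, else fallback
q    :: Queue(1024);

from -> cl;
cl[0] -> q -> ToDevice(eth1);     // packets satisfying the rule are output
cl[1] -> Discard;                 // packets violating the rule are dropped
```

Each named element is a vertex of the directed acyclic graph, and the `->` connections are the edges along which messages travel.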
In the implementation of the invention, only the influence of these network elements on the service function's operations and operation domains is considered; other Click elements are ignored. The network elements include the Click element Classifier, which corresponds to a read network element and realizes classification according to user rules; IPRewriter, which corresponds to a write network element and realizes modification of the IP source and destination addresses; and Discard, which corresponds to a discard network element and realizes simple message discarding.
Therefore, in order to implement the method for realizing the network element level parallelization service function in the network function virtualization environment, the service function network element level parallelization system framework is as shown in fig. 1. The service function network element level parallelization optimization controller runs on the control layer server node, and the service function instances run in KVM virtual machines on the data layer server nodes. In each virtual machine, an extended Click software toolkit (including the copy distribution network element, the message merging network element, the set-discard network element, etc.) is used to realize the service function, and the Netmap framework is used to accelerate the virtual machine's message processing; meanwhile, to guarantee the message processing performance of the physical nodes, each data layer server node runs the Intel DPDK message processing acceleration framework. The physical nodes are connected through Ethernet, and the virtual machine nodes inside each server are connected by a DPDK-based Open vSwitch virtual switch. On the control layer server node, the running service function network element level parallelization optimization controller provides the administrator with service function chain input, modification, deletion, and other functions, and implements the four modules below: the service function decomposition analysis module, the service function parallelizable analysis module, the parallelizable service function parallelization optimization module, and the new service function chain generation module. With reference to fig. 2, the specific implementation steps of the service function network element level parallelism optimization are as follows:
In the service function decomposition analysis module, the controller first obtains the service function chain entered by the administrator, where each service function is given as a Click configuration file. The service function chain is then traversed; since every service function is composed of Click elements from the Click modular router software toolkit, the service function decomposition analysis algorithm is executed to screen the Click elements, map them to network elements, and decompose each service function into a directed acyclic graph formed by combining network elements, as shown in fig. 4. The characteristics of the network elements in each service function are then analyzed according to the following rules: when a network element performs a read operation, the read operation domains of all network elements of the service function are unioned; when a network element performs a write operation, the write operation domains of all network elements of the service function are unioned. According to these rules, the operation of each service function on the message and its operation domain information are determined.
The service function decomposition analysis algorithm comprises the following steps: first, the current service function is decomposed into network elements; then the operation of each network element on the message and its operation domain information are analyzed and synthesized according to the rules, so as to determine the service function's operation on the message and its operation domain. Specifically, referring to fig. 3, the decomposition classifies each part of the service function by its operation on the message — acquiring, outputting, reading, writing (modifying or adding/deleting message fields), discarding, etc. — and takes these basic message processing units as network elements, which combine into a directed acyclic graph implementing the specific service function. For example, combining the message operation logic of the firewall in fig. 3, the firewall can be decomposed into a directed acyclic graph combining four network elements: reading a message from the network card (acquiring), classifying according to the user-set rules and the message header information (reading), discarding messages that do not satisfy the rules (discarding), and outputting messages that satisfy the rules to the network card (outputting);
Analysis and synthesis: the analysis and synthesis process must jointly consider the operation and operation domain information of every network element composing the service function in order to determine the possible operations and operation domains of the service function. A network element's operation and operation domain depend on its own characteristics and on the user rules; for example, the firewall's classification (read) network element must determine the message domains to be read from the user-set rules. Therefore, during analysis and synthesis, all network elements except the message acquisition, message output, and message discard network elements are subjected to the following rules: when a network element performs a read operation, the read operation domains of all network elements of the service function are unioned; when a network element performs a write operation, the write operation domains of all network elements of the service function are unioned. The service function decomposition analysis algorithm implemented according to these rules can use the operation and operation domain information of each network element to determine the possible operations of the service function on the message and its operation domain information, and store them into the SFHM;
In the service function parallelizable analysis module, the service function chain is traversed; combining the operation and operation domain information SFHM of each service function obtained in the service function decomposition analysis module, the parallelizable decision algorithm is executed on the current service function and the parallelizable service function queue PQ to judge whether they can be parallelized. The condition is as follows: consecutive service functions in the service function chain can be parallelized if and only if, leaving message discarding aside, their message operation domains have no intersection, or the operations performed, in the order of the service function chain, on any shared message domain (i.e., where the domains intersect) are read followed by read, read followed by write, or write followed by write. Each time a parallelizable service function set is obtained, the parallelizable service function parallelization optimization module is called to parallelize it into a new service function, until the whole service function chain has been traversed.
The parallelizable decision algorithm is as follows: the currently read service function can be parallelized with all service functions in the queue PQ if and only if, leaving message discarding aside, the message operation domains of all the service functions have no intersection, or the operations that the service functions perform on any shared message domain (i.e., where the operation domains intersect), taken in PQ order with the current service function last, are read followed by read, read followed by write, or write followed by write.
In the parallelizable service function parallelization optimization module, the parallelizable service function queue provided by the service function parallelizable analysis module is first obtained, and it is judged whether the queue head and the queue tail point to the same service function, i.e., whether the queue contains only one service function. If the number is 1, the service function cannot be parallelized, and it is added into the new service function chain queue; if it is greater than 1, the service functions can be parallelized, so the parallel optimization algorithm is executed (its flowchart is shown in fig. 3), finally generating a new service function that is added into the new service function chain queue. Afterwards, it must be judged whether the service function chain has been fully traversed: if so, the new service function chain generation module is called; if not, the service function parallelizable analysis module is called to read the next service function. In addition, to realize parallelization at the network element level, the parallelizable service function parallelization optimization module must also support new network elements, including the copy distribution network element, the message merging network element, the set-discard network element, etc., whose specific functions are as follows:
Copy distribution network element:
The functions are as follows: firstly, acquiring the message from the network card and stamping it with a message ID to distinguish messages; then copying the message together with its message ID, stamping each copy with a default operation OID (read by default), and additionally stamping the copy sent to each service function with that service function's operation priority PID, so as to guarantee correctness when the message copies are merged; finally, sending the corresponding message copies out of the output ports corresponding to the service functions.
A message merging network element:
The functions are as follows: firstly, storing the message copies received on each input port in a cache queue exclusive to that port, selecting the port cache queue with the shortest current length, and acquiring its head message; then, obtaining each copy of that message from every port queue according to the message ID, where the total number of copies should equal the total number of ports of the message merging network element; then, if any copy's operation OID is set to discard, discarding all copies of the message, and if a copy's operation OID is set to read and it is not the last copy, discarding only that message copy; finally, determining the merging order of the messages by descending priority PID (namely, where the data differ, the data with the higher priority are kept), performing the copy merging operation, and generating the final message.
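The copy-merging behaviour just described can be sketched as follows, assuming each copy arrives as a dict carrying its PID, OID and the field values observed after its service function ran; the higher-PID-wins treatment of differing values follows the priority rule stated above, while the exact metadata layout is an assumption.

```python
# Sketch of the message merging element's resolution step: any discard
# mark drops the packet; otherwise writes are applied in ascending PID so
# the highest-priority value lands last where copies disagree.

def merge_copies(original, copies):
    """original: field -> value; copies: dicts with 'pid', 'oid', 'fields'."""
    if any(c["oid"] == "discard" for c in copies):
        return None                      # a set-discard mark drops the packet
    merged = dict(original)
    for c in sorted(copies, key=lambda c: c["pid"]):
        if c["oid"] == "write":
            for field, value in c["fields"].items():
                if value != original.get(field):
                    merged[field] = value    # keep only genuine modifications
    return merged

orig = {"ip_src": "10.0.0.1", "ttl": 64}
result = merge_copies(orig, [
    {"pid": 2, "oid": "read",  "fields": dict(orig)},                      # read copy: ignored
    {"pid": 1, "oid": "write", "fields": {"ip_src": "192.168.0.1", "ttl": 64}},
])
```

When two copies write the same field to different values, the copy with the higher PID supplies the surviving value.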
Set-discard network element:
The functions are as follows: it replaces the Discard element of Click inside a parallelized service function; by setting the operation OID to discard, it marks the message copy instead of dropping it outright, and it adds an output port whose output end is the service function's output queue.
In addition, the Click elements that realize writing (including modifying and adding/deleting message fields) are extended with the function of setting the operation OID to write.
In the new service function chain generation module, the new service functions generated after parallelization in the parallelizable service function parallelization optimization module and the small number of service functions that cannot be parallelized must be arranged and combined according to the service function order in the original service function chain; that is, parallelization is realized to the maximum extent without changing the function of the service function chain.
The main implementation process of the service function network element level parallelization system is shown in fig. 6; given the service function chain shown in fig. 4, the parallelization process transforms it into the result shown in fig. 5.
The present invention provides a method for implementing a network element level parallelization service function in a network function virtualization environment, and there are many methods and approaches for implementing this technical solution. The above description is only a preferred embodiment of the invention; it should be noted that those skilled in the art may make improvements and modifications without departing from the principle of the invention, and such improvements and modifications should also fall within the protection scope of the invention. All components not specified in this embodiment can be realized by the prior art.

Claims (1)

1. The method for realizing the network element level parallelization service function in the network function virtualization environment is characterized by comprising the following steps of:
step 1, acquiring and traversing a service function chain, running a service function decomposition analysis algorithm, decomposing each service function into basic message processing units called network elements, analyzing and integrating network element characteristics in each service function, determining and storing the operation of each service function on a message and operation domain information;
step 2, according to the operation and operation domain information of each service function obtained in the step 1, running a parallelizable judgment algorithm and determining a service function combination capable of performing parallel optimization in a service function chain;
step 3, executing a parallel optimization algorithm on the parallelizable service function queue obtained in step 2, generating a new service function, and adding it into the new service function chain queue; adding each non-parallelizable service function into the new service function chain queue;
step 4, combining the new service functions generated in the step 3 and the non-parallelizable service functions, arranging and combining the new service functions and the non-parallelizable service functions according to the sequence of the original service functions, and outputting a new service function chain which is subjected to parallel optimization;
the step 1 comprises the following steps:
step 1-1, acquiring a service function chain input by an administrator;
step 1-2, reading a first service function in a service function chain, creating a HashMap < key, values > named as an SFHM hash mapping table, and storing operation of each service function key and operation domain information values, wherein the operation domain information values comprise a write operation domain and a read operation domain;
step 1-3, judging whether the next service function is in the service function chain, if so, entering step 1-4; otherwise, entering the step 1-5;
step 1-4, operating a service function decomposition analysis algorithm, comprising:
the current service function is decomposed into network elements: according to the basic operation of the service function on the message, the method comprises the steps of obtaining the message from a network card, reading the message, writing the message, sending the message to the network card, discarding the message, and decomposing the service function into a directed acyclic graph formed by combining the basic message processing units, namely network elements;
analysis and synthesis: after the current service function is decomposed into a directed acyclic graph formed by combining network elements, analyzing and integrating the operation of each network element on the message and the operation domain information according to a certain rule, thereby determining the operation of the service function on the message and the operation domain information and storing the operation domain information in the SFHM;
step 1-5, creating a service function queue PQ and a new service function chain queue NC which can be paralleled, wherein the service function queue PQ is used for storing service function combinations which can be paralleled, and the new service function chain queue NC is used for storing new service function chains after parallel optimization; reading a first service function in a service function chain and operation domain information thereof, adding the first service function into a service function queue which can be paralleled, and marking the first service function as a queue head and a queue tail; setting End flag of service function chain to be 0, and entering step 2;
the step 2 comprises the following steps:
step 2-1, judging whether the parallelizable service function queue PQ is empty; if so, entering step 2-2; otherwise, entering step 2-3;
step 2-2, adding the currently read service function into a queue PQ of the parallel service function, and pointing the head and the tail of the queue to the current service function;
step 2-3, reading the next service function in the service function chain and the operation and operation domain information thereof;
step 2-4, running a parallelizable decision algorithm: the service functions in the PQ and the currently read service function can be parallelized if and only if, leaving message discarding aside, the message operation domains of all the service functions have no intersection, or the operations performed on the same message domain, in the order of the service functions in the PQ with the currently read service function last, are read followed by read, read followed by write, or write followed by write; judging accordingly whether the currently read service function can be parallelized with all service functions in the parallelizable service function queue PQ; if so, entering step 2-5; otherwise, entering step 3;
step 2-5, adding the current service function into a queue PQ of the service function which can be paralleled, marking the queue as the tail, and entering step 4;
in step 3, the parallel optimization algorithm includes the following steps:
step 3-1, judging whether the queue head and the queue tail of the current parallelizable service function queue PQ point to the same service function, namely whether only one service function is contained; if so, indicating that the service function cannot be parallelized with its adjacent service functions, storing it into the new service function chain queue NC, and entering step 3-2; otherwise, entering step 3-3;
step 3-2, emptying the current parallelizable service function queue PQ so that its queue head and queue tail are empty, and entering step 4;
step 3-3, creating a network element for acquiring messages from the network card, realizing the function of acquiring the messages from the network card, connecting the input end of the network element with the network card for acquiring the messages, and initializing the output end to be null;
step 3-4, creating a copy distribution network element, realizing the functions of copying the message and distributing one message copy to each parallel service function; its input end is connected with the output end of the acquire-message-from-network-card network element created in step 3-3, and its output end is initialized to be null;
step 3-5, a message merging network element is established, message copies passing through all service functions are merged and finally an output message is formed, the input end of the message merging network element is initialized to be null, and the output end of the message merging network element is initialized to be null;
step 3-6, acquiring the service function at the head of the current parallelizable service function queue PQ, and setting its operation priority PID to the current length of the queue, wherein PID determines the merging priority when the same message field differs among two or more message copies;
step 3-7, deleting the current service function's network element that acquires messages from the network card, and connecting its original output network element to an output end of the copy-distribution network element;
step 3-8, judging whether the input end of the network element through which the service function sends messages to the network card (hereinafter the to-network-card network element) is connected to a queue network element; if so, entering step 3-9; otherwise, entering step 3-10;
step 3-9, taking the queue network element that was connected in step 3-8 to the input end of the to-network-card network element as the output queue network element, adding the inputs of the to-network-card network element to this queue while keeping the queue's original inputs, and entering step 3-11;
step 3-10, replacing the service function's to-network-card network element with a newly created output queue network element, whose input is the original input of the to-network-card network element;
step 3-11, replacing the service function's message-discarding network element with a discard network element that marks a message as discarded instead of simply dropping it, whose input is the input of the original message-discarding network element and whose output is the output queue network element;
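Steps 3-6 to 3-11 rewire one service function for parallel execution: its packet source becomes the copy-distribution element, its NIC-bound output is routed through an output queue, and its drop action is replaced by a mark-as-dropped action feeding the same queue. A simplified sketch; the dictionary-based service function representation and the field names (`pid`, `source`, `sink`, `drop`) are assumptions made here, not the patent's data model:

```python
def rewire_function(sf: dict, pq_len: int) -> list:
    """Rewire one service function (steps 3-6 to 3-11, simplified)."""
    sf["pid"] = pq_len                 # step 3-6: merge priority = queue length
    sf["source"] = "copy_distribute"   # step 3-7: new packet source
    out_queue = []                     # steps 3-8/3-10: output queue element
    sf["sink"] = out_queue             # replaces the to-network-card element
    # Step 3-11: drops become marked packets placed in the same queue,
    # so the merge stage can honor the discard instead of losing the
    # packet silently.
    sf["drop"] = lambda pkt: out_queue.append(("dropped", pkt))
    return out_queue
```

Returning the queue lets step 3-12 connect it to the message merging element.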
step 3-12, connecting the output end of each output queue network element created in step 3-9 or step 3-10 to the message merging network element created in step 3-5;
step 3-13, judging whether the current service function is at the tail of the current parallelizable service function queue PQ; if so, entering step 3-14; otherwise, returning to step 3-7;
step 3-14, creating a new network element that sends messages to the network card, whose output end is connected to the network card to output the final message and whose input end is connected to the message merging network element; at this point the parallel optimization algorithm has finished executing, and the service functions in PQ have been parallelized into one new service function;
step 3-15, adding the new service function generated in step 3-14 to the new service function chain queue NC, and executing step 4;
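The merging behaviour referenced in steps 3-5, 3-11, and 3-12 can be sketched as follows. The `(pid, fields)` pair representation is an assumption, and so is the tie-break direction: the patent states only that PID decides the merge priority, so "larger PID wins" is an illustrative choice here:

```python
def merge_copies(copies):
    """Merge the message copies produced by the parallel service
    functions. Each copy is a (pid, fields) pair; when copies disagree
    on the same message field, the copy with the larger pid wins
    (illustrative tie-break). A copy carrying a drop mark (step 3-11)
    causes the whole merged message to be dropped."""
    if any(fields.get("dropped") for _, fields in copies):
        return None  # honor a marked discard
    merged = {}
    for _, fields in sorted(copies, key=lambda c: c[0]):
        merged.update(fields)  # ascending pid: higher pid applied last
    return merged
```

Because a discard is only marked, not executed, inside each parallel branch, the decision to drop is deferred to this merge point.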
step 4 comprises the following steps:
step 4-1, when the value of the service function chain end identifier End is 1, entering step 4-4; otherwise, entering step 4-2;
step 4-2, judging whether the currently read service function is at the tail of the original service function chain; if so, the original service function chain has been fully traversed, and step 4-3 is entered; otherwise, step 2-1 is entered;
step 4-3, setting the end identifier End to 1 to indicate that the original service function chain has been fully traversed, and returning to step 3;
step 4-4, arranging the new service functions in the original service function order and outputting the new service function chain queue NC, namely the service function chain after parallel optimization.
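The overall control flow of steps 3 and 4, namely scanning the original chain, grouping adjacent parallelizable functions, and emitting NC in the original order, can be summarized as below. The `can_parallelize` predicate stands in for the compatibility test performed in the earlier steps of the patent, and `parallelize` stands in for the fusion of steps 3-3 to 3-15; both names are hypothetical:

```python
def optimize_chain(chain, can_parallelize, parallelize):
    """Walk the original service function chain, grouping adjacent
    functions that may run in parallel. Each group of two or more is
    fused into one new service function (step 3); a singleton is kept
    as-is (step 3-1); NC preserves the original order (step 4-4)."""
    nc, pq = [], []
    for sf in chain:
        if pq and not can_parallelize(pq[-1], sf):
            nc.append(parallelize(pq) if len(pq) > 1 else pq[0])
            pq = []
        pq.append(sf)
    if pq:                       # tail of chain reached (steps 4-2/4-3)
        nc.append(parallelize(pq) if len(pq) > 1 else pq[0])
    return nc                    # step 4-4: parallel-optimized chain
```

For example, if a firewall and an IDS may run in parallel but NAT may not join them, the chain [firewall, IDS, NAT] becomes [firewall∥IDS, NAT].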
CN201711292739.4A 2017-12-08 2017-12-08 Method for realizing network element level parallelization service function in network function virtualization environment Active CN108092803B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711292739.4A CN108092803B (en) 2017-12-08 2017-12-08 Method for realizing network element level parallelization service function in network function virtualization environment


Publications (2)

Publication Number Publication Date
CN108092803A CN108092803A (en) 2018-05-29
CN108092803B true CN108092803B (en) 2020-07-17

Family

ID=62174827

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711292739.4A Active CN108092803B (en) 2017-12-08 2017-12-08 Method for realizing network element level parallelization service function in network function virtualization environment

Country Status (1)

Country Link
CN (1) CN108092803B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11962514B2 (en) 2018-12-14 2024-04-16 AT&T Intellectual Property I, L.P. Parallel data processing for service function chains spanning multiple servers

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108494574B (en) * 2018-01-18 2020-06-19 清华大学 Network function parallel processing infrastructure in NFV
CN110557343A (en) * 2018-05-31 2019-12-10 中国电信股份有限公司 SFC service data forwarding method and SFC network system
WO2020034465A1 (en) * 2018-11-14 2020-02-20 Zte Corporation Method of communication for service framework
US10805164B2 (en) 2018-12-14 2020-10-13 At&T Intellectual Property I, L.P. Controlling parallel data processing for service function chains
CN110022230B (en) * 2019-03-14 2021-03-16 北京邮电大学 Deep reinforcement learning-based service chain parallel deployment method and device
CN109842528B (en) * 2019-03-19 2020-10-27 西安交通大学 Service function chain deployment method based on SDN and NFV
CN110227265B (en) * 2019-06-18 2020-06-26 贵阳动视云科技有限公司 Computer graphic resource sharing method and device
CN112333035B (en) * 2020-12-30 2021-04-02 中国人民解放军国防科技大学 Real-time hybrid service function chain embedding cost optimization method and device
CN113411207B (en) * 2021-05-28 2022-09-20 中国人民解放军战略支援部队信息工程大学 Service function circulation arrangement basic platform and method of intelligent network service function chain
CN114124713B (en) * 2022-01-26 2022-04-08 北京航空航天大学 Service function chain arrangement method for operation level function parallel and self-adaptive resource allocation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104780099A (en) * 2014-01-10 2015-07-15 瞻博网络公司 Dynamic end-to-end network path setup across multiple network layers with network service chaining
CN105406992A (en) * 2015-10-28 2016-03-16 浙江工商大学 Business requirement transformation and deployment method for SDN (Software Defined Network)
CN105721535A (en) * 2014-12-23 2016-06-29 英特尔公司 Parallel processing of service functions in service function chains


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Accelerating skycube computation with partial and parallel processing for service selection; Fang Dong; IEEE; 2017-10-13; 3384-3388 *
An optimal backup topology generation method for guaranteeing NFV reliability; Han Qing; Application Research of Computers; 2016-11-28; 1-16 *



Similar Documents

Publication Publication Date Title
CN108092803B (en) Method for realizing network element level parallelization service function in network function virtualization environment
US10986041B2 (en) Method and apparatus for virtual network functions and packet forwarding
US11669488B2 (en) Chassis controller
Kaur et al. A comprehensive survey of service function chain provisioning approaches in SDN and NFV architecture
EP2748990B1 (en) Network virtualization apparatus with scheduling capabilities
US6393026B1 (en) Data packet processing system and method for a router
Ghaznavi et al. Service function chaining simplified
CA2326851A1 (en) Policy change characterization method and apparatus
JP6939775B2 (en) Network system, its management method and equipment
CN112166579B (en) Multi-server architecture cluster providing virtualized network functionality
Wang et al. A survey of coflow scheduling schemes for data center networks
US9923997B2 (en) Systems and methods for packet classification
CN105282057B (en) Flow table updating method, controller and flow table analysis device
WO2011142227A1 (en) Computer system, method and program
Moro et al. Network function decomposition and offloading on heterogeneous networks with programmable data planes
Kogan et al. A programmable buffer management platform
US11252072B1 (en) Graph-based rebinding of packet processors
Cao et al. A study on application-towards bandwidth guarantee based on SDN
Liu et al. A software defined network design for analyzing streaming data in transit
Chen et al. Easy path programming: Elevate abstraction level for network functions
Boucetta et al. Review on Networks Defined by Software
Geier et al. Improving the deployment of multi-tenant containerized network function acceleration
Ai et al. The Design and Specification of Path Adjustable SFC Using YANG Data Model
Jiang et al. RADU: Bridging the divide between data and infrastructure management to support data-driven collaborations
Liang et al. Efficient loop detection and congestion-free network update for SDN

Legal Events

Date Code Title Description
PB01 Publication
CB02 Change of applicant information

Address after: 210019 No. 58 East Street, Nanxi River, Jianye District, Nanjing, Jiangsu

Applicant after: China Information Consulting and Designing Institute Co., Ltd.

Applicant after: Southeast University

Address before: 210019 No. 58 East Street, Nanxi River, Jianye District, Nanjing, Jiangsu

Applicant before: Jiangsu Posts & Telecommunications Planning and Designing Institute Co., Ltd.

Applicant before: Southeast University

SE01 Entry into force of request for substantive examination
GR01 Patent grant