CN111163004B - Service chain data processing method and device and computer equipment - Google Patents

Service chain data processing method and device and computer equipment

Info

Publication number
CN111163004B
CN111163004B (Application No. CN201911424500.7A)
Authority
CN
China
Prior art keywords
service
service chain
data packet
data
chain
Prior art date
Legal status
Active
Application number
CN201911424500.7A
Other languages
Chinese (zh)
Other versions
CN111163004A (en
Inventor
张磊
张思琴
吴涛
胡松
李红光
吴亚东
Current Assignee
Qianxin Technology Group Co Ltd
Secworld Information Technology Beijing Co Ltd
Original Assignee
Qianxin Technology Group Co Ltd
Secworld Information Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Qianxin Technology Group Co Ltd, Secworld Information Technology Beijing Co Ltd filed Critical Qianxin Technology Group Co Ltd
Priority to CN201911424500.7A priority Critical patent/CN111163004B/en
Publication of CN111163004A publication Critical patent/CN111163004A/en
Application granted granted Critical
Publication of CN111163004B publication Critical patent/CN111163004B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/12 Shortest path evaluation
    • H04L 45/124 Shortest path evaluation using a combination of metrics
    • H04L 45/30 Routing of multiclass traffic
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/12 Avoiding congestion; Recovering from congestion
    • H04L 47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering

Abstract

The disclosure provides a service chain data processing method applied to a host device. The method comprises the following steps: at least one service chain is deployed in the host device; a data stream is received, the data stream including at least one data packet; and for any data packet in the at least one data packet, the data packet is offloaded, based on a multi-layer offloading policy, to the service chain for that data packet among the at least one service chain in the host device, so that the service chain for that data packet processes it. The disclosure also provides a service chain data processing apparatus and a computer device.

Description

Service chain data processing method and device and computer equipment
Technical Field
The disclosure relates to a service chain data processing method, a service chain data processing device and computer equipment.
Background
In a network scenario, a data stream usually needs to pass through a plurality of network service devices before finally reaching its destination. Each network service device may be referred to as a service node, and the packet processing path formed as a data packet in the data stream traverses the service nodes in the processing order required by the service logic is referred to as a Service Function Chain (SFC). Service chain technology can effectively improve network construction efficiency and reduce overall operation cost, and is an important technology for promoting network architecture innovation.
At present, the service chain standardization work of the various standards organizations in the industry is still under study; each organization has a different technical background and emphasis, and there is no unified standard. The existing service chain orchestration techniques are more theoretical than practical, with relatively high complexity and relatively low feasibility and operability.
Disclosure of Invention
One aspect of the present disclosure provides a service chain data processing method, which is applied to a host device. The method comprises the following steps: at least one service chain is deployed in the host device; a data stream is received, the data stream including at least one data packet; and for any data packet in the at least one data packet, the data packet is offloaded, based on a multi-layer offloading policy, to the service chain for that data packet among the at least one service chain in the host device, so that the service chain for that data packet processes it.
Optionally, the deploying at least one service chain in the host device includes: request data is received from a first tenant, the request data including at least one service request indicator. Then, based on the request data, deploying a service chain architecture for the first tenant in the host device, wherein the service chain architecture for the first tenant comprises at least one service chain, and the at least one service chain respectively corresponds to at least one service request index in the request data.
Optionally, any service chain in the at least one service chain includes at least one service node. The at least one service node comprises: at least one serial service node, and/or at least one bypass service node.
Optionally, the offloading, based on the multi-layer offloading policy, of any data packet to the service chain for that data packet in the at least one service chain deployed in the host device includes: based on a first offloading policy, extracting tenant identification information from the data packet and determining, in the host device, the service chain architecture for that tenant identification information; then, based on a second offloading policy, extracting service identification information from the data packet and determining, from the service chain architecture for the tenant identification information, the service chain for that service identification information; and then offloading the data packet to the service chain determined according to the service identification information.
Optionally, the offloading any data packet to the service chain for the service identification information includes: when the service chain aiming at the service identification information comprises at least one serial service node, sequentially forwarding any data packet to each serial service node in the at least one serial service node according to the serial sequence of the at least one serial service node so as to be sequentially processed by each serial service node in the at least one serial service node. When the service chain aiming at the service identification information comprises at least one bypass service node, forwarding any data packet to the at least one bypass service node respectively so that the at least one bypass service node processes any data packet independently.
Optionally, at least one service node in any service chain performs load balancing based on the service request index for the service chain.
Optionally, any service node in the at least one service node is a virtual network function device operating in a virtual machine environment, and different service nodes in the at least one service node operate in different virtual machine environments.
Optionally, the method further includes: before shunting any data packet to a service chain aiming at the data packet in at least one service chain deployed in host equipment based on a multi-layer shunting strategy, decapsulating the data packet.
Another aspect of the present disclosure provides a service chaining data processing apparatus applied to a host device. The device includes: the device comprises a deployment module, a receiving module and a processing module. The deployment module is used for deploying at least one service chain in the host device. The receiving module is used for receiving a data stream, and the data stream comprises at least one data packet. The processing module is configured to, for any data packet in the at least one data packet, offload the any data packet to a service chain for the any data packet in at least one service chain deployed by the host device based on a multi-layer offload policy, so that the service chain for the any data packet processes the any data packet.
Another aspect of the present disclosure provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor when executing the computer program being adapted to implement the method as described above.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the method as described above when executed.
Another aspect of the disclosure provides a computer program product comprising computer executable instructions for implementing the method as described above when executed.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
fig. 1 schematically illustrates an application scenario of a service chain data processing method and apparatus according to an embodiment of the present disclosure;
FIG. 2A schematically illustrates a flow diagram of a service chaining data processing method according to an embodiment of the present disclosure;
fig. 2B schematically shows a flow chart of a service chaining data processing method according to another embodiment of the present disclosure;
FIG. 3A schematically illustrates an example architecture diagram of a host device in accordance with an embodiment of the disclosure;
FIG. 3B schematically shows an example architecture diagram of a host device according to another embodiment of the disclosure;
fig. 4 schematically shows a block diagram of a service chaining data processing apparatus according to an embodiment of the present disclosure; and
fig. 5 schematically shows a block diagram of a computer device according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include, but not be limited to, systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include, but not be limited to, systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.).
Some block diagrams and/or flow diagrams are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, which execute via the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks. The techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). In addition, the techniques of this disclosure may take the form of a computer program product on a computer-readable storage medium having instructions stored thereon for use by or in connection with an instruction execution system.
The embodiment of the disclosure provides a service chain data processing method and device, which can be applied to host (host) equipment in a service chain scene. The service chain data processing method can comprise the following steps: a deployment procedure and a service procedure. At least one service chain is deployed in the hosting device during the deployment process. Then, a service procedure can be performed, which can be divided into a reception procedure and a processing procedure. In the receiving process, a data stream is received, the data stream including at least one data packet. In the processing process, for any data packet in at least one data packet, the any data packet is shunted to a service chain for the any data packet in at least one service chain in the host device based on a multi-layer shunting strategy, so that the service chain for the any data packet processes the any data packet.
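As a high-level illustration of the deployment process and the service process described above, the following is a minimal Python sketch of the two phases; the class and function names (HostDevice, ServiceChain, select_chain, and so on) are illustrative assumptions and not part of the disclosed implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class ServiceChain:
    name: str
    process: Callable[[bytes], bytes]   # the chain's packet-processing pipeline

@dataclass
class HostDevice:
    chains: Dict[str, ServiceChain] = field(default_factory=dict)

    # Deployment process: deploy at least one service chain in the host device.
    def deploy(self, chain: ServiceChain) -> None:
        self.chains[chain.name] = chain

    # Service process: receive a data stream and offload each packet to its chain.
    def serve(self, data_stream: List[bytes],
              select_chain: Callable[[bytes], str]) -> List[bytes]:
        results = []
        for packet in data_stream:                      # receiving process
            chain = self.chains[select_chain(packet)]   # multi-layer offloading policy
            results.append(chain.process(packet))       # processing process
        return results
```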
In a network scenario, a data stream usually needs to pass through multiple network service devices, such as an Intrusion Detection System (IDS), an Intrusion Prevention System (IPS), a Firewall (FW), Deep Packet Inspection (DPI), Information Content Audit, and Anti-Virus, to finally reach its destination. Each network service device may be referred to as a service node, and the packet processing path formed as a data packet in the data stream traverses the service nodes in the processing order required by the service logic is referred to as a service chain. Service chain technology can effectively improve network construction efficiency and reduce overall operation cost, and is an important technology for promoting network architecture innovation.
With the rapid development of new services such as the mobile Internet, cloud services, and the Internet of Things (IoT), users' demand for service traffic keeps growing, and the service mode of the traditional service chain brings many problems, such as poor scalability, high network maintenance cost, and unstable network operation. Meanwhile, the Software Defined Networking (SDN) architecture and Network Function Virtualization (NFV) technology have continued to develop and mature: SDN separates control from forwarding, centralizes control, and opens Application Programming Interfaces (APIs); NFV decouples network service functions from dedicated hardware devices and implements them as Virtual Network Functions (VNFs), which can greatly reduce cost. The combination of the two enhances the flexibility and scalability of the network, enables sharing of network resources, and provides technical support for the development of service chains.
Fig. 1 schematically illustrates an application scenario of a service chain data processing method and apparatus according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a scenario in which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, but does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, the application scenario illustrates a service chain based network service scenario 100. The application scenario may include an identifier (SFC Classifier) 101, a Service Function Instance (SFI) 102, and a Forwarder (SFF) 103.
The identifier 101 is used to identify the data flow and assign it to different service chains. The identifier 101 may run on any device and each service chain may include one or more identifiers. Typically, the identifier 101 is located at the head node of the service chain.
At least one service function instance 102 may serve as at least one service node that forms a service chain. Each service function instance 102 may be a process or a server. For example, based on NFV technology, the service function instance 102 may be a VNF device, with at least one service function instance 102 running in the hosting device.
The forwarder 103 is used to provide service-layer forwarding. For example, the forwarder 103 receives packets in a data stream and forwards the packets to the corresponding service function instance 102.
It should be understood that the numbers of identifiers, service function instances, and forwarders in fig. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers/server clusters, as desired for the implementation.
At present, the service chain standardization work of the various standards organizations in the industry is still under study; each organization has a different technical background and emphasis, and there is no unified standard. The existing service chain orchestration techniques are more theoretical than practical, with relatively high complexity and relatively low feasibility and operability.
According to an embodiment of the present disclosure, a service chain data processing method is provided, which is exemplarily described below. It should be noted that the sequence numbers of the respective steps in the following methods are merely used as representations of the steps for description, and should not be construed as representing the execution order of the respective steps. The method need not be performed in the exact order shown, unless explicitly stated.
Fig. 2A schematically shows a flowchart of a service chain data processing method according to an embodiment of the present disclosure, which may be applied to a host device in a service chain scenario.
As shown in fig. 2A, the method may include operations S210 to S230 as follows.
At least one service chain is deployed in the hosting device in operation S210.
In operation S220, a data stream is received, the data stream including at least one data packet.
In operation S230, for any data packet in the at least one data packet, the data packet is offloaded to a service chain for the data packet in at least one service chain deployed by the host device based on a multi-layer offloading policy, so that the service chain for the data packet processes the data packet.
Those skilled in the art can understand that the service chain data processing method according to the embodiment of the present disclosure deploys one or more service chains in the host device in advance according to actual business needs. Each data packet in the received data stream is then offloaded, based on a multi-layer offloading policy, to the service chain for that data packet, so that the service chain provides a targeted network service for the data packet. As network service requirements grow and become more complicated, the multi-layer offloading policy can ensure high offloading efficiency for large numbers of data streams.
According to an embodiment of the present disclosure, a service chain platform is provided, and the host device may be a system running on the service chain platform. A user of the service chain platform is called a tenant of the service chain. The above process of deploying at least one service chain in the host device may include: receiving request data from a first tenant, the request data including at least one service request index, where different service request indexes characterize the service requirements of the data flows of different services of the first tenant. Then, based on the request data, a service chain architecture for the first tenant is deployed in the host device. The service chain architecture for the first tenant comprises at least one service chain, and the at least one service chain corresponds to the at least one service request index in the request data respectively, that is, the service chains in the service chain architecture for the first tenant correspond one to one to the service request indexes in the request data of the first tenant. For example, the request data of the first tenant includes two service request indexes: the first characterizes the service requirement of the mail service of the first tenant, and the second characterizes the service requirement of the website service of the first tenant. Because the business logic of the mail service and that of the website service are independent of each other, two different service chains need to be deployed to provide network services for the data stream generated by the mail service of the first tenant and the data stream generated by its website service, respectively; these two service chains form the service chain architecture for the first tenant.
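A minimal sketch of this per-tenant deployment could look as follows, assuming the request data is modeled as simple dataclasses; the field names (tenant_id, indicators, node_types) are assumptions made for illustration only.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ServiceRequestIndex:
    service: str            # e.g. "mail" or "website"
    node_types: List[str]   # required service node types, in service order

@dataclass
class RequestData:
    tenant_id: str
    indicators: List[ServiceRequestIndex]

def deploy_tenant_architecture(request: RequestData) -> Dict[str, List[str]]:
    """Deploy one service chain per service request index of the tenant."""
    architecture: Dict[str, List[str]] = {}
    for indicator in request.indicators:
        # One chain per index; all chains of one tenant form its architecture.
        architecture[indicator.service] = list(indicator.node_types)
    return architecture

# Example: the first tenant requests separate chains for its mail and website traffic.
request = RequestData("tenant-A", [
    ServiceRequestIndex("mail", ["FW", "IPS", "AV"]),
    ServiceRequestIndex("website", ["FW", "DPI", "Audit"]),
])
print(deploy_tenant_architecture(request))
```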
According to an embodiment of the present disclosure, any of the at least one service chain includes at least one service node. The at least one service node comprises: at least one serial (online) service node, and/or at least one bypass (offhook/sniffer) service node.
Illustratively, when a service chain for one service request index is deployed, the one or more service nodes required, and the service order among them, need to be determined according to the service request index; one or more service nodes conforming to that service order are then created and deployed. For example, the process of deploying any service node may include: creating a Virtual Machine (VM) in the host device, and creating a Virtual Network Function (VNF) device running in the VM. That is, any service node of the at least one service node may be a virtual network function device operating in a virtual machine environment, and different service nodes of the at least one service node operate in different virtual machine environments. When one service chain includes three service nodes (service nodes a, b, and c), it is necessary to create VM1, VM2, and VM3 in the host device, and to create service node a in VM1, service node b in VM2, and service node c in VM3.
If the service order among the service nodes has a precedence relationship, that is, the service nodes need to execute operations in series, the service nodes can be used as serial service nodes. If the service order of one service node has no relation to the service order of other service nodes, the service node can be used as a bypass service node.
Further, according to an embodiment of the present disclosure, at least one service node in any service chain of the above deployment performs load balancing based on a service request index for the any service chain. Each service request index in the request data includes a service demand capacity, for example, according to data traffic of the mail service, a service chain for the service request index requires three IPS systems and two DPI systems. Therefore, when a service chain for the service request index is deployed, five service nodes need to be constructed, three of the service nodes are IPS systems, and two of the service nodes are DPI systems. So that the service nodes in the service chain can perform load balancing based on the data traffic of the mail service.
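The sizing and load-balancing step might be sketched as below, assuming each node type advertises a per-instance capacity and that the instances of one type are used round-robin; the numbers and helper names are illustrative, not taken from the disclosure.

```python
import itertools
import math
from typing import Dict, List

def size_service_chain(demand: float,
                       capacity_per_instance: Dict[str, float]) -> Dict[str, int]:
    """Decide how many instances of each node type the chain needs for the expected traffic."""
    return {node_type: max(1, math.ceil(demand / cap))
            for node_type, cap in capacity_per_instance.items()}

def make_balancer(instances: List[str]):
    """Round-robin load balancing among the instances of one node type."""
    cycle = itertools.cycle(instances)
    return lambda packet: next(cycle)

# Example: mail traffic of 30 units; one IPS handles 10 units, one DPI handles 15 units,
# so the chain needs three IPS instances and two DPI instances, five service nodes in total.
counts = size_service_chain(30, {"IPS": 10, "DPI": 15})
print(counts)                                    # {'IPS': 3, 'DPI': 2}
ips_balancer = make_balancer([f"IPS-{i}" for i in range(counts["IPS"])])
print([ips_balancer(b"pkt") for _ in range(4)])  # IPS-0, IPS-1, IPS-2, IPS-0
```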
The foregoing describes a process of deploying a service chain architecture according to the request data of the first tenant, and the process of deploying the service chain architecture according to the request data of the other tenants is the same as the above process, and is not described herein again.
Fig. 2B schematically shows a flowchart of a service chaining data processing method according to another embodiment of the present disclosure to exemplarily illustrate a multi-layer offloading policy according to an embodiment of the present disclosure.
As shown in fig. 2B, the foregoing operation S230 of offloading any data packet to a service chain for any data packet in at least one service chain deployed by the host device based on the multi-layer offloading policy may include the following operations S231 to S232.
In operation S231, tenant identification information is extracted from the any one of the packets based on the first offload policy, and a service chain architecture for the tenant identification information is determined in the host device.
Then, in operation S232, based on the second offload policy, service identification information is extracted from the any data packet, and a service chain for the service identification information is determined from a service chain architecture for the tenant identification information.
For example, the header of each data packet in the data stream includes tenant identification information and service identification information. In the multi-layer offloading policy, the first-level offloading policy may be called the tenant policy; it distinguishes packets by preset tenant identification information, which may be, for example, the source IP address, the destination IP address, or the tenant ID of the data stream, without limitation here. The second-level offloading policy may be called the traffic-steering policy; it distinguishes packets by a preset steering rule based on information such as the source IP address, destination IP address, source MAC address, destination MAC address, port, protocol, application characteristics, and service identifier. It should be noted that the second-level offloading policy is a more fine-grained offloading policy relative to the first-level offloading policy. In this embodiment, tenant identification information is used for the first-level offloading, and service identification information under the same tenant identification information is used for the second-level offloading, so that any data packet is offloaded to the service chain determined for its service identification information.
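A two-level lookup of this kind might be sketched as follows, assuming the tenant and service identifiers have already been extracted from the packet header; the dictionary layout and field names are assumptions made for illustration.

```python
from typing import Dict

# architectures[tenant_id][service_id] -> name of the service chain to use
architectures: Dict[str, Dict[str, str]] = {
    "tenant-A": {"mail": "SFC1", "website": "SFC2"},
    "tenant-B": {"web": "SFC3"},
}

def classify(header: Dict[str, str]) -> str:
    """Two-level offload: tenant policy first, then the tenant's steering policy."""
    # First level: tenant identification information selects the tenant's architecture.
    architecture = architectures[header["tenant_id"]]
    # Second level: service identification information selects a chain within that architecture.
    return architecture[header["service_id"]]

print(classify({"tenant_id": "tenant-A", "service_id": "mail"}))  # SFC1
```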
According to an embodiment of the present disclosure, the process of offloading any data packet to a service chain for service identification information may include: when the service chain aiming at the service identification information comprises at least one serial service node, sequentially forwarding any data packet to each serial service node in the at least one serial service node according to the serial sequence of the at least one serial service node so as to be sequentially processed by each serial service node in the at least one serial service node. When the service chain aiming at the service identification information comprises at least one bypass service node, forwarding any data packet to the at least one bypass service node respectively so that the at least one bypass service node processes any data packet independently.
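The forwarding rule for a chain that mixes serial and bypass nodes could be sketched like this, with each service node modeled as a callable; the node model and the stand-in nodes are illustrative assumptions.

```python
from typing import Callable, List, Optional

Node = Callable[[bytes], Optional[bytes]]

def run_chain(packet: bytes, serial_nodes: List[Node],
              bypass_nodes: List[Node]) -> Optional[bytes]:
    # Bypass nodes each receive their own copy and process it independently;
    # their results are not fed back into the serial path.
    for bypass in bypass_nodes:
        bypass(bytes(packet))
    # Serial nodes process the packet one after another, in the serial order of the chain.
    for node in serial_nodes:
        packet = node(packet)
        if packet is None:          # e.g. the firewall or IPS dropped the packet
            return None
    return packet

# Example with trivial stand-in nodes.
firewall = lambda p: p if not p.startswith(b"BAD") else None
ips = lambda p: p
probe = lambda p: print("probe saw", len(p), "bytes")
print(run_chain(b"hello", [firewall, ips], [probe]))
```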
Further, when a service chain is deployed, in order to avoid the low service efficiency caused by a plurality of service nodes having duplicated functions, functions common to the service nodes can be moved to before the data packets enter the service chain and implemented by a common module. Taking decapsulation as an example, in the related art each service node has a decapsulation function, and each service node needs to perform decapsulation once after receiving a data packet, which results in the same processing being repeated many times. Therefore, the service chain data processing method according to the embodiment of the present disclosure may further include: before any data packet is offloaded, based on the multi-layer offloading policy, to the service chain for that data packet in the at least one service chain deployed in the host device, performing unified decapsulation processing on the data packet.
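One way to picture the shared decapsulation step is a small helper that strips outer VLAN/QinQ tags exactly once before classification; the sketch below handles plain 802.1Q/QinQ tags only and is an assumption for illustration, not the disclosed network parsing module.

```python
import struct
from typing import Tuple

VLAN_ETHERTYPES = {0x8100, 0x88A8, 0x9100}   # 802.1Q and common QinQ ethertypes

def strip_vlan_tags(frame: bytes) -> Tuple[bytes, bool]:
    """Remove any outer VLAN/QinQ tags from an Ethernet frame exactly once,
    so that no service node in the chain has to decapsulate again."""
    had_encapsulation = False
    while len(frame) >= 18:
        (ethertype,) = struct.unpack_from("!H", frame, 12)
        if ethertype not in VLAN_ETHERTYPES:
            break
        # Drop the 4-byte tag sitting between the MAC addresses and the inner ethertype.
        frame = frame[:12] + frame[16:]
        had_encapsulation = True
    return frame, had_encapsulation
```

The returned flag records whether an encapsulation header was stripped, so that it can be added back when the packet leaves the chain, as described later.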
It can be understood that the service chain data processing method according to the embodiment of the present disclosure may perform service chain orchestration through multi-level offloading policies and VNF device attribute partitioning, and may forward packets based on the original information in the data packet. Illustratively, the VNF device attributes are mainly partitioned along the following dimensions: access mode, function orthogonalization, load balancing, and the like. Access mode means that some VNFs only collect and analyze user data and do not exert control; such VNFs do not need to be inserted into the service chain in series and can be connected in bypass mode, which simplifies the service chain logic and improves data forwarding efficiency. Function orthogonalization of the VNF devices means that the function of each VNF device is kept as simple and specialized as possible, and some general functions (such as decapsulation) are implemented by a network parsing module before packets enter the service chain, which improves data forwarding performance and facilitates fault localization. Because each VNF device provides different services and has different performance parameters, load balancing needs to be performed according to preset indexes when the service chain is instantiated. Forwarding based on the original information in the data packet means that the data packet is not modified and no service chain header or additional encapsulation segment is added. Compared with the approaches in the related art that modify the data packet, this has the advantages of high performance, good compatibility, and low complexity.
Referring to fig. 3A and 3B, a service chaining data processing method according to an embodiment of the present disclosure is exemplarily described below with reference to a specific example.
Fig. 3A schematically illustrates an example architecture diagram of a host device according to an embodiment of this disclosure.
As shown in fig. 3A, a service chain is deployed in the host device 300, and the service chain includes a VNF device 1, a VNF device 2, and a VNF device 3. The VNF device 1 runs in the virtual machine VM1 301, the VNF device 2 runs in the virtual machine VM2 302, and the VNF device 3 runs in the virtual machine VM3 303. Any two of the VNF devices 1, 2, and 3 may be VNF devices having the same function or VNF devices having different functions. Any of the virtual machines 301, 302, and 303 may be in data communication with the host device 300 through a paravirtualized device abstraction interface specification (e.g., the vhost-user interface specification). In the example shown in fig. 3A, the VNF device 1, the VNF device 2, and the VNF device 3 are connected in series. After entering the host device 300, a data stream first passes, according to the service chain order, through the virtual machine 301 where the VNF device 1 is located, and returns to the host device 300 after the virtual machine 301 performs its service processing. The data stream then passes through the virtual machine 302 where the VNF device 2 is located, and returns to the host device 300 after the virtual machine 302 performs its service processing. Finally, it passes through the virtual machine 303 where the VNF device 3 is located, and returns to the host device 300 after the virtual machine 303 performs its service processing.
Fig. 3B schematically illustrates an example architecture diagram of a host device according to another embodiment of the disclosure.
As shown in fig. 3B, in this example the host device 300 may include: a network parsing module 304, a tenant policy module 305, a traffic-steering policy module 306 of tenant A, and a traffic-steering policy module 307 of tenant B. The host device 300 is deployed with a service chain architecture of tenant A and a service chain architecture of tenant B; the service chain architecture of tenant A includes a service chain 308 and a service chain 309, and the service chain architecture of tenant B includes a service chain 310.
A data stream is steered into the host device 300; the steering manner may be, for example, policy-based routing, and is not limited here. The network parsing module 304 in the host device 300 determines whether the packets in the data flow carry an encapsulation (for example, VLAN encapsulation or QinQ encapsulation, without limitation). If so, the network parsing module 304 strips the encapsulation header data; if not, the subsequent processing continues directly. According to the preset tenant policy, the tenant policy module 305 performs the first-level offloading of the data stream, for example offloading the data packets of tenant A to the traffic-steering policy module 306 of tenant A and the data packets of tenant B to the traffic-steering policy module 307 of tenant B, based on the tenant identification information in the packets. The data stream distributed to the traffic-steering policy module 306 of tenant A is offloaded a second time by that module, and the data stream distributed to the traffic-steering policy module 307 of tenant B is offloaded a second time by that module. When deploying service chains, each tenant defines its service chains and creates service chain instances according to its business needs: tenant A defines two service chains, SFC1 308 and SFC2 309, and tenant B defines one service chain, SFC3 310. The service chain SFC1 308 originally defines the sequence of service nodes as: firewall 3081 → intrusion prevention system 3082 → network probe 3083 → antivirus system 3084. In this example, analysis of the service logic of the VNF devices of these service nodes shows that the network probe only analyzes tenant data and does not perform flow control, so the network probe is used as a bypass service node. The finally deployed service chain 308 therefore includes the serial service nodes firewall 3081 → intrusion prevention system 3082 → antivirus system 3084, and the bypass service node network probe 3083. The service chain SFC2 309 originally defines the service node sequence as: firewall 3091 → deep packet inspection 3092 → content audit 3093; analysis shows that no VNF device can serve as a bypass service node, so the serial service nodes are deployed in this order. The service chain SFC3 310 originally defines the sequence of service nodes as: firewall 3101 → intrusion prevention system 3102 → deep packet inspection 3103 → content audit 3104; analysis likewise determines that no VNF device can serve as a bypass service node, and the serial service nodes are deployed in this order.
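Expressed as configuration data, the example deployment above might look like the following sketch; the dictionary structure is an illustrative assumption.

```python
# Service chain architectures deployed in the example of Fig. 3B.
# Each chain lists its serial service nodes in order, plus any bypass nodes.
service_chain_architectures = {
    "tenant-A": {
        "SFC1": {
            "serial": ["firewall", "intrusion prevention system", "antivirus system"],
            "bypass": ["network probe"],   # analyzes traffic only, taken off the serial path
        },
        "SFC2": {
            "serial": ["firewall", "deep packet inspection", "content audit"],
            "bypass": [],
        },
    },
    "tenant-B": {
        "SFC3": {
            "serial": ["firewall", "intrusion prevention system",
                       "deep packet inspection", "content audit"],
            "bypass": [],
        },
    },
}
```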
Each service node on a service chain corresponds to a VNF device as shown in fig. 3A. When instantiating a service chain, the performance indexes of the various VNF devices need to be considered and load balancing performed; for example, after evaluating the performance parameters, the service chain SFC1 308 of tenant A deploys two intrusion prevention systems. In addition, since each VNF device is deployed on one virtual machine VM, the service chain SFC1 308 of tenant A needs five virtual machines: the firewall, the antivirus system, and the network probe each occupy one, and the intrusion prevention system occupies two.
The data flow passes from the host device 300 into the virtual machine of each VNF device in the service chain and returns to the host device 300 after the service processing is completed. Then, according to the judgment of the network parsing module 304, if the data flow was decapsulated earlier, the encapsulation header information is added back. For a VNF device on a bypass path, since it has no serial relationship with other VNF devices, its processing result need not be returned to the host device 300 and may be sent directly to a pre-agreed receiver.
When the data stream is processed by the various VNF devices in the service chain, since the network parsing module 304 has already performed the general operations such as decapsulation and encapsulation, each VNF device only needs to perform its own service function, and the network deployment mode of each VNF device only needs to be the simplest bridge mode. This improves forwarding efficiency and simplifies the complexity of network deployment.
The service chain data processing method according to the embodiment of the disclosure has the following beneficial effects: (1) the architecture is open, and the VNF devices may come from the same vendor or from different vendors; (2) service chain data forwarding control is flexible, efficient, and highly compatible; (3) VNF devices are easy to deploy, the division of service logic is clear, and fault localization is simple; (4) few restrictions and requirements are imposed on the VNF devices, giving strong practicality and operability.
Fig. 4 schematically shows a block diagram of a service chaining data processing apparatus according to an embodiment of the present disclosure, applicable to a host device.
As shown in fig. 4, the service chain data processing apparatus 400 may include: a deployment module 410, a receiving module 420, and a processing module 430.
The deployment module 410 is configured to deploy at least one service chain in the hosting device.
The receiving module 420 is configured to receive a data stream, where the data stream includes at least one data packet.
The processing module 430 is configured to, for any data packet in the at least one data packet, offload the any data packet to a service chain for the any data packet in at least one service chain deployed by the host device based on a multi-layer offload policy, so that the service chain for the any data packet processes the any data packet.
It should be noted that the implementation, solved technical problems, realized functions, and achieved technical effects of each module/unit/subunit and the like in the apparatus part embodiment are respectively the same as or similar to the implementation, solved technical problems, realized functions, and achieved technical effects of each corresponding step in the method part embodiment, and are not described herein again.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any of the deployment module 410, the reception module 420, and the processing module 430 may be combined in one module to be implemented, or any one of them may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the deployment module 410, the receiving module 420, and the processing module 430 may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware. Alternatively, at least one of the deployment module 410, the reception module 420 and the processing module 430 may be at least partially implemented as a computer program module, which when executed may perform the respective functions.
Fig. 5 schematically shows a block diagram of a computer device adapted to implement the above described method according to an embodiment of the present disclosure. The computer device shown in fig. 5 is only an example and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
As shown in fig. 5, computer device 500 includes a processor 510 and a computer-readable storage medium 520. The computer device 500 may perform a method according to an embodiment of the present disclosure.
In particular, processor 510 may include, for example, a general purpose microprocessor, an instruction set processor and/or related chip sets and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 510 may also include on-board memory for caching purposes. Processor 510 may be a single processing unit or a plurality of processing units for performing different actions of a method flow according to embodiments of the disclosure.
Computer-readable storage media 520, for example, may be non-volatile computer-readable storage media, specific examples including, but not limited to: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and so on.
The computer-readable storage medium 520 may include a computer program 521, which computer program 521 may include code/computer-executable instructions that, when executed by the processor 510, cause the processor 510 to perform a method according to an embodiment of the disclosure, or any variation thereof.
The computer program 521 may be configured with, for example, computer program code comprising computer program modules. For example, in an example embodiment, the code in the computer program 521 may include one or more program modules, including, for example, module 521A, module 521B, and so on. It should be noted that the division and number of modules are not fixed, and those skilled in the art may use suitable program modules or combinations of program modules according to the actual situation; when these program modules are executed by the processor 510, the processor 510 may perform the method according to the embodiment of the present disclosure or any variation thereof.
According to an embodiment of the present disclosure, at least one of the deployment module 410, the receiving module 420 and the processing module 430 may be implemented as a computer program module as described with reference to fig. 5, which, when executed by the processor 510, may implement the service chain data processing method described above.
Another aspect of the disclosure provides a computer program product comprising computer executable instructions for implementing the method as described above when executed.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or combinations of features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or combinations are not expressly recited in the present disclosure. In particular, various combinations and/or combinations of the features recited in the various embodiments of the present disclosure and/or the claims may be made without departing from the spirit and teachings of the present disclosure. All such combinations and/or associations are within the scope of the present disclosure.
While the disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents. Accordingly, the scope of the present disclosure should not be limited to the above-described embodiments, but should be defined not only by the appended claims, but also by equivalents thereof.

Claims (6)

1. A service chain data processing method is applied to host equipment, and the method comprises the following steps:
deploying at least one service chain in the host device, wherein the deploying at least one service chain in the host device comprises: receiving request data from a first tenant, the request data comprising at least one service request indicator; deploying, in the host device, a service chain architecture for the first tenant based on the request data, the service chain architecture for the first tenant comprising at least one service chain, the at least one service chain corresponding to the at least one service request indicator, respectively;
receiving a data stream, the data stream comprising at least one data packet; and
decapsulating any data packet of the at least one data packet to remove encapsulation header data, and offloading, based on a preset multi-layer offload policy and on original information in the data packet, the any data packet to a service chain for the any data packet in the at least one service chain, so that the service chain for the any data packet processes the any data packet;
the offloading, based on the multi-layer offloading policy, the any data packet to the service chain for the any data packet in the at least one service chain includes:
extracting tenant identification information from any data packet based on a first shunt strategy;
determining, in the host device, a service chain architecture for the tenant identification information;
extracting service identification information from any data packet based on a second flow distribution strategy;
determining a service chain for the service identification information from the service chain architecture for the tenant identification information; and
distributing any data packet to the service chain aiming at the service identification information;
wherein any of the at least one service chain comprises at least one service node, the at least one service node comprising: at least one serial service node, and/or at least one bypass service node; and at least one service node in any service chain performs load balancing based on the service request indicator for the any service chain.
2. The method of claim 1, wherein the offloading the any data packet to the service chain for the service identification information comprises:
when the service chain aiming at the service identification information comprises at least one serial service node, sequentially forwarding any data packet to the at least one serial service node according to the serial sequence of the at least one serial service node so as to be sequentially processed by the at least one serial service node; and
when the service chain aiming at the service identification information comprises at least one bypass service node, forwarding any data packet to the at least one bypass service node respectively so as to be processed by the at least one bypass service node respectively.
3. The method of claim 1, wherein any of the at least one service node is a virtual network function device operating in a virtual machine environment, and different service nodes of the at least one service node operate in different virtual machine environments.
4. A service chain data processing device applied to a host device comprises:
a deployment module configured to deploy at least one service chain in the host device, wherein the deploying at least one service chain in the host device comprises: receiving request data from a first tenant, the request data comprising at least one service request indicator; deploying, in the host device, a service chain architecture for the first tenant based on the request data, the service chain architecture for the first tenant comprising at least one service chain, the at least one service chain corresponding to the at least one service request indicator, respectively;
a receiving module, configured to receive a data stream, where the data stream includes at least one data packet; and
a processing module, configured to decapsulate any data packet of the at least one data packet to remove encapsulation header data, and to offload, based on a multi-layer offload policy and on original information in the data packet, the any data packet to a service chain for the any data packet in the at least one service chain, so that the service chain for the any data packet processes the any data packet;
the offloading, based on the multi-layer offloading policy, the any data packet to the service chain for the any data packet in the at least one service chain includes:
extracting tenant identification information from any data packet based on a first shunt strategy;
determining, in the host device, a service chain architecture for the tenant identification information;
extracting service identification information from any data packet based on a second flow distribution strategy;
determining a service chain for the service identification information from the service chain architecture for the tenant identification information; and
distributing any data packet to the service chain aiming at the service identification information;
wherein any of the at least one service chain comprises at least one service node, the at least one service node comprising: at least one serial service node, and/or at least one bypass service node; and at least one service node in any service chain performs load balancing based on the service request indicator for the any service chain.
5. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor when executing the computer program being adapted to implement the service chaining data processing method of any one of claims 1 to 3.
6. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the service chain data processing method of any one of claims 1 to 3.
CN201911424500.7A 2019-12-31 2019-12-31 Service chain data processing method and device and computer equipment Active CN111163004B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911424500.7A CN111163004B (en) 2019-12-31 2019-12-31 Service chain data processing method and device and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911424500.7A CN111163004B (en) 2019-12-31 2019-12-31 Service chain data processing method and device and computer equipment

Publications (2)

Publication Number Publication Date
CN111163004A CN111163004A (en) 2020-05-15
CN111163004B true CN111163004B (en) 2023-03-31

Family

ID=70560717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911424500.7A Active CN111163004B (en) 2019-12-31 2019-12-31 Service chain data processing method and device and computer equipment

Country Status (1)

Country Link
CN (1) CN111163004B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114124849A (en) * 2021-12-03 2022-03-01 北京天融信网络安全技术有限公司 Method and device for realizing service chain based on vhost-user
CN114827045B (en) * 2022-06-23 2022-09-13 天津天睿科技有限公司 Method and device for flow arrangement
CN115878675B (en) * 2023-01-29 2023-06-16 深圳市普拉托科技有限公司 Multi-component data stream query method, system, equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105681230A (en) * 2014-11-21 2016-06-15 中兴通讯股份有限公司 Data processing method and equipment for use in service chain
CN107896195A (en) * 2017-11-16 2018-04-10 锐捷网络股份有限公司 Service chaining method of combination, device and service chaining topological structure
CN108199889A (en) * 2018-01-11 2018-06-22 上海有云信息技术有限公司 Creation method, device, server and the storage medium of service chaining

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9462084B2 (en) * 2014-12-23 2016-10-04 Intel Corporation Parallel processing of service functions in service function chains
CN107204866A (en) * 2016-03-18 2017-09-26 上海有云信息技术有限公司 The implementation method of multi-tenant service chaining transmission is solved based on VXLAN technologies
US20170317936A1 (en) * 2016-04-28 2017-11-02 Cisco Technology, Inc. Selective steering network traffic to virtual service(s) using policy
CN107592234A (en) * 2017-11-03 2018-01-16 睿石网云(北京)科技有限公司 Method, system and the computer-readable recording medium of service link fault location
CN107911258B (en) * 2017-12-29 2021-09-17 深信服科技股份有限公司 SDN network-based security resource pool implementation method and system
CN109842528B (en) * 2019-03-19 2020-10-27 西安交通大学 Service function chain deployment method based on SDN and NFV

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105681230A (en) * 2014-11-21 2016-06-15 中兴通讯股份有限公司 Data processing method and equipment for use in service chain
CN107896195A (en) * 2017-11-16 2018-04-10 锐捷网络股份有限公司 Service chaining method of combination, device and service chaining topological structure
CN108199889A (en) * 2018-01-11 2018-06-22 上海有云信息技术有限公司 Creation method, device, server and the storage medium of service chaining

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"多域网络安全服务编排系统的设计与实现";李天龙;《中国优秀硕士学位论文全文数据库信息科技辑》;20180715;全文 *

Also Published As

Publication number Publication date
CN111163004A (en) 2020-05-15

Similar Documents

Publication Publication Date Title
US11265187B2 (en) Specifying and utilizing paths through a network
US11893409B2 (en) Securing a managed forwarding element that operates within a data compute node
US11750476B2 (en) Service operation chaining
CN114745332B (en) System and network controller for facilitating flow symmetry of service chains in a computer network
US10659252B2 (en) Specifying and utilizing paths through a network
US9923815B2 (en) Network based service function chaining on top of rack switches
US9590907B2 (en) Service chaining in a cloud environment using software defined networking
US11824778B2 (en) Dynamic chain of service functions for processing network traffic
US10476699B2 (en) VLAN to VXLAN translation using VLAN-aware virtual machines
US10177936B2 (en) Quality of service (QoS) for multi-tenant-aware overlay virtual networks
CN111163004B (en) Service chain data processing method and device and computer equipment
US9871720B1 (en) Using packet duplication with encapsulation in a packet-switched network to increase reliability
WO2019147316A1 (en) Specifying and utilizing paths through a network
US20180083837A1 (en) Application-based network segmentation in a virtualized computing environment
US9590855B2 (en) Configuration of transparent interconnection of lots of links (TRILL) protocol enabled device ports in edge virtual bridging (EVB) networks
US10877822B1 (en) Zero-copy packet transmission between virtualized computing instances
US20230066013A1 (en) Dual user space-kernel space datapaths for packet processing operations
US11115337B2 (en) Network traffic segregation on an application basis in a virtual computing environment
US9853885B1 (en) Using packet duplication in a packet-switched network to increase reliability

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 332, 3 / F, Building 102, 28 xinjiekouwei street, Xicheng District, Beijing 100088

Applicant after: Qianxin Technology Group Co.,Ltd.

Applicant after: Qianxin Wangshen information technology (Beijing) Co.,Ltd.

Address before: Room 332, 3 / F, Building 102, 28 xinjiekouwei street, Xicheng District, Beijing 100088

Applicant before: Qianxin Technology Group Co.,Ltd.

Applicant before: LEGENDSEC INFORMATION TECHNOLOGY (BEIJING) Inc.

GR01 Patent grant