WO2018153355A1 - Control information delivery method, server and system - Google Patents

Control information delivery method, server and system

Info

Publication number
WO2018153355A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
control information
server
virtual machine
control
Prior art date
Application number
PCT/CN2018/077070
Other languages
English (en)
French (fr)
Inventor
康明
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2018153355A1

Classifications

    • H: Electricity
        • H04: Electric communication technique
            • H04L: Transmission of digital information, e.g. telegraphic communication
                • H04L 49/00: Packet switching elements
                    • H04L 49/30: Peripheral units, e.g. input or output ports
                    • H04L 49/70: Virtual switches
                • H04L 65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
                    • H04L 65/10: Architectures or entities
                        • H04L 65/1046: Call controllers; call servers
                    • H04L 65/40: Support for services or applications
                • H04L 67/00: Network arrangements or protocols for supporting network services or applications
                    • H04L 67/01: Protocols
                        • H04L 67/08: Protocols specially adapted for terminal emulation, e.g. Telnet
                    • H04L 67/50: Network services
                        • H04L 67/56: Provisioning of proxy services

Definitions

  • the present application relates to the field of communications, and in particular, to a control information delivery method, server, and system.
  • the traditional telecommunication system consists of various dedicated hardware devices, and different applications use different hardware devices.
  • Network function virtualization (NFV) allows network functions to run as virtual network functions (VNFs) on common commercial off-the-shelf (COTS) servers.
  • In NFV, the upper-layer service is decoupled from the underlying hardware: each service can quickly add virtual resources to expand system capacity, or quickly release virtual resources to shrink it, greatly improving the flexibility of the network.
  • The service virtual machines in a VNF send and receive data packets through the underlying virtual switch, but the virtual switch in a general-purpose COTS server only forwards data and cannot perform functions on behalf of the upper-layer virtual machines, so invalid or abnormal data packets are forwarded to the upper-layer virtual machines and consume resources unnecessarily.
  • Embodiments of the present application provide a control information delivery method, server, and system, so that a virtual machine in an NFV system can send control information to a virtual switch, enabling the virtual switch to implement functions specific to the virtual machine.
  • A first aspect provides a control information delivery method, the method comprising: a first device receiving control information from the service software of a virtual machine; and the first device sending the control information to a second device. The first device is a virtual agent device front end configured in the virtual machine, and the second device is a virtual agent device back end configured in the virtual network of the virtual resource layer; or the first device is the virtual network function manager (VNFM) and the second device is the virtualized infrastructure manager (VIM).
  • In this way, the service software of the virtual machine transfers the control information to the virtual switch through the first device and the second device, so that a virtual machine in the NFV system can send control information to the virtual switch and the virtual switch can implement functions specific to the virtual machine.
  • The method further includes: the first device receiving control result information from the second device, the control result information indicating whether the control information was configured successfully; and the first device sending the control result information to the service software of the virtual machine.
  • In the foregoing embodiment, the virtual switch feeds the control result information back to the service software of the virtual machine through the second device and the first device.
  • The control information is used for anti-attack or call session bandwidth control, and includes: a virtual machine identifier, a flow rule type, a flow processing operation type, and a parameter package. The flow processing operation type is used to indicate addition, modification, or deletion. When the flow rule type is access control list (ACL), the parameter package includes a source Internet Protocol (IP) address, a source port number, a destination IP address, a destination port number, and a filtering action; the filtering action is used to indicate whether a matching packet is allowed to pass or is discarded. When the flow rule type is call admission control (CAC), the parameter package includes a source IP address, a source port number, a destination IP address, a destination port number, and an allowed bandwidth.
  • This embodiment specifically discloses the content of the control information.
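As a rough illustration, the control information described above could be modeled as follows. This is a minimal Python sketch; all class and field names are illustrative stand-ins, not part of the disclosure:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class FlowRuleType(Enum):
    ACL = "acl"   # access control list: pass or discard matching packets
    CAC = "cac"   # call admission control: enforce an allowed bandwidth

class FlowOp(Enum):
    ADD = "add"
    MODIFY = "modify"
    DELETE = "delete"

@dataclass
class ParameterPackage:
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int
    action: Optional[str] = None                  # ACL only: "pass"/"discard"
    allowed_bandwidth_bps: Optional[int] = None   # CAC only

@dataclass
class ControlInfo:
    vm_id: str                 # virtual machine identifier
    rule_type: FlowRuleType    # flow rule type
    op: FlowOp                 # flow processing operation type
    params: ParameterPackage   # parameter package

# An ACL rule asking the virtual switch to discard packets of one flow:
rule = ControlInfo(
    vm_id="sbc-vm-1",
    rule_type=FlowRuleType.ACL,
    op=FlowOp.ADD,
    params=ParameterPackage("198.51.100.7", 4000, "203.0.113.1", 5060,
                            action="discard"),
)
```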
  • A second aspect provides a control information transmission method, comprising: a second device receiving control information from a first device; and the second device configuring the control information to a virtual switch. The first device is a virtual agent device front end configured in the virtual machine, and the second device is a virtual agent device back end configured in the virtual network of the virtual resource layer; or the first device is the virtual network function manager (VNFM) and the second device is the virtualized infrastructure manager (VIM).
  • In this way, the service software of the virtual machine transfers the control information to the virtual switch through the first device and the second device, so that a virtual machine in the NFV system can send control information to the virtual switch and the virtual switch can implement functions specific to the virtual machine.
  • The method further includes: the second device receiving control result information from the virtual switch, the control result information indicating whether the control information was configured successfully; and the second device sending the control result information to the first device.
  • In the foregoing embodiment, the virtual switch feeds the control result information back to the service software of the virtual machine through the second device and the first device.
  • The control information is used for anti-attack or call session bandwidth control, and includes: a virtual machine identifier, a flow rule type, a flow processing operation type, and a parameter package. The flow processing operation type is used to indicate addition, modification, or deletion. When the flow rule type is access control list (ACL), the parameter package includes a source Internet Protocol (IP) address, a source port number, a destination IP address, a destination port number, and a filtering action; the filtering action is used to indicate whether a matching packet is allowed to pass or is discarded. When the flow rule type is call admission control (CAC), the parameter package includes a source IP address, a source port number, a destination IP address, a destination port number, and an allowed bandwidth.
  • This embodiment specifically discloses the content of the control information.
  • A third aspect provides a network function virtualization infrastructure (NFVI) server, including: a virtual agent device front end, configured to receive control information from the service software of a virtual machine and send it to a virtual agent device back end, where the control information is used for anti-attack or call session bandwidth control, the virtual agent device front end is configured in the virtual machine, and the virtual agent device back end is configured in the virtual network of the virtual resource layer; and the virtual agent device back end, configured to receive the control information from the virtual agent device front end and send it to the virtual switch.
  • The control information generated by the service software of the virtual machine is transmitted to the virtual switch through the virtual agent device front end located in the virtual machine and the virtual agent device back end located in the virtual network of the virtual resource layer, so that a virtual machine in the NFV system can send control information to the virtual switch and the virtual switch can implement functions specific to the virtual machine.
  • The virtual agent device back end is further configured to receive control result information from the virtual switch and send it to the virtual agent device front end, the control result information indicating whether the control information was configured successfully; the virtual agent device front end is further configured to receive the control result information from the virtual agent device back end and send it to the service software of the virtual machine.
  • In the above embodiment, the virtual switch feeds the control result information back to the service software of the virtual machine through the virtual agent device back end and the virtual agent device front end.
  • The control information is used for anti-attack or call session bandwidth control, and includes: a virtual machine identifier, a flow rule type, a flow processing operation type, and a parameter package. The flow processing operation type is used to indicate addition, modification, or deletion. When the flow rule type is access control list (ACL), the parameter package includes a source Internet Protocol (IP) address, a source port number, a destination IP address, a destination port number, and a filtering action; the filtering action is used to indicate whether a matching packet is allowed to pass or is discarded. When the flow rule type is call admission control (CAC), the parameter package includes a source IP address, a source port number, a destination IP address, a destination port number, and an allowed bandwidth.
  • This embodiment specifically discloses the content of the control information.
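The relay this aspect describes (service software → front end → back end → virtual switch, with the control result fed back along the reverse path) can be sketched as follows. The Python class and method names are hypothetical stand-ins, not the disclosed interfaces:

```python
class VirtualSwitch:
    """Stands in for the virtual switch in the virtual network."""
    def __init__(self):
        self.rules = []

    def configure(self, control_info):
        # Accept the rule and report whether configuration succeeded.
        self.rules.append(control_info)
        return {"configured": True, "rule": control_info}

class AgentBackend:
    """Virtual agent device back end, in the virtual resource layer."""
    def __init__(self, vswitch):
        self.vswitch = vswitch

    def deliver(self, control_info):
        # Configure the rule on the virtual switch; relay the result upward.
        return self.vswitch.configure(control_info)

class AgentFrontend:
    """Virtual agent device front end, inside the virtual machine."""
    def __init__(self, backend):
        self.backend = backend

    def send(self, control_info):
        # Called by the VM's service software; returns the control result
        # so the service software learns whether configuration succeeded.
        return self.backend.deliver(control_info)

vswitch = VirtualSwitch()
frontend = AgentFrontend(AgentBackend(vswitch))
result = frontend.send({"vm_id": "sbc-vm-1", "rule_type": "acl", "op": "add"})
```

The same shape applies to the VNFM/VIM variant of the fourth and fifth aspects, with the VNFM playing the front-end role and the VIM the back-end role.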
  • A fourth aspect provides a virtual network function manager (VNFM) server, including: a receiving unit, configured to receive control information from the service software of a virtual machine; and a sending unit, configured to send the control information to the virtualized infrastructure manager (VIM).
  • In this way, the service software in the virtual machine configures the control information to the virtual switch through the VNFM and the VIM, so that the virtual machine can send control information to the virtual switch and the virtual switch can implement functions specific to the virtual machine.
  • Since the VNFM and the VIM are both existing devices in the current NFV architecture, this solution requires no additional equipment and is more economical.
  • The receiving unit is further configured to receive control result information from the VIM, the control result information indicating whether the control information was configured successfully; and the sending unit is further configured to send the control result information to the service software of the virtual machine.
  • The control information is used for anti-attack or call session bandwidth control, and includes: an identifier of the SBC virtual machine, a flow rule type, a flow processing operation type, and a parameter package. The flow processing operation type is used to indicate addition, modification, or deletion. When the flow rule type is access control list (ACL), the parameter package includes a source Internet Protocol (IP) address, a source port number, a destination IP address, a destination port number, and a filtering action; the filtering action is used to indicate whether a matching packet is allowed to pass or is discarded. When the flow rule type is call admission control (CAC), the parameter package includes a source IP address, a source port number, a destination IP address, a destination port number, and an allowed bandwidth.
  • This embodiment specifically discloses the content of the control information.
  • A fifth aspect provides a virtualized infrastructure manager (VIM) server, including: a receiving unit, configured to receive control information from the virtual network function manager (VNFM); and a sending unit, configured to configure the control information to the virtual switch.
  • In this way, the service software in the virtual machine configures the control information to the virtual switch through the VNFM and the VIM, so that the virtual machine can send control information to the virtual switch and the virtual switch can implement functions specific to the virtual machine.
  • The receiving unit is further configured to receive control result information from the virtual switch, the control result information indicating whether the control information was configured successfully; and the sending unit is further configured to send the control result information to the VNFM.
  • In the foregoing implementation, the virtual switch feeds the control result information back to the service software of the virtual machine through the VIM and the VNFM.
  • The control information is used for anti-attack or call session bandwidth control, and includes: an identifier of the SBC virtual machine, a flow rule type, a flow processing operation type, and a parameter package. The flow processing operation type is used to indicate addition, modification, or deletion. When the flow rule type is access control list (ACL), the parameter package includes a source Internet Protocol (IP) address, a source port number, a destination IP address, a destination port number, and a filtering action; the filtering action is used to indicate whether a matching packet is allowed to pass or is discarded. When the flow rule type is call admission control (CAC), the parameter package includes a source IP address, a source port number, a destination IP address, a destination port number, and an allowed bandwidth.
  • This embodiment specifically discloses the content of the control information.
  • A sixth aspect of the embodiments of the present application provides a network function virtualization infrastructure (NFVI) server, including: a processor, a memory, a bus, and a communication interface. The memory is configured to store computer-executable instructions, and the processor is connected to the memory through the bus; when the NFVI server runs, the processor executes the computer-executable instructions stored in the memory, causing the NFVI server to perform the control information delivery method of any implementation of the first aspect above.
  • A seventh aspect of the embodiments of the present application provides a virtual network function manager (VNFM) server, including: a processor, a memory, a bus, and a communication interface. The memory is configured to store computer-executable instructions, and the processor is connected to the memory through the bus; when the VNFM server runs, the processor executes the computer-executable instructions stored in the memory, causing the VNFM server to perform the control information delivery method of any implementation of the first aspect above.
  • An eighth aspect of the embodiments of the present application provides a virtualized infrastructure manager (VIM) server, including: a processor, a memory, a bus, and a communication interface. The memory is configured to store computer-executable instructions, and the processor is connected to the memory through the bus; when the VIM server runs, the processor executes the computer-executable instructions stored in the memory, causing the VIM server to perform the control information transmission method of any implementation of the second aspect above.
  • An embodiment of the present application provides a computer storage medium comprising instructions which, when run on a computer, cause the computer to perform the control information delivery method described in the first aspect.
  • An embodiment of the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the control information delivery method described in the first aspect.
  • An embodiment of the present application provides a computer storage medium comprising instructions which, when run on a computer, cause the computer to perform the control information transmission method described in the second aspect.
  • An embodiment of the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the control information transmission method described in the second aspect.
  • An embodiment of the present application provides a network function virtualization (NFV) communication system, including: the NFVI server described in the third aspect; or the VNFM server described in the fourth aspect together with the VIM server described in the fifth aspect; or the NFVI server described in the sixth aspect; or the VNFM server described in the seventh aspect together with the VIM server described in the eighth aspect.
  • FIG. 1 is a schematic structural diagram of an NFV system according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an SBC anti-attack or call bandwidth control function in the prior art.
  • FIG. 3 is a schematic diagram of another SBC anti-attack or call bandwidth control function in the prior art.
  • FIG. 4 is a schematic diagram of the hardware structure of a server according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of the hardware structure of another server according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of the hardware structure of still another server according to an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of a method for transmitting control information according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of an apparatus related to a method for transmitting control information according to an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of another method for transmitting control information according to an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of still another method for transmitting control information according to an embodiment of the present application.
  • FIG. 11 is a schematic flowchart of still another method for transmitting control information according to an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of another apparatus for transmitting control information according to an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of an NFVI server according to an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of another NFVI server according to an embodiment of the present disclosure.
  • FIG. 15 is a schematic structural diagram of still another NFVI server according to an embodiment of the present application.
  • FIG. 16 is a schematic structural diagram of a VNFM server according to an embodiment of the present disclosure.
  • FIG. 17 is a schematic structural diagram of another VNFM server according to an embodiment of the present disclosure.
  • FIG. 18 is a schematic structural diagram of still another VNFM server according to an embodiment of the present application.
  • FIG. 19 is a schematic structural diagram of a VIM server according to an embodiment of the present disclosure.
  • FIG. 20 is a schematic structural diagram of another VIM server according to an embodiment of the present disclosure.
  • FIG. 21 is a schematic structural diagram of still another VIM server according to an embodiment of the present application.
  • the NFV system architecture provided by the embodiment of the present application is as shown in FIG. 1.
  • the NFV system 100 can be used in various networks, for example, in a data center network, a carrier network, or a local area network.
  • The NFV system 100 includes: NFV management and orchestration (NFV MANO) 101; NFV infrastructure (NFVI) 102; multiple virtual network functions (VNFs) 103; element management (EM) 104; network service, VNF and infrastructure description 105; and operations support system/business support system (OSS/BSS) 106.
  • the NFV management and orchestration system 101 includes an NFV orchestrator (NFVO) 1011, one or more VNF managers (VNFM) 1012, and a virtualized infrastructure manager (VIM) 1013.
  • Network services, VNF and infrastructure descriptions 105 and OSS/BSS 106 are discussed further in the ETSI GS NFV 002 V1.1.1 standard.
  • the NFV MANO 101 is used to perform monitoring and management of the VNF 103 and NFVI 102.
  • The NFVO 1011 may implement network services on the NFVI 102 (e.g., Layer 2 (L2) and Layer 3 (L3) virtual private network (VPN) services), and, in response to resource-related requests from one or more VNFMs 1012, may send configuration information to the VNFM 1012 and collect status information of the VNF 103.
  • NFVO 1011 can communicate with VIM 1013 to enable resource allocation and/or reservation and to exchange configuration and status information for virtualized hardware resources.
  • the VNFM 1012 can manage one or more VNFs 103.
  • The VNFM 1012 can perform various management functions, such as instantiating, updating, querying, scaling, and/or terminating the VNF 103.
  • the VIM 1013 can perform resource management functions such as managing the allocation of infrastructure resources (eg, adding resources to virtual containers) and operational functions (such as collecting NFVI failure information).
  • the VNFM 1012 and the VIM 1013 can communicate with each other for resource allocation and exchange of configuration and status information of virtualized hardware resources.
  • the NFVI 102 includes a hardware resource layer 1021, a virtual resource layer (software resource) 1022, and a virtualization layer 1023.
  • NFVI 102 accomplishes the deployment of a virtualized environment through hardware resources, software resources, or a combination of both.
  • the hardware resource layer 1021 and the virtualization layer 1023 are used to provide virtualized resources, such as virtual machines (VMs) and other forms of virtual containers for the VNF 103.
  • the hardware resource layer 1021 includes computing hardware 10211, storage hardware 10212, and network hardware 10213.
  • Computing hardware 10211 may be off-the-shelf hardware and/or user-customized hardware used to provide processing and computing resources.
  • Storage hardware 10212 may be storage capacity provided within the network or storage capacity resident in storage hardware 10212 itself (local storage located within the server).
  • Network hardware 10213 can be a switch, a router, and/or any other network device configured to have switching functionality.
  • Network hardware 10213 can span multiple domains and can include multiple networks interconnected by one or more transport networks.
  • The virtualization layer 1023 within the NFVI 102 abstracts hardware resources from the physical layer and decouples the VNF 103 from them, providing virtualized resources to the VNF 103.
  • The virtual resource layer 1022 includes virtual computing 10221, virtual storage 10222, and a virtual network 10223.
  • Virtual computing 10221 and virtual storage 10222 may be provided to VNF 103 in the form of virtual machines and/or other virtual containers.
  • one or more VNFs 103 can be deployed on a virtual machine.
  • the virtualization layer 1023 abstracts the network hardware 10213 to form a virtual network 10223.
  • the virtual network 10223 can include a virtual switch (VS) that is used to provide a connection between the virtual machine and other virtual machines.
  • the transport network in network hardware 10213 can be virtualized using a centralized control plane and a separate forwarding plane (eg, software defined network (SDN)).
  • VNFM 1012 can interact with VNF 103 and EM 104 to manage the lifecycle of the VNF and exchange configuration and status information.
  • the VNF 103 can be configured to virtualize at least one network function performed by one physical network device.
  • For example, the VNF 103 can be configured to provide the functions of different network elements in an IP multimedia subsystem (IMS) network, such as a proxy call session control function (P-CSCF), a serving call session control function (S-CSCF), or a home subscriber server (HSS).
  • The EM 104 is configured to manage one or more VNFs 103. A VNF 103 may provide a function such as a session border controller (SBC), a firewall, or a packet data network gateway (PGW).
  • SBC devices in telecommunications systems need to support anti-attack and session-level call bandwidth control.
  • The traditional SBC device implements this by configuring a blacklist/whitelist, i.e., an access control list (ACL, used hereinafter), at the network-entry hardware layer.
  • The address information of each network data packet is looked up in the ACL, and the packet is released or discarded according to the policy configured in the ACL.
  • the ACL data is provided in two ways: static manual configuration or dynamic configuration after attack detection.
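The ACL lookup at the network entry can be illustrated with a small sketch. This is hypothetical Python; a real entry layer matches in hardware and chooses its own default policy for unmatched packets:

```python
def apply_acl(packet, acl):
    """Look up a packet's address tuple in the ACL and decide its fate.

    `acl` maps (src_ip, src_port, dst_ip, dst_port) tuples to a filtering
    action, "pass" or "discard". Unmatched packets are passed here.
    """
    key = (packet["src_ip"], packet["src_port"],
           packet["dst_ip"], packet["dst_port"])
    return acl.get(key, "pass")

# One dynamically configured entry blacklisting a detected attack source:
acl = {("198.51.100.7", 4000, "203.0.113.1", 5060): "discard"}

attack = {"src_ip": "198.51.100.7", "src_port": 4000,
          "dst_ip": "203.0.113.1", "dst_port": 5060}
normal = {"src_ip": "192.0.2.10", "src_port": 5060,
          "dst_ip": "203.0.113.1", "dst_port": 5060}
```

Both the static manual path and the dynamic post-detection path described above would end in entries of this shape.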
  • For call session bandwidth control, the SBC delivers, per session, the network address combination and the allowed bandwidth, forming a call admission control (CAC) data table.
  • The network-entry hardware layer looks up this data table when processing network messages; if a message's address combination is in the table, the bandwidth consumed by messages of that combination is counted, and if the bandwidth consumption of the combination exceeds the allowed bandwidth in the table, the packet is discarded and the related information is recorded.
  • In this way, attack traffic, or abnormal traffic exceeding the allowed bandwidth of a call session, can be discarded at the front end of the network entry, providing strong anti-attack capability and saving the system processing resources otherwise consumed by invalid packets.
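The CAC accounting just described might look like the following sketch. This is hypothetical Python; per-window byte accounting stands in for whatever bandwidth measurement the entry layer actually performs:

```python
class CacTable:
    """Per-session bandwidth accounting at the network entry.

    Each address combination delivered by the SBC is allotted a byte
    budget per accounting window; packets that would exceed the budget
    are discarded (and would be logged in a real system).
    """
    def __init__(self):
        self.allowed = {}  # address combination -> allowed bytes per window
        self.used = {}     # address combination -> bytes counted so far

    def add_session(self, key, allowed_bytes):
        self.allowed[key] = allowed_bytes
        self.used[key] = 0

    def admit(self, key, packet_len):
        if key not in self.allowed:
            return True                      # not a CAC-controlled session
        if self.used[key] + packet_len > self.allowed[key]:
            return False                     # over budget: discard
        self.used[key] += packet_len
        return True

table = CacTable()
session = ("192.0.2.10", 5060, "203.0.113.1", 5060)
table.add_session(session, allowed_bytes=1500)
```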
  • After cloudification, the SBC is deployed as a virtual machine application on a general-purpose (COTS) server.
  • the general-purpose server lacks the hardware layer function for network packet anti-attack or call session bandwidth control.
  • Existing virtualization technology focuses on decoupling the application from the hardware. This decoupling isolates the virtual machine application from the network processing hardware layer, and the virtual switch handles network data packets transparently, so abnormal network traffic cannot be processed at the front end of the network entry; as a result, anti-attack capability is weak and abnormal traffic causes invalid consumption of system processing resources.
  • FIG. 2 is a schematic diagram of an SBC anti-attack or call bandwidth control function in the prior art.
  • the SBC service is deployed as the virtualized SBC service software 2011 in the SBC virtual machine 201.
  • Other services are deployed as virtualized other-service software 2021 on other virtual machines 202; the SBC virtual machine 201 and the other virtual machines 202 are deployed together on a general-purpose (COTS) server 200.
  • The SBC virtual machine 201 obtains network access capability through the virtual switch 2001 of the virtualization layer, exposed inside the virtual machine as the virtual network card 2012 in the figure.
  • the other virtual machines 202 also have a virtual network card 2022 with network access capability.
  • The SBC virtual machine 201 can only see the virtual network card 2012 inside the virtual machine; a network data packet reaches the virtual switch 2001 through the physical network card 2002, and the packet is then forwarded to the virtual network card 2012 of the SBC virtual machine 201.
  • the SBC virtual machine 201 performs anti-attack, call bandwidth control, and the like on the network data packet.
  • The virtual switch 2001 on the general-purpose server 200 is not aware of the content of network data packets and distributes them transparently to the SBC virtual machine 201 according only to its forwarding rules.
  • Attack packets are thus not perceived at the virtual switch or at the physical switch (physical network card) of the network entry; network data belonging to attack traffic or call sessions can only be transparently transmitted to the SBC virtual machine.
  • Attack traffic, or abnormal traffic exceeding the allowed session bandwidth, is not filtered out (discarded) on the virtual switch. It consumes the processing capability of the virtual switch, degrading the network processing capability available to other virtual machines deployed on the same server, and it is delivered to the SBC virtual machine, which must spend service-processing CPU to identify and handle the attack traffic or the abnormal traffic outside the allowed call bandwidth.
  • As a result, the CPU processing capability of the entire cloud system suffers invalid resource consumption (CPU consumed by abnormal traffic on the virtual switch, plus the SBC virtual machine's processing of that traffic).
  • FIG. 3 is a schematic diagram of another prior-art SBC anti-attack or call bandwidth control scheme.
  • the cloud-based SBC virtual machine 201 uses external devices to send the filtering information for attack defense or call session bandwidth control back to the virtual switch 2001.
  • the SBC virtual machine 201 identifies the attack source and generates the ACL table information.
  • the information is sent to the element management system (EMS) 300 and further passed to the policy and charging rules function (PCRF) 301, and the PCRF 301 sends the filtering information to the software-defined network (SDN) controller 302.
  • the SDN controller 302 configures the network packet processing filtering rules required by the SBC virtual machine 201 into the virtual switch 2001 through the OpenFlow interface of the virtual switch 2001.
  • the virtual switch 2001 then matches subsequent network data packets against the configured filtering rules, and passes or discards matched packets as the rules require.
  • this solution relies on multiple external devices to deliver the processing and filtering rules for the SBC virtual machine's network data packets, which has the following disadvantages: first, the information transmission path is long and passes through many links, so the reliability of message delivery is difficult to guarantee; second, the path involves multiple devices, so the solution requires much equipment, leading to high cost and poor economy.
  • the server 400 includes at least one processor 401, a communication bus 402, a memory 403, and at least one communication interface 404.
  • the processor 401 can be a general central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the present application.
  • Communication bus 402 can include a path for communicating information between the components described above.
  • Communication interface 404 uses any type of transceiver for communicating with other devices or communication networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
  • the memory 403 may be a read-only memory (ROM) or other type of static storage device that can store static information and instructions, or a random access memory (RAM) or other type of dynamic storage device that can store information and instructions; it may also be an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed, but is not limited thereto.
  • the memory can exist independently and be connected to the processor via a bus.
  • the memory can also be integrated with the processor.
  • the memory 403 is used to store application code for executing the solution of the present application, and is controlled by the processor 401 for execution.
  • the processor 401 is configured to execute the application code stored in the memory 403, thereby implementing the control information transmission method described in the embodiments of the present application.
  • the processor 401 may include one or more CPUs, such as CPU0 and CPU1 in FIG. 4.
  • in a specific implementation, server 400 can include multiple processors, such as processor 401 and processor 408 in FIG. 4. Each of these processors can be a single-core processor or a multi-core processor.
  • a processor herein may refer to one or more devices, circuits, and/or processing cores for processing data, such as computer program instructions.
  • the server 400 may further include an output device 405 and an input device 406.
  • Output device 405 is in communication with processor 401 and can display information in a variety of ways.
  • the output device 405 can be a liquid crystal display (LCD), a light emitting diode (LED) display device, a cathode ray tube (CRT) display device, or a projector.
  • Input device 406 is in communication with processor 401 and can accept user input in a variety of ways.
  • input device 406 can be a mouse, keyboard, touch screen device, or sensing device, and the like.
  • the server 400 described above may be a general-purpose server, a dedicated server, or a device having a structure similar to that in FIG. 4.
  • the embodiment of the present application does not limit the type of the server 400.
  • the server 400 may be the VNFM 1012 server, the VIM 1013 server, or the NFVI 102 server shown in FIG. 1.
  • although the embodiments of the present application are described with each function corresponding to one server, those skilled in the art will understand that in an actual product multiple functions may be implemented on one server; all such variations fall within the protection scope of the present application.
  • the embodiment of the present application provides a control information transmission method, as shown in FIG. 5, including:
  • S001 The service software of the virtual machine generates control information, and sends the control information to the first device, where the control information is used for attack prevention or call session bandwidth control.
  • the control information is used for anti-attack or call session bandwidth control, so that the virtual switch can implement the anti-attack or call session bandwidth control function of the SBC virtual machine.
  • the control information includes: an identifier of the SBC virtual machine, a flow rule type, a flow processing operation type, and a parameter packet, where the flow processing operation type is used to indicate addition, modification, or deletion.
  • when the flow rule type is access control, the parameter packet includes a source Internet Protocol (IP) address, a source port number, a destination IP address, a destination port number, and a filtering operation action, where the filtering operation action is used to indicate that matching packets are allowed to pass or are discarded.
  • when the flow rule type is call session bandwidth control, the parameter packet includes a source IP address, a source port number, a destination IP address, a destination port number, and an allowed bandwidth.
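As an illustrative, non-normative sketch, the control information fields listed above could be modeled as follows. All type and field names here are hypothetical, chosen only to mirror the fields named in this embodiment:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class FlowRuleType(Enum):
    ACCESS_CONTROL = "acl"           # packet filtering (anti-attack)
    BANDWIDTH_CONTROL = "bandwidth"  # call session bandwidth control

class FlowOp(Enum):
    ADD = "add"
    MODIFY = "modify"
    DELETE = "delete"

@dataclass
class ParamPacket:
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int
    action: Optional[str] = None             # "pass" / "discard" (ACL rules only)
    allowed_bandwidth: Optional[int] = None  # e.g. kbit/s (bandwidth rules only)

@dataclass
class ControlInfo:
    vm_id: str               # identifier of the SBC virtual machine
    rule_type: FlowRuleType
    op: FlowOp
    params: ParamPacket

# example: discard all traffic from a detected attack source
rule = ControlInfo(
    vm_id="sbc-vm-201",
    rule_type=FlowRuleType.ACCESS_CONTROL,
    op=FlowOp.ADD,
    params=ParamPacket("198.51.100.7", 5060, "203.0.113.10", 5060,
                       action="discard"),
)
```

A bandwidth-control rule would carry the same five-tuple but set `allowed_bandwidth` instead of `action`.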
  • the first device receives control information from a service software of the session border controller virtual machine.
  • the first device sends control information to the second device.
  • the second device receives control information from the first device.
  • the second device configures control information to the virtual switch.
  • the first device and the second device may be virtual devices implemented by software, or physical devices. That is, the first device may be a virtual agent device front end configured in the virtual machine, and the second device may be a virtual agent device back end configured in the virtual network of the virtual resource layer; or the first device may be a virtualized network function manager (VNFM) and the second device a virtualized infrastructure manager (VIM).
  • in this way, the service software of the virtual machine transfers the control information to the virtual switch through the first device and the second device, so that in the NFV system a virtual machine can send control information to the virtual switch, enabling the virtual switch to implement a specific function of the virtual machine.
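The generic delivery chain just described (service software to first device, first device to second device, second device to the virtual switch) can be sketched minimally as below. All class and method names are illustrative only; they are not part of this embodiment:

```python
# Minimal sketch of the S001-S005 delivery chain. The first device only
# relays the control information; only the second device talks to the
# virtual switch.

class SecondDevice:
    def __init__(self, vswitch):
        self.vswitch = vswitch

    def receive(self, info):
        # S004: receive from the first device; S005: configure the switch
        return self.vswitch.configure(info)

class FirstDevice:
    def __init__(self, second_device):
        self.second_device = second_device

    def receive(self, info):
        # S002: receive from the service software; S003: forward downstream
        return self.second_device.receive(info)

def send_control_info(first_device, info):
    # S001: called by the service software of the virtual machine
    return first_device.receive(info)
```

The same chain run in reverse carries the control result information back to the service software, as described in S201-S205 below.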
  • the method may further include:
  • the virtual switch sends control result information to the second device, where the control result information is used to indicate whether the control information is successfully configured.
  • the second device receives control result information from the virtual switch.
  • the second device sends the control result information to the first device.
  • the first device receives the control result information from the second device.
  • the first device sends the control result information to the service software of the virtual machine.
  • in the foregoing embodiment, the virtual switch feeds the control result information of the control information back to the service software of the virtual machine through the second device and the first device.
  • the embodiment of the present application provides a control information transmission method. Referring to FIG. 7, the method includes:
  • S101 The service software of the virtual machine generates control information, and sends the control information to the front end of the virtual proxy device.
  • a stream processing virtual proxy device may be added, wherein the virtual proxy device front end 2013 is configured in the virtual machine such that the virtual proxy device front end 2013 can directly communicate with the business software 2011 in the virtual machine;
  • the virtual proxy device backend 2003 is configured in the virtual network of the resource layer such that the virtual proxy device backend 2003 can communicate directly with the virtual switch 2001 also located in the virtual network of the virtual resource layer.
  • the virtual proxy device front end 2013 is configured in the VNF 108
  • the virtual proxy device backend 2003 is configured in the virtual network 10223 of the virtual resource layer.
  • taking the virtual machine as an SBC virtual machine and the service software as SBC service software as an example, the SBC service software generates control information according to the attack situation or the call session bandwidth control requirement, and configures the control information through the interface provided by the virtual agent device front end of the SBC virtual machine.
  • an operation command word identifier is also sent to indicate whether the operation sets a flow rule (SetFlowRule) or queries statistics (QueryStat).
  • the virtual agent device front end receives control information from the service software of the virtual machine.
  • the virtual agent device front end sends control information to the virtual agent device back end.
  • the virtual proxy device front end 2013 transmits the control information transparently to the virtual proxy device backend 2003.
  • the virtual proxy device back end receives control information from the virtual proxy device front end.
  • the virtual proxy device backend configures control information to the virtual switch.
  • the virtual proxy device backend 2003 constructs different processing procedures based on the type of information to be transmitted, and finally calls the OpenFlow interface provided by the virtual switch to configure the control information into the virtual switch. Specifically, the virtual proxy device backend 2003 invokes the virtual network interface of the virtualization layer according to the identifier (VM-ID) of the virtual machine in the information, queries the virtual port (VM-Port) of the virtual switch corresponding to that identifier, and then invokes the OpenFlow interface to map the operations in the control information to the corresponding OpenFlow operation model and send them to the virtual switch.
  • in addition, when the virtual switch supports delivering packet filtering information to the hardware layer at the network entry (for example, a physical network card), the virtual switch automatically delivers the foregoing control information to that hardware layer.
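The back-end behavior in S105 (resolve VM-Port from VM-ID, then map the control operation onto an OpenFlow-style rule) might look roughly like this. The `virt_layer` and `vswitch` interfaces stand in for the virtualization-layer and OpenFlow interfaces named above; their APIs are assumptions, not the actual interfaces of any product:

```python
# Hypothetical sketch of step S105: VM-ID -> VM-Port lookup, then an
# OpenFlow-style flow modification pushed to the virtual switch.

def configure_control_info(virt_layer, vswitch, info):
    # 1. query the virtual port (VM-Port) corresponding to the VM-ID
    vm_port = virt_layer.query_vm_port(info.vm_id)
    if vm_port is None:
        return {"result": "failure", "reason": "unknown VM-ID"}

    # 2. map the control information onto an OpenFlow-style match/action pair
    match = {
        "in_port": vm_port,
        "ipv4_src": info.params.src_ip, "src_port": info.params.src_port,
        "ipv4_dst": info.params.dst_ip, "dst_port": info.params.dst_port,
    }
    if info.params.action == "discard":
        actions = []  # an empty action list drops matching packets
    else:
        actions = [{"type": "OUTPUT", "port": vm_port}]

    # 3. send the rule to the virtual switch and relay its result upstream
    ok = vswitch.flow_mod(command=info.op.value, match=match, actions=actions)
    return {"result": "success" if ok else "failure"}
```

The returned result dictionary corresponds to the control result information that S201-S205 relay back to the service software.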
  • the actions in the above S101-S105 can be performed by the processor 401 in the server 400 shown in FIG. 4 calling the application code stored in the memory 403.
  • the server 400 at this time is the NFVI 102 server shown in FIG. 1.
  • the control information delivery method provided by this embodiment of the present application delivers the control information generated by the service software of the virtual machine to the virtual switch through the virtual agent device front end located in the virtual machine and the virtual agent device back end located in the virtual network, so that in an NFV system a virtual machine can send control information to a virtual switch, enabling the virtual switch to implement a particular function of the virtual machine.
  • the method further includes:
  • the virtual switch sends control result information to the virtual agent device backend, where the control result information is used to indicate whether the control information is successfully configured.
  • after processing the anti-attack or call session bandwidth control information sent by the virtual proxy device backend, the virtual switch replies to the virtual proxy device backend with the processing result, success or failure; if the configuration fails, the reason for the failure is also given.
  • the virtual proxy device back end receives control result information from the virtual switch.
  • the virtual proxy device back end sends the control result information to the virtual proxy device front end.
  • the virtual proxy device backend transparently transmits the control result information to the virtual proxy device front end.
  • the virtual agent device front end receives the control result information from the virtual agent device back end.
  • the virtual agent device front end sends the control result information to the service software of the virtual machine.
  • the virtual agent device front end transparently transmits the control result information to the business software of the virtual machine.
  • the actions in the above S201-S205 can be performed by the processor 401 in the server 400 shown in FIG. 4 calling the application code stored in the memory 403.
  • the server 400 at this time is the NFVI 102 server shown in FIG. 1.
  • the foregoing implementation enables the virtual switch to feed the control result information of the control information back to the service software of the virtual machine through the virtual agent device backend and the virtual agent device front end.
  • the method further includes:
  • S301 The service software of the virtual machine generates query information, and sends the query information to the front end of the virtual proxy device.
  • the query information is used to query the statistics of network packets; the operation command word identifier is QueryStat, and the identifier of the virtual machine is also included in the query information.
  • the virtual agent device front end receives the query information.
  • the virtual agent device front end sends the query information to the virtual agent device back end.
  • the virtual proxy device backend receives the query information.
  • the virtual proxy device backend configures the query information to the virtual switch.
  • the virtual agent device backend parses the information transmitted from the virtual proxy device front end. For example, if the operation command word identifier is QueryStat, the identifier of the virtual machine is mapped to the virtual port (VM-Port) of the virtual switch, and the query message is sent to the virtual switch.
  • the virtual switch sends the query result information to the virtual proxy device backend.
  • the query result information is fed back through an OpenFlowRsp (Query, VM-Port, StatInfo) command, where OpenFlowRsp represents an OpenFlow response message, Query indicates that the content is a query result, VM-Port represents the virtual port of the virtual switch, and StatInfo represents the specific query result.
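The QueryStat round trip (S301-S310) can be sketched as below. The `virt_layer` and `vswitch` interfaces are assumed stand-ins, and the returned dictionary simply mirrors the OpenFlowRsp (Query, VM-Port, StatInfo) fields named above:

```python
# Hypothetical sketch of the statistics query path: the back end maps the
# VM identifier to the virtual switch port, issues the query, and relays
# the result back toward the service software.

def query_statistics(virt_layer, vswitch, vm_id):
    vm_port = virt_layer.query_vm_port(vm_id)   # VM-ID -> VM-Port mapping
    stat_info = vswitch.port_stats(vm_port)     # e.g. packet/byte counters
    # mirrors OpenFlowRsp(Query, VM-Port, StatInfo); transparently relayed
    # back end -> front end -> service software
    return {"type": "Query", "vm_port": vm_port, "stat_info": stat_info}
```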
  • the virtual proxy device back end receives the query result information from the virtual switch.
  • the virtual proxy device backend sends the query result information to the virtual proxy device front end.
  • the virtual proxy device backend transparently transmits the query result information to the virtual proxy device front end.
  • the virtual agent device front end receives the query result information from the virtual proxy device back end.
  • the virtual agent device front end sends the query result information to the service software of the virtual machine.
  • the virtual agent device front end transparently transmits the query result information to the business software of the virtual machine.
  • the actions in the above S301-S310 can be performed by the processor 401 in the server 400 shown in FIG. 4 calling the application code stored in the memory 403.
  • the server 400 at this time is the NFVI 102 server shown in FIG. 1.
  • the foregoing implementation enables the service software of the virtual machine to send the query information to the virtual switch through the virtual proxy device front end and the virtual proxy device back end, and the virtual switch to feed the query result information back to the service software of the virtual machine through the virtual proxy device back end and the virtual proxy device front end.
  • the embodiment of the present application provides another control information transmission method. Referring to FIG. 11, the method includes:
  • S401: the service software of the virtual machine generates control information and sends the control information to the VNFM.
  • the virtual machine is an SBC virtual machine
  • the service software is an SBC service software.
  • the SBC service software 2011 generates control information according to an attack situation or a call session bandwidth control requirement, and sends the control information to the VNFM 1012 through an interface provided by the VNFM 1012.
  • the control information here is the same as the foregoing control information, and details are not described herein again.
  • the VNFM receives control information from a service software of the SBC virtual machine.
  • the VNFM sends control information to the VIM.
  • the VNFM sends control information to the VIM based on the interface capabilities provided by the VIM 1013.
  • the VIM receives control information from the VNFM.
  • the VIM configures the control information to the virtual switch for the virtual switch to perform attack defense or call session bandwidth control.
  • the VIM completes the message mapping and processing from the VIM to the virtual switch and sends it to the virtual switch.
  • the operation in the above S401 can be performed by the processor 401 in the server 400 shown in FIG. 4 calling the application code stored in the memory 403.
  • the server 400 at this time is the NFVI 102 server shown in FIG. 1;
  • the actions in S402 and S403 can be performed by the processor 401 in the server 400 shown in FIG. 4 calling the application code stored in the memory 403.
  • the server 400 at this time is the VNFM 1012 server shown in FIG. 1;
  • the actions in S404 and S405 can be performed by the processor 401 in the server 400 shown in FIG. 4 calling the application code stored in the memory 403, and the server 400 at this time is the VIM 1013 server shown in FIG. 1.
  • the service software in the virtual machine configures the control information to the virtual switch through the VNFM and the VIM.
  • the virtual machine can send control information to the virtual switch to enable the virtual switch to implement the specific functions of the virtual machine.
  • the VNFM and the VIM are both existing devices in the existing NFV architecture, so the solution is more economical.
  • the VNFM is equivalent to the virtual agent device front end
  • the VIM is equivalent to the virtual agent device back end.
  • the virtual machine service software, the VNFM, the VIM, and the virtual switch need to follow the existing communication protocol.
  • the virtual switch can also send the control result information to the service software of the virtual machine through the VIM and the VNFM; similar to steps S301-S310, the service software of the virtual machine can also send the query information to the virtual switch through the VNFM and the VIM, and the virtual switch can send the query result information to the service software of the virtual machine through the VIM and the VNFM. Details are not described herein again.
  • the embodiments of the present application may divide the functional modules of each device according to the foregoing method example.
  • each functional module may be divided according to each function, or two or more functions may be integrated into one processing module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules. It should be noted that the division of modules in the embodiments of the present application is schematic and is merely a logical function division; an actual implementation may use another division manner.
  • FIG. 13 is a schematic diagram showing a possible structure of the VNFI server involved in the foregoing embodiment.
  • the VNFI server 13 includes: a service software module 1311 and a virtual agent device front end 1312.
  • the service software module 1311 is configured to support the VNFI server 13 in performing process S001 in FIG. 5, process S101 in FIG. 7, process S301 in FIG. 10, and process S401 in FIG. 11; the virtual agent device front end 1312 is configured to support the VNFI server 13 in performing processes S002 and S003 in FIG. 5, processes S008 and S009 in FIG. 6, processes S102 and S103 in FIG. 7, and processes S204 and S205 in FIG. 9.
  • the virtual proxy device backend 1313 is used to support the VNFI server 13 in performing processes S004 and S005 in FIG. 5, processes S006 and S007 in FIG. 6, processes S104 and S105 in FIG. 7, processes S202 and S203 in FIG. 9, and processes S304, S305, S307, and S308 in FIG. 10; the switching virtual machine 1314 is configured to support the VNFI server 13 in performing process S006 in FIG. 6, process S201 in FIG. 9, and process S306 in FIG. 10. For all related content of the steps involved in the foregoing method embodiments, refer to the functional descriptions of the corresponding functional modules; details are not described herein again.
  • FIG. 14 shows a possible structural diagram of the VNFI server involved in the above embodiment.
  • the VNFI server 13 includes a processing module 1322 and a communication module 1323.
  • the processing module 1322 is configured to control and manage the actions of the VNFI server 13.
  • for example, the processing module 1322 is configured to support the VNFI server 13 in performing processes S001-S005 in FIG. 5, processes S006-S010 in FIG. 6, processes S101-S105 in FIG. 7, processes S201-S205 in FIG. 9, processes S301-S310 in FIG. 10, and process S401 in FIG. 11.
  • Communication module 1323 is used to support communication between the VNFI server and other entities, for example, with the functional modules or network entities shown in FIG. 1.
  • the VNFI server 13 may further include a storage module 1321 for storing program codes and data of the VNFI server.
  • the processing module 1322 may be a processor or a controller, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure.
  • the processor may also be a combination of computing functions, for example, including one or more microprocessor combinations, a combination of a DSP and a microprocessor, and the like.
  • the communication module 1323 may be a transceiver, a transceiver circuit, a communication interface, or the like.
  • the storage module 1321 may be a memory.
  • the VNFI server involved in this embodiment of the present application may be the VNFI server 13 shown in FIG. 15.
  • the VNFI server 13 includes a processor 1332, a transceiver 1333, a memory 1331, and a bus 1334.
  • the transceiver 1333, the processor 1332, and the memory 1331 are connected to each other through a bus 1334.
  • the bus 1334 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like.
  • the bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is shown in the figure, but it does not mean that there is only one bus or one type of bus.
  • FIG. 16 is a schematic diagram showing a possible structure of the VNFM server involved in the foregoing embodiment.
  • the VNFM server 16 includes a receiving unit 1611 and a sending unit 1612.
  • the receiving unit 1611 is configured to support the VNFM server 16 to perform the process S002 in FIG. 5, the process S009 in FIG. 6, the process S402 in FIG. 11;
  • the sending unit 1612 is configured to support the VNFM server 16 in performing process S003 in FIG. 5, process S010 in FIG. 6, and process S403 in FIG. 11. For all related content of the steps involved in the foregoing method embodiments, refer to the functional descriptions of the corresponding functional modules; details are not described herein again.
  • FIG. 17 shows a possible structural diagram of the VNFM server involved in the above embodiment.
  • the VNFM server 16 includes a processing module 1622 and a communication module 1623.
  • the processing module 1622 is configured to control and manage the actions of the VNFM server 16; for example, the processing module 1622 is configured to support the VNFM server 16 in performing processes S002 and S003 in FIG. 5, processes S009 and S010 in FIG. 6, and processes S402 and S403 in FIG. 11.
  • Communication module 1623 is used to support communication of the VNFM server with other entities, for example, with the functional modules or network entities shown in FIG. 1.
  • the VNFM server 16 may also include a storage module 1621 for storing program code and data of the VNFM server.
  • the processing module 1622 may be a processor or a controller, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure.
  • the processor may also be a combination of computing functions, for example, including one or more microprocessor combinations, a combination of a DSP and a microprocessor, and the like.
  • the communication module 1623 can be a transceiver, a transceiver circuit, a communication interface, or the like.
  • the storage module 1621 can be a memory.
  • the VNFM server involved in the embodiment of the present application may be the VNFM server 16 shown in FIG. 18.
  • the VNFM server 16 includes a processor 1632, a transceiver 1633, a memory 1631, and a bus 1634.
  • the transceiver 1633, the processor 1632, and the memory 1631 are connected to each other through a bus 1634.
  • the bus 1634 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like.
  • the bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is shown in the figure, but it does not mean that there is only one bus or one type of bus.
  • FIG. 19 is a schematic diagram showing a possible structure of a VIM server involved in the foregoing embodiment.
  • the VIM server 19 includes a receiving unit 1911 and a sending unit 1912.
  • the receiving unit 1911 is configured to support the VIM server 19 to perform the process S004 in FIG. 5, the process S007 in FIG. 6, the process S404 in FIG. 11;
  • the sending unit 1912 is configured to support the VIM server 19 in performing process S005 in FIG. 5, process S008 in FIG. 6, and process S405 in FIG. 11. For all related content of the steps involved in the foregoing method embodiments, refer to the functional descriptions of the corresponding functional modules; details are not described herein again.
  • FIG. 20 shows a possible structural diagram of the VIM server involved in the above embodiment.
  • the VIM server 19 includes a processing module 1922 and a communication module 1923.
  • the processing module 1922 is configured to control and manage the actions of the VIM server 19.
  • for example, the processing module 1922 is configured to support the VIM server 19 in performing processes S004 and S005 in FIG. 5, processes S007 and S008 in FIG. 6, and processes S404 and S405 in FIG. 11.
  • Communication module 1923 is used to support communication between the VIM server and other entities, for example, with the functional modules or network entities shown in FIG. 1.
  • the VIM server 19 may also include a storage module 1921 for storing program code and data of the VIM server.
  • the processing module 1922 may be a processor or a controller, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure.
  • the processor may also be a combination of computing functions, for example, including one or more microprocessor combinations, a combination of a DSP and a microprocessor, and the like.
  • the communication module 1923 may be a transceiver, a transceiver circuit, a communication interface, or the like.
  • the storage module 1921 may be a memory.
  • the VIM server involved in the embodiment of the present application may be the VIM server 19 shown in FIG. 21.
  • the VIM server 19 includes a processor 1932, a transceiver 1933, a memory 1931, and a bus 1934.
  • the transceiver 1933, the processor 1932, and the memory 1931 are connected to each other through the bus 1934; the bus 1934 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like.
  • the bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is used in the figure, but this does not mean that there is only one bus or only one type of bus.


Abstract

Embodiments of this application disclose a control information transfer method, server, and system, relating to the communications field, enabling a virtual machine in an NFV system to send control information to a virtual switch so that the virtual switch can implement a specific function of the virtual machine. The control information transfer method includes: service software of a virtual machine generates control information and sends it to a first device; the first device receives the control information from the service software of the session border controller virtual machine; the first device sends the control information to a second device; the second device receives the control information from the first device; and the second device configures the control information to a virtual switch. The embodiments of this application are applicable to cloudified telecommunications equipment.

Description

Control information transfer method, server, and system
This application claims priority to Chinese Patent Application No. 201710104539.5, filed with the Chinese Patent Office on February 24, 2017 and entitled "Control Information Transfer Method, Server, and System", which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
This application relates to the communications field, and in particular, to a control information transfer method, server, and system.
BACKGROUND
A traditional telecommunications system consists of various dedicated hardware devices, with different applications using different hardware. As networks grow in scale, systems become increasingly complex, bringing many challenges, including the development and rollout of new services, system operation and maintenance, and resource utilization. To address these challenges, network function virtualization (NFV) technology has been proposed in the prior art. By transforming each network element device into an independent virtual machine arranged in an upper-layer virtual network function (VNF), NFV migrates services from dedicated hardware platforms to general-purpose commercial-off-the-shelf (COTS) servers; through virtualization, infrastructure hardware resources are pooled and virtualized to provide virtual resources for upper-layer applications. This decouples upper-layer services from the underlying hardware and allows each service to quickly add virtual resources to expand system capacity, or quickly release virtual resources to shrink it, greatly improving network elasticity.
In practical applications, the virtual machines of the services located in the VNF receive and send data packets through a virtual switch located in the lower layer. However, because the virtual switch in a general-purpose COTS server has only a data forwarding function and cannot perform functions of the upper-layer virtual machine, some invalid or abnormal data packets are forwarded to the upper-layer virtual machine, occupying unnecessary resources.
SUMMARY
Embodiments of this application provide a control information transfer method, server, and system, so that in an NFV system a virtual machine can send control information to a virtual switch to enable the virtual switch to implement a specific function of the virtual machine.
To achieve the foregoing objective, the embodiments of this application use the following technical solutions:
According to a first aspect, a control information transfer method is provided, including: a first device receives control information from service software of a virtual machine; and the first device sends the control information to a second device; where the first device is a virtual proxy device front end configured in the virtual machine, and the second device is a virtual proxy device back end configured in a virtual network of a virtual resource layer; or the first device is a virtualized network function manager (VNFM), and the second device is a virtualized infrastructure manager (VIM). In the control information transfer method provided in this embodiment of this application, the service software of the virtual machine transfers the control information to the virtual switch through the first device and the second device, so that in the NFV system the virtual machine can send control information to the virtual switch to enable the virtual switch to implement a specific function of the virtual machine.
In a possible design, the method further includes: the first device receives control result information from the second device, where the control result information indicates whether the control information is configured successfully; and the first device sends the control result information to the service software of the virtual machine. This implementation enables the virtual switch to feed back the control result information of the control information to the service software of the virtual machine through the second device and the first device.
In a possible design, when the virtual machine is a session border controller (SBC) virtual machine, the control information is used for attack defense or call session bandwidth control, and the control information includes: an identifier of the virtual machine, a flow rule type, a flow processing operation type, and a parameter package, where the flow processing operation type indicates add, modify, or delete; when the flow rule type is access control list (ACL), the parameter package includes a source Internet Protocol (IP) address, a source port number, a destination IP address, a destination port number, and a filtering action, where the filtering action indicates whether to allow a packet through or discard it; when the flow rule type is call admission control (CAC), the parameter package includes a source IP address, a source port number, a destination IP address, a destination port number, and an allowed bandwidth. This implementation specifically discloses the content of the control information.
According to a second aspect, a control information transfer method is provided, including: a second device receives control information from a first device; and the second device configures the control information to a virtual switch; where the first device is a virtual proxy device front end configured in a virtual machine, and the second device is a virtual proxy device back end configured in a virtual network of a virtual resource layer; or the first device is a virtualized network function manager (VNFM), and the second device is a virtualized infrastructure manager (VIM). In the control information transfer method provided in this embodiment of this application, the service software of the virtual machine transfers the control information to the virtual switch through the first device and the second device, so that in the NFV system the virtual machine can send control information to the virtual switch to enable the virtual switch to implement a specific function of the virtual machine.
In a possible design, the method further includes: the second device receives control result information from the virtual switch, where the control result information indicates whether the control information is configured successfully; and the second device sends the control result information to the first device. This implementation enables the virtual switch to feed back the control result information of the control information to the service software of the virtual machine through the second device and the first device.
In a possible design, when the virtual machine is a session border controller (SBC) virtual machine, the control information is used for attack defense or call session bandwidth control, and the control information includes: an identifier of the virtual machine, a flow rule type, a flow processing operation type, and a parameter package, where the flow processing operation type indicates add, modify, or delete; when the flow rule type is access control list (ACL), the parameter package includes a source Internet Protocol (IP) address, a source port number, a destination IP address, a destination port number, and a filtering action, where the filtering action indicates whether to allow a packet through or discard it; when the flow rule type is call admission control (CAC), the parameter package includes a source IP address, a source port number, a destination IP address, a destination port number, and an allowed bandwidth. This implementation specifically discloses the content of the control information.
According to a third aspect, a network functions virtualization infrastructure (NFVI) server is provided, including: a virtual proxy device front end, configured to receive control information from service software of a virtual machine and send it to a virtual proxy device back end, where the control information is used for attack defense or call session bandwidth control, the virtual proxy device front end is configured in the virtual machine, and the virtual proxy device back end is configured in a virtual network of a virtual resource layer; and the virtual proxy device back end, configured to receive the control information from the virtual proxy device front end and send it to a virtual switch. In this implementation, the control information generated by the service software of the virtual machine is transferred to the virtual switch through the virtual proxy device front end located in the virtual machine and the virtual proxy device back end located in the virtual network of the virtual resource layer, so that in the NFV system the virtual machine can send control information to the virtual switch to enable the virtual switch to implement a specific function of the virtual machine.
In a possible design, the virtual proxy device back end is further configured to receive control result information from the virtual switch and send it to the virtual proxy device front end, where the control result information indicates whether the control information is configured successfully; and the virtual proxy device front end is further configured to receive the control result information from the virtual proxy device back end and send it to the service software of the virtual machine. This implementation enables the virtual switch to feed back the control result information of the control information to the service software of the virtual machine through the virtual proxy device back end and the virtual proxy device front end.
In a possible design, when the virtual machine is a session border controller (SBC) virtual machine, the control information is used for attack defense or call session bandwidth control, and the control information includes: an identifier of the virtual machine, a flow rule type, a flow processing operation type, and a parameter package, where the flow processing operation type indicates add, modify, or delete; when the flow rule type is access control list (ACL), the parameter package includes a source Internet Protocol (IP) address, a source port number, a destination IP address, a destination port number, and a filtering action, where the filtering action indicates whether to allow a packet through or discard it; when the flow rule type is call admission control (CAC), the parameter package includes a source IP address, a source port number, a destination IP address, a destination port number, and an allowed bandwidth. This implementation specifically discloses the content of the control information.
According to a fourth aspect, a virtualized network function manager (VNFM) server is provided, including: a receiving unit, configured to receive control information from service software of a virtual machine; and a sending unit, configured to send the control information to a virtualized infrastructure manager (VIM). In this implementation, the service software in the virtual machine configures the control information to the virtual switch through the VNFM and the VIM, so that in the NFV system the virtual machine can send control information to the virtual switch to enable the virtual switch to implement a specific function of the virtual machine. Compared with the prior-art solution in which the SBC configures control information to the virtual switch through the EMS, the PCRF, and the SDN controller, the VNFM and the VIM are existing devices in the existing NFV architecture, so this solution is more economical.
In a possible design, the receiving unit is further configured to receive control result information from the VIM, where the control result information indicates whether the control information is configured successfully; and the sending unit is further configured to send the control result information to the service software of the virtual machine. This implementation enables the virtual switch to feed back the control result information of the control information to the service software of the virtual machine through the VIM and the VNFM.
In a possible design, when the virtual machine is a session border controller (SBC) virtual machine, the control information is used for attack defense or call session bandwidth control, and the control information includes: an identifier of the SBC virtual machine, a flow rule type, a flow processing operation type, and a parameter package, where the flow processing operation type indicates add, modify, or delete; when the flow rule type is access control list (ACL), the parameter package includes a source Internet Protocol (IP) address, a source port number, a destination IP address, a destination port number, and a filtering action, where the filtering action indicates whether to allow a packet through or discard it; when the flow rule type is call admission control (CAC), the parameter package includes a source IP address, a source port number, a destination IP address, a destination port number, and an allowed bandwidth. This implementation specifically discloses the content of the control information.
According to a fifth aspect, a virtualized infrastructure manager (VIM) server is provided, including: a receiving unit, configured to receive control information from a virtualized network function manager (VNFM); and a sending unit, configured to configure the control information to a virtual switch. In this implementation, the service software in the virtual machine configures the control information to the virtual switch through the VNFM and the VIM, so that in the NFV system the virtual machine can send control information to the virtual switch to enable the virtual switch to implement a specific function of the virtual machine. Compared with the prior-art solution in which the SBC configures control information to the virtual switch through the EMS, the PCRF, and the SDN controller, the VNFM and the VIM are existing devices in the existing NFV architecture, so this solution is more economical.
In a possible design, the receiving unit is further configured to receive control result information from the virtual switch, where the control result information indicates whether the control information is configured successfully; and the sending unit is further configured to send the control result information to the VNFM. This implementation enables the virtual switch to feed back the control result information of the control information to the service software of the virtual machine through the VIM and the VNFM.
In a possible design, when the virtual machine is a session border controller (SBC) virtual machine, the control information is used for attack defense or call session bandwidth control, and the control information includes: an identifier of the SBC virtual machine, a flow rule type, a flow processing operation type, and a parameter package, where the flow processing operation type indicates add, modify, or delete; when the flow rule type is access control list (ACL), the parameter package includes a source Internet Protocol (IP) address, a source port number, a destination IP address, a destination port number, and a filtering action, where the filtering action indicates whether to allow a packet through or discard it; when the flow rule type is call admission control (CAC), the parameter package includes a source IP address, a source port number, a destination IP address, a destination port number, and an allowed bandwidth. This implementation specifically discloses the content of the control information.
According to a sixth aspect, an embodiment of this application provides a network functions virtualization infrastructure (NFVI) server, including a processor, a memory, a bus, and a communication interface. The memory is configured to store computer-executable instructions, and the processor is connected to the memory through the bus. When the NFVI server runs, the processor executes the computer-executable instructions stored in the memory, so that the NFVI server performs the control information transfer method according to any design of the first aspect.
According to a seventh aspect, an embodiment of this application provides a virtualized network function manager (VNFM) server, including a processor, a memory, a bus, and a communication interface. The memory is configured to store computer-executable instructions, and the processor is connected to the memory through the bus. When the VNFM server runs, the processor executes the computer-executable instructions stored in the memory, so that the VNFM server performs the control information transfer method according to any design of the first aspect.
According to an eighth aspect, an embodiment of this application provides a virtualized infrastructure manager (VIM) server, including a processor, a memory, a bus, and a communication interface. The memory is configured to store computer-executable instructions, and the processor is connected to the memory through the bus. When the VIM server runs, the processor executes the computer-executable instructions stored in the memory, so that the VIM server performs the control information transfer method according to any design of the first aspect.
According to a ninth aspect, an embodiment of this application provides a computer storage medium, including instructions that, when run on a computer, cause the computer to perform the control information transfer method according to the first aspect.
According to a tenth aspect, an embodiment of this application provides a computer program product containing instructions that, when run on a computer, cause the computer to perform the control information transfer method according to the first aspect.
According to an eleventh aspect, an embodiment of this application provides a computer storage medium, including instructions that, when run on a computer, cause the computer to perform the control information transfer method according to the second aspect.
According to a twelfth aspect, an embodiment of this application provides a computer program product containing instructions that, when run on a computer, cause the computer to perform the control information transfer method according to the second aspect.
In addition, for technical effects brought by any design of the sixth to twelfth aspects, refer to the technical effects brought by the different designs of the first or second aspect; details are not described herein again.
According to a thirteenth aspect, an embodiment of this application provides a network functions virtualization (NFV) communication system, including the network functions virtualization infrastructure (NFVI) server according to the third aspect; or including the virtualized network function manager (VNFM) server according to the fourth aspect and the virtualized infrastructure manager (VIM) server according to the fifth aspect; or including the NFVI server according to the sixth aspect; or including the VNFM server according to the seventh aspect and the VIM server according to the eighth aspect.
BRIEF DESCRIPTION OF DRAWINGS
To describe the technical solutions in the embodiments of this application or in the prior art more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments or the prior art.
FIG. 1 is a schematic diagram of an NFV system architecture according to an embodiment of this application;
FIG. 2 is a schematic diagram of an SBC attack defense or call bandwidth control function in the prior art;
FIG. 3 is a schematic diagram of another SBC attack defense or call bandwidth control function in the prior art;
FIG. 4 is a schematic diagram of a hardware structure of a server according to an embodiment of this application;
FIG. 5 is a schematic diagram of a hardware structure of a server according to an embodiment of this application;
FIG. 6 is a schematic diagram of a hardware structure of a server according to an embodiment of this application;
FIG. 7 is a schematic flowchart of a control information transfer method according to an embodiment of this application;
FIG. 8 is a schematic structural diagram of an apparatus involved in a control information transfer method according to an embodiment of this application;
FIG. 9 is a schematic flowchart of another control information transfer method according to an embodiment of this application;
FIG. 10 is a schematic flowchart of still another control information transfer method according to an embodiment of this application;
FIG. 11 is a schematic flowchart of yet another control information transfer method according to an embodiment of this application;
FIG. 12 is a schematic structural diagram of an apparatus involved in yet another control information transfer method according to an embodiment of this application;
FIG. 13 is a schematic structural diagram of an NFVI server according to an embodiment of this application;
FIG. 14 is a schematic structural diagram of another NFVI server according to an embodiment of this application;
FIG. 15 is a schematic structural diagram of still another NFVI server according to an embodiment of this application;
FIG. 16 is a schematic structural diagram of a VNFM server according to an embodiment of this application;
FIG. 17 is a schematic structural diagram of another VNFM server according to an embodiment of this application;
FIG. 18 is a schematic structural diagram of still another VNFM server according to an embodiment of this application;
FIG. 19 is a schematic structural diagram of a VIM server according to an embodiment of this application;
FIG. 20 is a schematic structural diagram of another VIM server according to an embodiment of this application;
FIG. 21 is a schematic structural diagram of still another VIM server according to an embodiment of this application.
DESCRIPTION OF EMBODIMENTS
The following clearly and completely describes the technical solutions in the embodiments of this application with reference to the accompanying drawings in the embodiments of this application.
The following describes the embodiments of this application with reference to the accompanying drawings.
The NFV system architecture provided in the embodiments of this application is shown in FIG. 1. The NFV system 100 may be used in various networks, for example, implemented in a data center network, an operator network, or a local area network. The NFV system 100 includes: an NFV management and orchestration (NFV MANO) system 101; an NFV infrastructure (NFVI) 102; multiple virtual network functions (VNF) 103; multiple element managements (EM) 104; a network service, VNF and infrastructure description 105; and an operation-support system/business support system (OSS/BSS) 106. The NFV management and orchestration system 101 includes an NFV orchestrator (NFVO) 1011, one or more VNF managers (VNFM) 1012, and a virtualized infrastructure manager (VIM) 1013. The network service, VNF and infrastructure description 105 and the OSS/BSS 106 are further discussed in the ETSI GS NFV 002 V1.1.1 standard.
The NFV MANO 101 is configured to monitor and manage the VNFs 103 and the NFVI 102. The NFVO 1011 may implement network services on the NFVI 102 (for example, layer 2 (L2) and layer 3 (L3) virtual private network (VPN) services), may execute resource-related requests from one or more VNFMs 1012, send configuration information to the VNFMs 1012, and collect status information of the VNFs 103. In addition, the NFVO 1011 may communicate with the VIM 1013 to allocate and/or reserve resources and exchange configuration and status information of virtualized hardware resources. The VNFM 1012 may manage one or more VNFs 103 and may perform various management functions, such as instantiating, updating, querying, scaling, and/or terminating the VNFs 103. The VIM 1013 may perform resource management functions, for example, managing the allocation of infrastructure resources (such as adding resources to a virtual container) and operation functions (such as collecting NFVI fault information). The VNFM 1012 and the VIM 1013 may communicate with each other to allocate resources and exchange configuration and status information of virtualized hardware resources.
The NFVI 102 includes a hardware resource layer 1021, a virtual resource layer (software resources) 1022, and a virtualization layer 1023. The NFVI 102 deploys a virtualized environment using hardware resources, software resources, or a combination of both. The hardware resource layer 1021 and the virtualization layer 1023 are configured to provide virtualized resources for the VNFs 103, for example, as virtual machines (VM) and other forms of virtual containers. The hardware resource layer 1021 includes computing hardware 10211, storage hardware 10212, and network hardware 10213. The computing hardware 10211 may be off-the-shelf hardware and/or user-customized hardware that provides processing and computing resources. The storage hardware 10212 may be storage capacity provided within the network or storage capacity residing in the storage hardware 10212 itself (local memory located in a server). In one implementation, the resources of the computing hardware 10211 and the storage hardware 10212 may be pooled together. The network hardware 10213 may be switches, routers, and/or any other network devices configured with switching functions. The network hardware 10213 may span multiple domains and may include multiple networks interconnected by one or more transport networks.
The virtualization layer 1023 in the NFVI 102 may abstract hardware resources from the physical layer and decouple the VNFs 103, so as to provide virtualized resources to the VNFs 103. The virtual resource layer 1022 includes virtual computing 10221, virtual storage 10222, and a virtual network 10223. The virtual computing 10221 and the virtual storage 10222 may be provided to the VNFs 103 in the form of virtual machines and/or other virtual containers. For example, one or more VNFs 103 may be deployed on one virtual machine. The virtualization layer 1023 abstracts the network hardware 10213 to form the virtual network 10223, which may include virtual switches (VS) that provide connections between virtual machines. In addition, the transport networks in the network hardware 10213 may be virtualized using a centralized control plane and a separate forwarding plane (for example, a software-defined network (SDN)).
The VNFM 1012 may interact with the VNFs 103 and the EMs 104 to manage the lifecycle of the VNFs and exchange configuration and status information. A VNF 103 may be configured as a virtualization of at least one network function performed by one physical network device. For example, in one implementation, the VNF 103 may be configured to provide functions of different network elements in an IP multimedia subsystem (IMS) network, such as a proxy call session control function (P-CSCF), a serving call session control function (S-CSCF), or a home subscriber server (HSS). The EM 104 is configured to manage one or more VNFs 103.
The control information transfer method, server, and system described in the embodiments of this application may be applied to moving VNF functions such as a session border controller (SBC), a firewall, or a packet data network gateway (PGW) forward into a virtual switch, or to implementing the configuration and use of the virtual switch by a VNF. The following description focuses on the SBC service as an example.
An SBC device in a telecommunications system needs to support functions such as attack defense and session-level call bandwidth control. For the attack defense function, a traditional SBC device sets a blacklist/whitelist (an access control list (ACL), described as ACL below) at the hardware layer of the network ingress. When processing a network data packet, the packet's address information is looked up in the ACL, and the packet is allowed through or discarded according to the policy configured in the ACL. ACL data is provided in two ways: static manual configuration, or dynamic configuration after attack detection. For the call bandwidth control function of a session, after the SBC negotiates and determines the allowed bandwidth of a session during the call, it delivers the network data address combination corresponding to the session and the allowed bandwidth data (a call admission control (CAC) data table) to the hardware layer of the network ingress. When processing network packets, the ingress hardware layer looks up this data table; if a packet's address combination is in the table, the bandwidth of the packets corresponding to that combination is counted, and if the bandwidth consumption of that network address combination exceeds the allowed bandwidth in the table, the packet is discarded and relevant information is recorded. This processing allows attack traffic, or abnormal traffic exceeding the allowed session bandwidth, to be discarded at an early position at the network ingress, providing strong attack-defense capability while saving the system processing resources consumed by invalid packets.
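The ingress-side ACL and CAC handling described above can be sketched in a few lines. This is an illustrative model only, not the patent's implementation; the names (`FlowKey`, `acl`, `cac`) and the byte-count accounting are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    """Address combination identifying a flow at the network ingress."""
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int

# ACL: flow -> "allow" | "drop"; CAC: flow -> allowed bandwidth (bytes per interval)
acl: dict = {}
cac: dict = {}
cac_usage: dict = {}  # bytes counted in the current interval per flow

def ingress_filter(key: FlowKey, packet_len: int) -> str:
    """Decide, at the network ingress, whether a packet passes."""
    if acl.get(key) == "drop":
        return "drop"            # blacklisted flow: discard early
    if key in cac:
        used = cac_usage.get(key, 0) + packet_len
        if used > cac[key]:
            return "drop"        # exceeds the session's allowed bandwidth
        cac_usage[key] = used
    return "allow"
```

A blacklisted flow is dropped before any upper-layer processing, and a CAC-limited flow is dropped once its counted bytes exceed the allowed bandwidth for the interval, mirroring the early discard described above.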
After cloudification, the SBC is deployed as a virtual machine application on a general-purpose server (COTS). The general-purpose server lacks the hardware-layer functions for attack defense or call session bandwidth control of network packets. Meanwhile, existing virtualization technology focuses on decoupling virtual applications from hardware; the isolation between the virtual machine application and the network processing hardware layer brought by this decoupling, together with the transparent processing of network data packets by the virtual switch, means that abnormal network traffic cannot be handled at an early position at the network ingress. As a result, the attack-defense capability is weak, and abnormal traffic causes invalid consumption of system processing resources.
FIG. 2 is a schematic diagram of an SBC attack defense or call bandwidth control function in the prior art. After cloudification, the SBC service is deployed as virtualized SBC service software 2011 in an SBC virtual machine 201; likewise, other services are deployed as virtualized other-service software 2021 on another virtual machine 202. The SBC virtual machine 201 and the other virtual machine 202 are both deployed on a general-purpose server (COTS) 200. The SBC virtual machine 201 obtains network ingress/egress capability by using a virtual switch 2001 of the virtualization layer, namely the virtual network interface card 2012 in the figure; similarly, the other virtual machine 202 also has a virtual network interface card 2022 providing network ingress/egress capability. The SBC virtual machine 201 can see only the virtual network interface card 2012 within the virtual machine. Network data packets arrive at the virtual switch 2001 through the physical network interface card 2002 and are then forwarded to the virtual network interface card 2012 of the SBC virtual machine 201. The SBC virtual machine 201 performs attack defense, call bandwidth control, and other processing on those packets. The virtual switch 2001 on the general-purpose server 200 is unaware of the content of the network data packets and transparently distributes them to the SBC virtual machine 201 only according to forwarding rules.
In this solution, attack packets are not perceived at the virtual switch or the physical layer of the network ingress (the physical network interface card); attack traffic or session-related network data can only be transparently passed to the SBC virtual machine, with two adverse effects. First, attack traffic or abnormal traffic exceeding the allowed session bandwidth undergoes no attack-defense processing (discarding) at the virtual switch, consuming the processing capacity of the virtual switch and reducing the network processing capacity available to other virtual machines deployed on the same server. Second, all such traffic is input to the SBC virtual machine, which must spend service-processing CPU to identify or process attack traffic or abnormal traffic beyond the allowed call bandwidth. For such abnormal traffic, the CPU processing capacity of the entire cloudified system suffers invalid resource consumption (the CPU consumed by the abnormal traffic at the virtual switch plus the SBC virtual machine's processing of the abnormal traffic).
FIG. 3 is a schematic diagram of another SBC attack defense or call bandwidth control function in the prior art. The cloudified SBC virtual machine 201 uses external devices to deliver the filtering information for attack defense or call session bandwidth control to the virtual switch 2001 by a roundabout route. For example, after identifying an attack source during service processing, the SBC virtual machine 201 generates ACL table information and sends a message to the element management system (EMS) 300, which is further passed to the policy and charging rules function (PCRF) 301; the PCRF 301 then delivers the filtering information to the software-defined network (SDN) controller 302, and the SDN controller 302 configures the network data packet processing filter rules required by the SBC virtual machine 201 into the virtual switch 2001 through the OpenFlow interface of the virtual switch 2001. In subsequent processing of network data packets, the virtual switch 2001 matches packets against the configured filter rules and allows through or discards matching packets according to the operations required by those rules.
This solution relies on multiple external devices to transfer the processing filter rules for the network data packets of the SBC virtual machine, with the following disadvantages: first, the information transmission path is long and passes through many stages, so guaranteeing reliable message delivery is costly; second, the transmission path involves multiple devices, so implementing the solution requires many devices, making it expensive and less economical.
FIG. 4 is a schematic diagram of a hardware structure of a server according to an embodiment of this application. The server 400 includes at least one processor 401, a communication bus 402, a memory 403, and at least one communication interface 404.
The processor 401 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling execution of the programs of the solutions of this application.
The communication bus 402 may include a path for transferring information between the foregoing components.
The communication interface 404 uses any transceiver-type apparatus to communicate with other devices or communication networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
The memory 403 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk storage medium or another magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, without being limited thereto. The memory may exist independently and be connected to the processor through the bus, or may be integrated with the processor.
The memory 403 is configured to store application program code for executing the solutions of this application, and execution is controlled by the processor 401. The processor 401 is configured to execute the application program code stored in the memory 403, thereby implementing the control information transfer method described in the embodiments of this application.
In a specific implementation, as an embodiment, the processor 401 may include one or more CPUs, for example, CPU0 and CPU1 in FIG. 4.
In a specific implementation, as an embodiment, the server 400 may include multiple processors, for example, the processor 401 and the processor 408 in FIG. 4. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor here may refer to one or more devices, circuits, and/or processing cores for processing data (for example, computer program instructions).
In a specific implementation, as an embodiment, the server 400 may further include an output device 405 and an input device 406. The output device 405 communicates with the processor 401 and may display information in multiple ways. For example, the output device 405 may be a liquid crystal display (LCD), a light-emitting diode (LED) display device, a cathode ray tube (CRT) display device, or a projector. The input device 406 communicates with the processor 401 and may accept user input in multiple ways. For example, the input device 406 may be a mouse, a keyboard, a touchscreen device, or a sensing device.
The server 400 may be a general-purpose server or a dedicated server, or a device with a structure similar to that in FIG. 4. The type of the server 400 is not limited in the embodiments of this application. For example, the server 400 may be the VNFM 1012 server, the VIM 1013 server, or the NFVI 102 server shown in FIG. 1. It should be noted that although the embodiments of this application describe each function as corresponding to one server, those skilled in the art can understand that in an actual product multiple functions may also be implemented on one server, all of which fall within the protection scope of the embodiments of this application.
The following describes the embodiments of this application in detail by using an example in which the virtual machine is an SBC virtual machine and the service software is SBC service software. Those skilled in the art can understand that other types of virtual machines (such as a firewall or a PGW) likewise fall within the protection scope of the embodiments of this application.
An embodiment of this application provides a control information transfer method, as shown in FIG. 5, including:
S001. The service software of the virtual machine generates control information and sends it to the first device, where the control information is used for attack defense or call session bandwidth control.
When the virtual machine is a session border controller (SBC) virtual machine, the control information is used for attack defense or call session bandwidth control, so that the virtual switch can implement the attack defense or call session bandwidth control function of the SBC virtual machine. In this case, the control information includes: an identifier of the SBC virtual machine, a flow rule type, a flow processing operation type, and a parameter package, where the flow processing operation type indicates add, modify, or delete. When the flow rule type is access control list (ACL), the parameter package includes a source Internet Protocol (IP) address, a source port number, a destination IP address, a destination port number, and a filtering action, where the filtering action indicates whether to allow a packet through or discard it. When the flow rule type is call admission control (CAC), the parameter package includes a source IP address, a source port number, a destination IP address, a destination port number, and an allowed bandwidth.
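The control information fields listed above can be modeled as a small message structure. The class and field names below are illustrative assumptions for the sketch, not the patent's wire format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ControlInfo:
    """One control-information message from the VM's service software."""
    vm_id: str                  # identifier of the (SBC) virtual machine
    rule_type: str              # "ACL" or "CAC"
    op_type: str                # "add", "modify", or "delete"
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int
    filter_action: Optional[str] = None      # ACL only: "allow" or "drop"
    allowed_bandwidth: Optional[int] = None  # CAC only

    def validate(self) -> bool:
        """Check that the parameter package matches the flow rule type."""
        if self.rule_type == "ACL":
            return self.filter_action in ("allow", "drop")
        if self.rule_type == "CAC":
            return self.allowed_bandwidth is not None and self.allowed_bandwidth >= 0
        return False
```

An ACL message must carry a filtering action, and a CAC message must carry an allowed bandwidth; `validate` rejects anything else.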
S002. The first device receives the control information from the service software of the session border controller virtual machine.
S003. The first device sends the control information to the second device.
S004. The second device receives the control information from the first device.
S005. The second device configures the control information to the virtual switch.
The first device and the second device may be virtual devices implemented in software, or physical devices. That is, the first device may be a virtual proxy device front end configured in the virtual machine, and the second device may be a virtual proxy device back end configured in the virtual network of the virtual resource layer; or the first device is a virtualized network function manager VNFM, and the second device is a virtualized infrastructure manager VIM.
In the control information transfer method provided in this embodiment of this application, the service software of the virtual machine transfers the control information to the virtual switch through the first device and the second device, so that in the NFV system the virtual machine can send control information to the virtual switch to enable the virtual switch to implement a specific function of the virtual machine.
Optionally, as shown in FIG. 6, the method may further include:
S006. The virtual switch sends control result information to the second device, where the control result information indicates whether the control information is configured successfully.
S007. The second device receives the control result information from the virtual switch.
S008. The second device sends the control result information to the first device.
S009. The first device receives the control result information from the second device.
S010. The first device sends the control result information to the service software of the virtual machine.
This implementation enables the virtual switch to feed back the control result information of the control information to the service software of the virtual machine through the second device and the first device.
Taking the first device and the second device being virtual devices implemented in software as an example, an embodiment of this application provides a control information transfer method, as shown in FIG. 7, including:
S101. The service software of the virtual machine generates control information and sends it to the virtual proxy device front end.
As shown in FIG. 8, a virtual proxy device for flow processing may be added: a virtual proxy device front end 2013 is configured in the virtual machine so that it can communicate directly with the service software 2011 in the virtual machine, and a virtual proxy device back end 2003 is configured in the virtual network of the virtual resource layer so that it can communicate directly with the virtual switch 2001, which is also located in the virtual network of the virtual resource layer. Corresponding to the NFV architecture diagram shown in FIG. 1, the virtual proxy device front end 2013 is configured in the VNF 103, and the virtual proxy device back end 2003 is configured in the virtual network 10223 of the virtual resource layer.
Taking the virtual machine being an SBC virtual machine and the service software being SBC service software as an example, the SBC service software generates control information according to the attack situation or the call session bandwidth control requirements, and configures the control information through the interface provided by the virtual proxy device front end of the SBC virtual machine.
In addition, when sending the foregoing information to the virtual proxy device front end 2013, the service software of the virtual machine also sends an operation command word identifier indicating whether the control action is setting a flow rule (SetFlowRule) or querying statistics (QueryStat). When configuring control information, a set-flow-rule operation needs to be delivered.
S102. The virtual proxy device front end receives the control information from the service software of the virtual machine.
S103. The virtual proxy device front end sends the control information to the virtual proxy device back end.
Based on a standardized virtual device mechanism, the virtual proxy device front end 2013 transparently passes the control information to the virtual proxy device back end 2003.
S104. The virtual proxy device back end receives the control information from the virtual proxy device front end.
S105. The virtual proxy device back end configures the control information to the virtual switch.
The virtual proxy device back end 2003 constructs different processing procedures based on the type of the transferred information, and finally calls the OpenFlow interface provided by the virtual switch to configure the control information into the virtual switch. Specifically, the virtual proxy device back end 2003 calls the virtual network interface of the virtualization layer according to the virtual machine identifier (VM-ID) in the information, queries the virtual port (VM-Port) of the virtual switch corresponding to that identifier, then calls the OpenFlow interface, maps the operation in the control information to the corresponding OpenFlow operation model, and sends it to the virtual switch.
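Steps S101-S105 amount to a two-hop relay: the front end passes the message through unchanged, and the back end resolves the VM-ID to a switch port and issues an OpenFlow-style call. The sketch below uses hypothetical class and method names (`ProxyFrontend`, `ProxyBackend`, `open_flow`); only the relay shape comes from the text.

```python
class VirtualSwitch:
    """Stand-in for the virtual switch's OpenFlow configuration interface."""
    def __init__(self):
        self.rules = {}  # VM-Port -> list of configured rules
    def open_flow(self, port: str, rule: dict) -> str:
        self.rules.setdefault(port, []).append(rule)
        return "success"

class ProxyBackend:
    """Back end in the virtual network: maps VM-ID to VM-Port, configures the switch."""
    def __init__(self, switch: VirtualSwitch, port_map: dict):
        self.switch = switch
        self.port_map = port_map  # VM-ID -> VM-Port (from the virtualization layer)
    def handle(self, info: dict) -> str:
        port = self.port_map[info["vm_id"]]       # query VM-Port for this VM-ID
        rule = {k: v for k, v in info.items() if k != "vm_id"}
        return self.switch.open_flow(port, rule)  # map onto the OpenFlow-style call

class ProxyFrontend:
    """Front end in the VM: transparent pass-through (S103/S104)."""
    def __init__(self, backend: ProxyBackend):
        self.backend = backend
    def set_flow_rule(self, info: dict) -> str:
        return self.backend.handle(info)
```

The service software only talks to the front end; the VM-ID to VM-Port mapping and the switch call are entirely the back end's concern.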
Optionally, when the virtual switch supports delivering packet filtering information to the hardware layer of the network ingress (for example, the physical network interface card), the virtual switch automatically delivers the foregoing control information to the hardware layer of the network ingress.
The actions in S101-S105 may be performed by the processor 401 in the server 400 shown in FIG. 4 by calling the application program code stored in the memory 403; in this case, the server 400 is the NFVI 102 server shown in FIG. 1.
In the control information transfer method provided in this embodiment of this application, the control information generated by the service software of the virtual machine is transferred to the virtual switch through the virtual proxy device front end located in the virtual machine and the virtual proxy device back end located in the virtual network, so that in the NFV system the virtual machine can send control information to the virtual switch to enable the virtual switch to implement a specific function of the virtual machine.
Optionally, as shown in FIG. 9, the method further includes:
S201. The virtual switch sends control result information to the virtual proxy device back end, where the control result information indicates whether the control information is configured successfully.
After processing the attack defense or call session bandwidth control information delivered by the virtual device back end, the virtual switch replies to the virtual proxy device back end whether the processing succeeded or failed; if it failed, the failure cause is also given.
S202. The virtual proxy device back end receives the control result information from the virtual switch.
S203. The virtual proxy device back end sends the control result information to the virtual proxy device front end.
The virtual proxy device back end transparently passes the control result information to the virtual proxy device front end.
S204. The virtual proxy device front end receives the control result information from the virtual proxy device back end.
S205. The virtual proxy device front end sends the control result information to the service software of the virtual machine.
The virtual proxy device front end transparently passes the control result information to the service software of the virtual machine.
The actions in S201-S205 may be performed by the processor 401 in the server 400 shown in FIG. 4 by calling the application program code stored in the memory 403; in this case, the server 400 is the NFVI 102 server shown in FIG. 1.
This implementation enables the virtual switch to feed back the control result information of the control information to the service software of the virtual machine through the virtual proxy device back end and the virtual proxy device front end.
Optionally, as shown in FIG. 10, the method further includes:
S301. The service software of the virtual machine generates query information and sends it to the virtual proxy device front end.
The query information is used to query network data packet statistics; its operation command word identifier is query statistics (QueryStat), and the query information also contains the identifier of the virtual machine.
S302. The virtual proxy device front end receives the query information.
S303. The virtual proxy device front end sends the query information to the virtual proxy device back end.
S304. The virtual proxy device back end receives the query information.
S305. The virtual proxy device back end configures the query information to the virtual switch.
The virtual proxy device back end parses the information transferred from the virtual proxy device front end. If the operation command word identifier is query statistics (QueryStat), it maps the virtual machine identifier to the virtual port (VM-Port) of the virtual switch, constructs a query interface message, and sends it to the virtual switch.
S306. The virtual switch sends query result information to the virtual proxy device back end.
For example, the query result information is fed back through an OpenFlowRsp(Query, VM-Port, StatInfo) command, where OpenFlowRsp denotes an OpenFlow response message, Query indicates that the content is a query result, VM-Port denotes the virtual port of the virtual switch, and StatInfo denotes the specific query result.
S307. The virtual proxy device back end receives the query result information from the virtual switch.
S308. The virtual proxy device back end sends the query result information to the virtual proxy device front end.
The virtual proxy device back end transparently passes the query result information to the virtual proxy device front end.
S309. The virtual proxy device front end receives the query result information from the virtual proxy device back end.
S310. The virtual proxy device front end sends the query result information to the service software of the virtual machine.
The virtual proxy device front end transparently passes the query result information to the service software of the virtual machine.
The actions in S301-S310 may be performed by the processor 401 in the server 400 shown in FIG. 4 by calling the application program code stored in the memory 403; in this case, the server 400 is the NFVI 102 server shown in FIG. 1.
This implementation enables the service software of the virtual machine to send the query information to the virtual switch through the virtual proxy device front end and the virtual proxy device back end, and enables the virtual switch to feed back the query result information to the service software of the virtual machine through the virtual proxy device back end and the virtual proxy device front end.
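The QueryStat exchange (S301-S310) follows the same relay path in both directions. The sketch below is an assumed model of that round trip, representing the OpenFlowRsp(Query, VM-Port, StatInfo) response from the example above as a plain tuple; the names `StatSwitch` and `query_stat` are illustrative.

```python
class StatSwitch:
    """Virtual switch stub that returns per-port packet statistics."""
    def __init__(self, stats: dict):
        self.stats = stats  # VM-Port -> StatInfo
    def query(self, port: str) -> tuple:
        # mirrors OpenFlowRsp(Query, VM-Port, StatInfo)
        return ("Query", port, self.stats.get(port, {}))

def query_stat(vm_id: str, port_map: dict, switch: StatSwitch) -> dict:
    """Back end maps VM-ID -> VM-Port and queries the switch; the StatInfo
    result is then passed back transparently to the service software."""
    port = port_map[vm_id]
    _, _, stat_info = switch.query(port)
    return stat_info
```

The service software again sees only the VM-ID; the port resolution and the switch query happen entirely below it.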
Taking the first device and the second device being physical devices as an example, an embodiment of this application provides another control information transfer method, as shown in FIG. 11, including:
S401. The service software of the virtual machine generates control information and sends it to the VNFM.
As shown in FIG. 12, taking the virtual machine being an SBC virtual machine and the service software being SBC service software as an example, the SBC service software 2011 generates control information according to the attack situation or the call session bandwidth control requirements, and sends it to the VNFM 1012 through the interface provided by the VNFM 1012. The control information here is the same as the foregoing control information, and details are not described again.
S402. The VNFM receives the control information from the service software of the SBC virtual machine.
S403. The VNFM sends the control information to the VIM.
The VNFM sends the control information to the VIM according to the interface capability provided by the VIM 1013.
S404. The VIM receives the control information from the VNFM.
S405. The VIM configures the control information to the virtual switch, for the virtual switch to perform attack defense or call session bandwidth control.
The VIM completes the message mapping and processing from the VIM to the virtual switch, and then sends the result to the virtual switch.
The action in S401 may be performed by the processor 401 in the server 400 shown in FIG. 4 by calling the application program code stored in the memory 403; in this case, the server 400 is the NFVI 102 server shown in FIG. 1. The actions in S402 and S403 may be performed by the processor 401 in the server 400 shown in FIG. 4 by calling the application program code stored in the memory 403; in this case, the server 400 is the VNFM 1012 server shown in FIG. 1. The actions in S404 and S405 may be performed by the processor 401 in the server 400 shown in FIG. 4 by calling the application program code stored in the memory 403; in this case, the server 400 is the VIM 1013 server shown in FIG. 1.
In the control information transfer method provided in this embodiment of this application, the service software in the virtual machine configures the control information to the virtual switch through the VNFM and the VIM, so that in the NFV system the virtual machine can send control information to the virtual switch to enable the virtual switch to implement a specific function of the virtual machine. Compared with the prior-art solution in which the control information is configured to the virtual switch through the EMS, the PCRF, and the SDN controller, the VNFM and the VIM are existing devices in the existing NFV architecture, so this solution is more economical.
It should be noted that the VNFM corresponds to the virtual proxy device front end and the VIM corresponds to the virtual proxy device back end; the difference is that the service software of the virtual machine, the VNFM, the VIM, and the virtual switch need to follow existing communication protocols. Therefore, similarly to steps S201-S205, the virtual switch may also send the control result information to the service software of the virtual machine through the VIM and the VNFM; and similarly to steps S301-S310, the service software of the virtual machine may also send the query information to the virtual switch through the VNFM and the VIM, and the virtual switch may also send the query result information to the service software of the virtual machine through the VIM and the VNFM. Details are not described herein again.
In the embodiments of this application, each device may be divided into functional modules according to the foregoing method examples. For example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of this application is illustrative and is merely a logical function division; there may be other division manners in actual implementation.
In the case where each functional module is divided corresponding to each function, FIG. 13 shows a possible schematic structural diagram of the NFVI server involved in the foregoing embodiments. The NFVI server 13 includes: a service software module 1311, a virtual proxy device front end 1312, a virtual proxy device back end 1313, and a virtual switch 1314. The service software module 1311 is configured to support the NFVI server 13 in performing the process S001 in FIG. 5, the process S101 in FIG. 7, the process S301 in FIG. 10, and the process S401 in FIG. 11; the virtual proxy device front end 1312 is configured to support the NFVI server 13 in performing the processes S002 and S003 in FIG. 5, the processes S009 and S010 in FIG. 6, the processes S102 and S103 in FIG. 7, the processes S204 and S205 in FIG. 9, and the processes S302, S303, S309, and S310 in FIG. 10; the virtual proxy device back end 1313 is configured to support the NFVI server 13 in performing the processes S004 and S005 in FIG. 5, the processes S007 and S008 in FIG. 6, the processes S104 and S105 in FIG. 7, the processes S202 and S203 in FIG. 9, and the processes S304, S305, S307, and S308 in FIG. 10; the virtual switch 1314 is configured to support the NFVI server 13 in performing the process S006 in FIG. 6, the process S201 in FIG. 9, and the process S306 in FIG. 10. All related content of the steps involved in the foregoing method embodiments may be referred to the functional descriptions of the corresponding functional modules; details are not described herein again.
In the case where integrated units are used, FIG. 14 shows a possible schematic structural diagram of the NFVI server involved in the foregoing embodiments. The NFVI server 13 includes a processing module 1322 and a communication module 1323. The processing module 1322 is configured to control and manage the actions of the NFVI server 13; for example, the processing module 1322 is configured to support the NFVI server 13 in performing the processes S001-S005 in FIG. 5, the processes S006-S010 in FIG. 6, the processes S101-S105 in FIG. 7, the processes S201-S205 in FIG. 9, the processes S301-S310 in FIG. 10, and the process S401 in FIG. 11. The communication module 1323 is configured to support communication between the NFVI server and other entities, for example, with the functional modules or network entities shown in FIG. 1. The NFVI server 13 may further include a storage module 1321, configured to store program code and data of the NFVI server.
The processing module 1322 may be a processor or a controller, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various illustrative logical blocks, modules, and circuits described with reference to the disclosure of this application. The processor may also be a combination implementing computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication module 1323 may be a transceiver, a transceiver circuit, a communication interface, or the like. The storage module 1321 may be a memory.
When the processing module 1322 is a processor, the communication module 1323 is a transceiver, and the storage module 1321 is a memory, the NFVI server involved in this embodiment of this application may be the NFVI server 13 shown in FIG. 15.
As shown in FIG. 15, the NFVI server 13 includes a processor 1332, a transceiver 1333, a memory 1331, and a bus 1334. The transceiver 1333, the processor 1332, and the memory 1331 are connected to each other through the bus 1334; the bus 1334 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is used in the figure, but this does not mean that there is only one bus or only one type of bus.
In the case where each functional module is divided corresponding to each function, FIG. 16 shows a possible schematic structural diagram of the VNFM server involved in the foregoing embodiments. The VNFM server 16 includes a receiving unit 1611 and a sending unit 1612. The receiving unit 1611 is configured to support the VNFM server 16 in performing the process S002 in FIG. 5, the process S009 in FIG. 6, and the process S402 in FIG. 11; the sending unit 1612 is configured to support the VNFM server 16 in performing the process S003 in FIG. 5, the process S010 in FIG. 6, and the process S403 in FIG. 11. All related content of the steps involved in the foregoing method embodiments may be referred to the functional descriptions of the corresponding functional modules; details are not described herein again.
In the case where integrated units are used, FIG. 17 shows a possible schematic structural diagram of the VNFM server involved in the foregoing embodiments. The VNFM server 16 includes a processing module 1622 and a communication module 1623. The processing module 1622 is configured to control and manage the actions of the VNFM server 16; for example, the processing module 1622 is configured to support the VNFM server 16 in performing the processes S002 and S003 in FIG. 5, the processes S009 and S010 in FIG. 6, and the processes S402 and S403 in FIG. 11. The communication module 1623 is configured to support communication between the VNFM server and other entities, for example, with the functional modules or network entities shown in FIG. 1. The VNFM server 16 may further include a storage module 1621, configured to store program code and data of the VNFM server.
The processing module 1622 may be a processor or a controller, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various illustrative logical blocks, modules, and circuits described with reference to the disclosure of this application. The processor may also be a combination implementing computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication module 1623 may be a transceiver, a transceiver circuit, a communication interface, or the like. The storage module 1621 may be a memory.
When the processing module 1622 is a processor, the communication module 1623 is a transceiver, and the storage module 1621 is a memory, the VNFM server involved in this embodiment of this application may be the VNFM server 16 shown in FIG. 18.
As shown in FIG. 18, the VNFM server 16 includes a processor 1632, a transceiver 1633, a memory 1631, and a bus 1634. The transceiver 1633, the processor 1632, and the memory 1631 are connected to each other through the bus 1634; the bus 1634 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is used in the figure, but this does not mean that there is only one bus or only one type of bus.
In the case where each functional module is divided corresponding to each function, FIG. 19 shows a possible schematic structural diagram of the VIM server involved in the foregoing embodiments. The VIM server 19 includes a receiving unit 1911 and a sending unit 1912. The receiving unit 1911 is configured to support the VIM server 19 in performing the process S004 in FIG. 5, the process S007 in FIG. 6, and the process S404 in FIG. 11; the sending unit 1912 is configured to support the VIM server 19 in performing the process S005 in FIG. 5, the process S008 in FIG. 6, and the process S405 in FIG. 11. All related content of the steps involved in the foregoing method embodiments may be referred to the functional descriptions of the corresponding functional modules; details are not described herein again.
In the case where integrated units are used, FIG. 20 shows a possible schematic structural diagram of the VIM server involved in the foregoing embodiments. The VIM server 19 includes a processing module 1922 and a communication module 1923. The processing module 1922 is configured to control and manage the actions of the VIM server 19; for example, the processing module 1922 is configured to support the VIM server 19 in performing the processes S004 and S005 in FIG. 5, the processes S007 and S008 in FIG. 6, and the processes S404 and S405 in FIG. 11. The communication module 1923 is configured to support communication between the VIM server and other entities, for example, with the functional modules or network entities shown in FIG. 1. The VIM server 19 may further include a storage module 1921, configured to store program code and data of the VIM server.
The processing module 1922 may be a processor or a controller, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various illustrative logical blocks, modules, and circuits described with reference to the disclosure of this application. The processor may also be a combination implementing computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication module 1923 may be a transceiver, a transceiver circuit, a communication interface, or the like. The storage module 1921 may be a memory.
When the processing module 1922 is a processor, the communication module 1923 is a transceiver, and the storage module 1921 is a memory, the VIM server involved in this embodiment of this application may be the VIM server 19 shown in FIG. 21.
As shown in FIG. 21, the VIM server 19 includes a processor 1932, a transceiver 1933, a memory 1931, and a bus 1934. The transceiver 1933, the processor 1932, and the memory 1931 are connected to each other through the bus 1934; the bus 1934 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is used in the figure, but this does not mean that there is only one bus or only one type of bus.
The foregoing descriptions are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (19)

  1. A control information transfer method, comprising:
    receiving, by a first device, control information from service software of a virtual machine; and
    sending, by the first device, the control information to a second device;
    wherein the first device is a virtual proxy device front end, the virtual proxy device front end is configured in the virtual machine, the second device is a virtual proxy device back end, and the virtual proxy device back end is configured in a virtual network of a virtual resource layer;
    or,
    the first device is a virtualized network function manager VNFM, and the second device is a virtualized infrastructure manager VIM.
  2. The method according to claim 1, wherein the method further comprises:
    receiving, by the first device, control result information from the second device, wherein the control result information is used to indicate whether the control information is configured successfully; and
    sending, by the first device, the control result information to the service software of the virtual machine.
  3. The method according to claim 1 or 2, wherein when the virtual machine is a session border controller SBC virtual machine, the control information is used for attack defense or call session bandwidth control, and the control information comprises: an identifier of the virtual machine, a flow rule type, a flow processing operation type, and a parameter package, wherein the flow processing operation type is used to indicate add, modify, or delete;
    when the flow rule type is access control list ACL, the parameter package comprises a source Internet Protocol IP address, a source port number, a destination IP address, a destination port number, and a filtering action, wherein the filtering action is used to indicate whether to allow a packet through or discard it; and
    when the flow rule type is call admission control CAC, the parameter package comprises a source IP address, a source port number, a destination IP address, a destination port number, and an allowed bandwidth.
  4. A control information transfer method, comprising:
    receiving, by a second device, control information from a first device; and
    configuring, by the second device, the control information to a virtual switch;
    wherein the first device is a virtual proxy device front end, the virtual proxy device front end is configured in a virtual machine, the second device is a virtual proxy device back end, and the virtual proxy device back end is configured in a virtual network of a virtual resource layer;
    or,
    the first device is a virtualized network function manager VNFM, and the second device is a virtualized infrastructure manager VIM.
  5. The method according to claim 4, wherein the method further comprises:
    receiving, by the second device, control result information from the virtual switch, wherein the control result information is used to indicate whether the control information is configured successfully; and
    sending, by the second device, the control result information to the first device.
  6. The method according to claim 4 or 5, wherein when the virtual machine is a session border controller SBC virtual machine, the control information is used for attack defense or call session bandwidth control, and the control information comprises: an identifier of the virtual machine, a flow rule type, a flow processing operation type, and a parameter package, wherein the flow processing operation type is used to indicate add, modify, or delete;
    when the flow rule type is access control list ACL, the parameter package comprises a source Internet Protocol IP address, a source port number, a destination IP address, a destination port number, and a filtering action, wherein the filtering action is used to indicate whether to allow a packet through or discard it; and
    when the flow rule type is call admission control CAC, the parameter package comprises a source IP address, a source port number, a destination IP address, a destination port number, and an allowed bandwidth.
  7. A network functions virtualization infrastructure NFVI server, comprising:
    a virtual proxy device front end, configured to receive control information from service software of a virtual machine and send the control information to a virtual proxy device back end, wherein the virtual proxy device front end is configured in the virtual machine, and the virtual proxy device back end is configured in a virtual network of a virtual resource layer; and
    the virtual proxy device back end, configured to receive the control information from the virtual proxy device front end and send the control information to a virtual switch.
  8. The NFVI server according to claim 7, wherein
    the virtual proxy device back end is further configured to receive control result information from the virtual switch and send the control result information to the virtual proxy device front end, wherein the control result information is used to indicate whether the control information is configured successfully; and
    the virtual proxy device front end is further configured to receive the control result information from the virtual proxy device back end and send the control result information to the service software of the virtual machine.
  9. The NFVI server according to claim 7 or 8, wherein when the virtual machine is a session border controller SBC virtual machine, the control information is used for attack defense or call session bandwidth control, and the control information comprises: an identifier of the virtual machine, a flow rule type, a flow processing operation type, and a parameter package, wherein the flow processing operation type is used to indicate add, modify, or delete;
    when the flow rule type is access control list ACL, the parameter package comprises a source Internet Protocol IP address, a source port number, a destination IP address, a destination port number, and a filtering action, wherein the filtering action is used to indicate whether to allow a packet through or discard it; and
    when the flow rule type is call admission control CAC, the parameter package comprises a source IP address, a source port number, a destination IP address, a destination port number, and an allowed bandwidth.
  10. A virtualized network function manager VNFM server, comprising:
    a receiving unit, configured to receive control information from service software of a session border controller virtual machine; and
    a sending unit, configured to send the control information to a virtualized infrastructure manager VIM.
  11. The VNFM server according to claim 10, wherein
    the receiving unit is further configured to receive control result information from the VIM, wherein the control result information is used to indicate whether the control information is configured successfully; and
    the sending unit is further configured to send the control result information to the service software of the virtual machine.
  12. The VNFM server according to claim 10 or 11, wherein when the virtual machine is a session border controller SBC virtual machine, the control information is used for attack defense or call session bandwidth control, and the control information comprises: an identifier of the virtual machine, a flow rule type, a flow processing operation type, and a parameter package, wherein the flow processing operation type is used to indicate add, modify, or delete;
    when the flow rule type is access control list ACL, the parameter package comprises a source Internet Protocol IP address, a source port number, a destination IP address, a destination port number, and a filtering action, wherein the filtering action is used to indicate whether to allow a packet through or discard it; and
    when the flow rule type is call admission control CAC, the parameter package comprises a source IP address, a source port number, a destination IP address, a destination port number, and an allowed bandwidth.
  13. A virtualized infrastructure manager VIM server, comprising:
    a receiving unit, configured to receive control information from a virtualized network function manager VNFM; and
    a sending unit, configured to configure the control information to a virtual switch.
  14. The VIM server according to claim 13, wherein
    the receiving unit is further configured to receive control result information from the virtual switch, wherein the control result information is used to indicate whether the control information is configured successfully; and
    the sending unit is further configured to send the control result information to the VNFM.
  15. The VIM server according to claim 13 or 14, wherein the control information comprises: an identifier of the SBC virtual machine, a flow rule type, a flow processing operation type, and a parameter package, wherein the flow processing operation type is used to indicate add, modify, or delete;
    when the flow rule type is access control list ACL, the parameter package comprises a source Internet Protocol IP address, a source port number, a destination IP address, a destination port number, and a filtering action, wherein the filtering action is used to indicate whether to allow a packet through or discard it; and
    when the flow rule type is call admission control CAC, the parameter package comprises a source IP address, a source port number, a destination IP address, a destination port number, and an allowed bandwidth.
  16. A network functions virtualization infrastructure NFVI server, comprising: a processor, a memory, a bus, and a communication interface, wherein the memory is configured to store computer-executable instructions, the processor is connected to the memory through the bus, and when the NFVI server runs, the processor executes the computer-executable instructions stored in the memory, so that the NFVI server performs the control information transfer method according to any one of claims 1-6.
  17. A virtualized network function manager VNFM server, comprising: a processor, a memory, a bus, and a communication interface, wherein the memory is configured to store computer-executable instructions, the processor is connected to the memory through the bus, and when the VNFM server runs, the processor executes the computer-executable instructions stored in the memory, so that the VNFM server performs the control information transfer method according to any one of claims 1-3.
  18. A virtualized infrastructure manager VIM server, comprising: a processor, a memory, a bus, and a communication interface, wherein the memory is configured to store computer-executable instructions, the processor is connected to the memory through the bus, and when the VIM server runs, the processor executes the computer-executable instructions stored in the memory, so that the VIM server performs the control information transfer method according to any one of claims 4-6.
  19. A network functions virtualization NFV communication system, comprising the network functions virtualization infrastructure NFVI server according to any one of claims 7-9; or comprising the virtualized network function manager VNFM server according to any one of claims 10-12 and the virtualized infrastructure manager VIM server according to any one of claims 13-15; or comprising the NFVI server according to claim 16; or comprising the VNFM server according to claim 17 and the VIM server according to claim 18.
PCT/CN2018/077070 2017-02-24 2018-02-23 Control information transfer method, server and system WO2018153355A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710104539.5 2017-02-24
CN201710104539.5A CN108512779B (zh) 2017-02-24 2017-02-24 Control information transfer method, server and system

Publications (1)

Publication Number Publication Date
WO2018153355A1 true WO2018153355A1 (zh) 2018-08-30

Family

ID=63252402

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/077070 WO2018153355A1 (zh) 2017-02-24 2018-02-23 控制信息传递方法、服务器和系统

Country Status (2)

Country Link
CN (1) CN108512779B (zh)
WO (1) WO2018153355A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114793217A (zh) * 2022-03-24 2022-07-26 阿里云计算有限公司 Intelligent network interface card, data forwarding method and apparatus, and electronic device
CN115801709A (zh) * 2023-01-20 2023-03-14 苏州浪潮智能科技有限公司 Method and apparatus for managing routing MAC addresses, electronic device, and storage medium
WO2023236858A1 (zh) * 2022-06-06 2023-12-14 华为技术有限公司 Flow table rule management method, traffic management method, system, and storage medium
CN114793217B (zh) * 2022-03-24 2024-06-04 阿里云计算有限公司 Intelligent network interface card, data forwarding method and apparatus, and electronic device

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111600755B (zh) * 2020-05-13 2023-02-28 天翼数字生活科技有限公司 Internet access behavior management system and method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104253866A (zh) * 2014-09-20 2014-12-31 华为技术有限公司 Software deployment method and system for virtual network function network element, and related device
CN104410672A (zh) * 2014-11-12 2015-03-11 华为技术有限公司 Method for upgrading network function virtualization application, and method and apparatus for forwarding service
CN104486234A (zh) * 2014-11-21 2015-04-01 华为技术有限公司 Method for offloading service switch to physical network interface card, and server
CN105791175A (zh) * 2014-12-26 2016-07-20 电信科学技术研究院 Method and device for controlling transmission resources in software-defined network
US20160328258A1 (en) * 2013-12-27 2016-11-10 Ntt Docomo, Inc. Management system, overall management node, and management method

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9817695B2 (en) * 2009-04-01 2017-11-14 Vmware, Inc. Method and system for migrating processes between virtual machines
JP5839032B2 (ja) * 2011-02-24 2016-01-06 日本電気株式会社 Network system, controller, and flow control method
US20130034094A1 (en) * 2011-08-05 2013-02-07 International Business Machines Corporation Virtual Switch Data Control In A Distributed Overlay Network
CN103023827B (zh) * 2012-11-23 2017-04-19 杭州华三通信技术有限公司 Data forwarding method for virtualized data center and device implementing same
CN103780674B (zh) * 2013-11-13 2017-05-31 南京中兴新软件有限责任公司 Virtual machine communication method and apparatus based on hardware emulation
US9497235B2 (en) * 2014-05-30 2016-11-15 Shoretel, Inc. Determining capacity of virtual devices in a voice over internet protocol system
CN105282003B (zh) * 2014-06-20 2019-03-22 中国电信股份有限公司 Method and system for establishing tunnel, tunnel controller, and virtual switch
US10237354B2 (en) * 2014-09-25 2019-03-19 Intel Corporation Technologies for offloading a virtual service endpoint to a network interface card
CN104618234B (zh) * 2015-01-22 2018-12-07 华为技术有限公司 Method and system for controlling switching of network traffic transmission path

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160328258A1 (en) * 2013-12-27 2016-11-10 Ntt Docomo, Inc. Management system, overall management node, and management method
CN104253866A (zh) * 2014-09-20 2014-12-31 华为技术有限公司 Software deployment method and system for virtual network function network element, and related device
CN104410672A (zh) * 2014-11-12 2015-03-11 华为技术有限公司 Method for upgrading network function virtualization application, and method and apparatus for forwarding service
CN104486234A (zh) * 2014-11-21 2015-04-01 华为技术有限公司 Method for offloading service switch to physical network interface card, and server
CN105791175A (zh) * 2014-12-26 2016-07-20 电信科学技术研究院 Method and device for controlling transmission resources in software-defined network

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114793217A (zh) * 2022-03-24 2022-07-26 阿里云计算有限公司 Intelligent network interface card, data forwarding method and apparatus, and electronic device
CN114793217B (zh) * 2022-03-24 2024-06-04 阿里云计算有限公司 Intelligent network interface card, data forwarding method and apparatus, and electronic device
WO2023236858A1 (zh) * 2022-06-06 2023-12-14 华为技术有限公司 Flow table rule management method, traffic management method, system, and storage medium
CN115801709A (zh) * 2023-01-20 2023-03-14 苏州浪潮智能科技有限公司 Method and apparatus for managing routing MAC addresses, electronic device, and storage medium
CN115801709B (zh) * 2023-01-20 2023-05-23 苏州浪潮智能科技有限公司 Method and apparatus for managing routing MAC addresses, electronic device, and storage medium

Also Published As

Publication number Publication date
CN108512779A (zh) 2018-09-07
CN108512779B (zh) 2020-11-27

Similar Documents

Publication Publication Date Title
US11036536B2 (en) Method, apparatus, and system for deploying virtualized network function using network edge computing
WO2018024059A1 (zh) 一种虚拟化网络中业务部署的方法和装置
US10880248B2 (en) Orchestrator agnostic application container visibility
US10313380B2 (en) System and method for centralized virtual interface card driver logging in a network environment
US11108653B2 (en) Network service management method, related apparatus, and system
US9825808B2 (en) Network configuration via abstraction components and standard commands
US20130124702A1 (en) Method and System For Network Configuration And/Or Provisioning Based On Metadata
US10572291B2 (en) Virtual network management
WO2016184283A1 (zh) 一种虚拟机数据流管理方法和系统
WO2018153355A1 (zh) 控制信息传递方法、服务器和系统
CN105556929A (zh) 在云计算系统中运行应用的网络元件和方法
WO2021185083A1 (zh) Vnf实例化方法及装置
WO2019047835A1 (zh) 虚拟网络功能的实例化方法
WO2019062995A1 (zh) 网络管理方法、设备及系统
WO2021254001A1 (zh) 会话建立方法、装置、系统及计算机存储介质
WO2015043679A1 (en) Moving stateful applications
WO2021103657A1 (zh) 网络操作方法、装置、设备和存储介质
WO2021175105A1 (zh) 连接方法、装置、设备和存储介质
EP4083795A1 (en) Method for deploying virtual machine, and related apparatus
CN108886476B (zh) 虚拟交换机数据平面和数据平面迁移的多个提供器框架
WO2022028092A1 (zh) 一种vnf实例化的方法和装置
WO2021022947A1 (zh) 一种部署虚拟机的方法及相关装置
WO2020220937A1 (zh) 一种安全策略管理方法及装置
WO2022089645A1 (zh) 通信方法、装置、设备、系统及计算机可读存储介质
WO2023035777A1 (zh) 网络配置方法、代理组件、控制器、电子设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18758199

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18758199

Country of ref document: EP

Kind code of ref document: A1