WO2019179453A1 - Virtual machine creation method and apparatus - Google Patents

Virtual machine creation method and apparatus

Info

Publication number
WO2019179453A1
WO2019179453A1 · PCT/CN2019/078813 · CN2019078813W
Authority
WO
WIPO (PCT)
Prior art keywords
network card
computing node
virtual machine
virtual
resource pool
Prior art date
Application number
PCT/CN2019/078813
Other languages
English (en)
French (fr)
Inventor
刘铁声
管延杰
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to ES19771706T (ES2945218T3)
Priority to EP19771706.9A (EP3761170B1)
Publication of WO2019179453A1
Priority to US17/026,767 (US11960915B2)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 Partitioning or combining of resources
    • G06F9/5077 Logical partitioning of resources; Management or configuration of virtualized resources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45562 Creating, deleting, cloning virtual machine instances
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45595 Network integration; Enabling network access in virtual machine instances
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00 Indexing scheme relating to G06F9/00
    • G06F2209/50 Indexing scheme relating to G06F9/50
    • G06F2209/5011 Pool

Definitions

  • the present disclosure relates to the field of cloud computing technologies, and in particular, to a virtual machine creation method and apparatus.
  • SR-IOV: Single-Root I/O Virtualization
  • PCIe: Peripheral Component Interconnect Express
  • SR-IOV technology has been widely applied to cloud platforms.
  • A physical network card supporting SR-IOV technology is configured on a computing node of the cloud platform, so that when the cloud platform creates a virtual machine, the physical network card supporting SR-IOV can be used to generate virtual functions (VF, Virtual Function), and the generated VFs are used as the virtual network cards of the virtual machine on the computing node.
  • a physical network card supporting SR-IOV technology can usually virtualize multiple VFs.
  • In the related art, when an upper-layer service needs the cloud platform to provide a virtual machine, the user also has to select a suitable computing node among the multiple computing nodes managed by the cloud platform on which to create it. Because different virtual machines have different performance requirements, for example some virtual machines have high bandwidth and latency requirements, running such virtual machines on the same computing node may leave the computing node short of resources. Performing complex resource planning to cope with every situation greatly increases the complexity of service provisioning and reduces system performance.
  • the embodiment of the present disclosure provides a method and an apparatus for creating a virtual machine, which solves the problems of high complexity and low system performance when creating a virtual machine in the related art.
  • the technical solution is as follows:
  • According to a first aspect, a virtual machine creation method is provided. The method is applied to a cloud platform, each computing node of the cloud platform includes a network card resource pool, and the network card resource pool includes the physical network cards configured on that computing node. The method includes: receiving a virtual machine creation request, the request including parameter information of the virtual network cards occupied by the virtual machine to be created; acquiring current resource occupation information of the network card resource pool of each computing node in the cloud platform; determining, according to the parameter information and the current resource occupation information of each network card resource pool, a target computing node for virtual machine creation among at least one computing node of the cloud platform; and invoking the target computing node to create the virtual machine.
  • The method provided by the embodiments of the present disclosure pools the network resources on each computing node of the cloud platform and configures one network card resource pool for each computing node, so that the resources of all physical network cards configured on each computing node are scheduled in a unified way for use by upper-layer services. The cloud platform can therefore automatically schedule the virtual machine to be created onto a suitable computing node according to the parameter information of the virtual network cards it occupies and the current resource occupation information of each computing node's network card resource pool. As a result, the user does not need to select a suitable computing node on their own, the different performance requirements of different virtual machines are taken into account, and the situation in which several virtual machines with high bandwidth and latency requirements are placed on the same computing node and leave it short of resources is avoided, so the network resources of the computing nodes are used fully and reasonably. In addition, because the user no longer needs to perform complex resource planning to cope with every situation, the complexity of service provisioning is low and system performance is improved.
  • In a possible implementation, the parameter information includes the number of virtual network cards occupied by the virtual machine to be created, the virtual network card bandwidth, and the affinity information of the virtual network cards occupied by the virtual machine.
  • The affinity information indicates whether the different virtual network cards occupied by the same virtual machine come from the same physical network card. When the affinity information indicates affinity, the different virtual network cards occupied by the same virtual machine come from the same physical network card; when the affinity information indicates anti-affinity, they come from different physical network cards.
  • By specifying the affinity information, for example specifying that different virtual network cards come from the same physical network card, forwarding efficiency can be improved; if different virtual network cards are specified to come from different physical network cards, then even if one physical network card fails, the other virtual network cards of the virtual machine are not affected, which improves the reliability of the virtual machine. A small illustrative sketch of these parameters follows.
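  • For illustration only, the parameter information described above might be modeled as follows. This is a minimal Python sketch; the class and field names are hypothetical and not part of the disclosure.

```python
from dataclasses import dataclass
from enum import Enum

class Affinity(Enum):
    AFFINITY = "affinity"            # all virtual NICs come from the same physical NIC
    ANTI_AFFINITY = "anti-affinity"  # each virtual NIC comes from a different physical NIC

@dataclass
class VnicRequest:
    """Parameter information of the virtual NICs occupied by the VM to be created."""
    vnic_count: int            # number of virtual network cards
    vnic_bandwidth_mbps: int   # bandwidth requested per virtual network card
    affinity: Affinity         # affinity information

# Example: three 10M virtual NICs that must all come from the same physical NIC.
request = VnicRequest(vnic_count=3, vnic_bandwidth_mbps=10, affinity=Affinity.AFFINITY)
```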
  • In a possible implementation, the method further includes: acquiring network card information of each physical network card in the network card resource pool of each computing node; for each network card resource pool, determining, according to that network card information, the number of virtual network cards available on each physical network card in the pool and the available virtual network card bandwidth; using the number of available virtual network cards and the available virtual network card bandwidth of each physical network card in the pool as the resource occupation information of the network card resource pool; and storing the resource occupation information of the network card resource pool in a cloud platform database.
  • In a possible implementation, the method further includes: after the target computing node is determined, recalculating the resource occupation information of the target computing node's network card resource pool based on the parameter information, and storing the updated resource occupation information of the target computing node in the cloud platform database.
  • In a possible implementation, determining, according to the parameter information and the current resource occupation information of each network card resource pool, the target computing node among at least one computing node of the cloud platform includes: acquiring the currently available central processing unit (CPU) resources and currently available memory resources of each computing node of the cloud platform; determining candidate computing nodes among the at least one computing node according to those resources; and determining the target computing node among the candidate computing nodes according to the parameter information and the current resource occupation information of each network card resource pool.
  • In a possible implementation, the method further includes: after the virtual machine is created, if the current bandwidth of any virtual network card occupied by the virtual machine is greater than the target bandwidth specified in the parameter information, limiting the bandwidth occupied by that virtual network card to within the target bandwidth.
  • On the basis of constructing the network card resource pool, the embodiments of the present disclosure also support setting a bandwidth quality of service (QoS) for the virtual network cards occupied by the created virtual machine, ensuring that bandwidth is used reasonably between virtual machines and between the internal network cards of a virtual machine, so that bandwidth resources are not preempted from one another to the detriment of upper-layer services.
  • According to a second aspect, a virtual machine creation apparatus is provided, and the apparatus is configured to perform the virtual machine creation method described in the first aspect.
  • According to a third aspect, a storage medium is provided. The storage medium stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the virtual machine creation method described in the first aspect.
  • According to a fourth aspect, a computer program product containing instructions is provided which, when run on a computer, enables the computer to perform the virtual machine creation method described in the first aspect.
  • According to a fifth aspect, a cloud platform is provided. The cloud platform includes the virtual machine creation apparatus, and the virtual machine creation apparatus is configured to perform the virtual machine creation method described in the first aspect.
  • FIG. 1A is a structural diagram of a cloud platform involved in a virtual machine creation method according to an embodiment of the present disclosure
  • FIG. 1B is an overall flow description of a virtual machine creation method according to an embodiment of the present disclosure
  • FIG. 2 is a flowchart of a method for constructing a network card resource pool according to an embodiment of the present disclosure
  • FIG. 3 is a flowchart of a method for creating a virtual machine according to an embodiment of the present disclosure
  • FIG. 4 is an architectural diagram of a cloud platform taking an open source OpenStack as an example according to an embodiment of the present disclosure
  • FIG. 5 is a schematic structural diagram of a virtual machine creation apparatus according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of a virtual machine creation apparatus according to an embodiment of the present disclosure.
  • SR-IOV: also known as hard pass-through technology, a technology that virtualizes a single PCIe device into multiple independent PCIe devices from the perspective of upper-layer services. Hard pass-through technology emerged largely to meet the high-bandwidth and low-latency requirements of applications. For example, with the rapid development of cloud computing, more and more applications are deployed on cloud platforms, and some of them have strict requirements on bandwidth and latency, so SR-IOV technology was introduced on the cloud platform to meet these requirements.
  • The channels virtualized by SR-IOV fall into two categories: Physical Functions (PF) and Virtual Functions (VF).
  • A PF is a complete PCIe device with comprehensive management and configuration functions. When the hypervisor of a computing node on the cloud platform identifies a physical network card that supports SR-IOV, it manages and configures all input/output (I/O) resources of that card through the PF.
  • The hypervisor is the core of virtualization technology; it is an intermediate software layer running between the underlying physical devices and the operating systems, allowing multiple operating systems and applications to share the hardware.
  • A VF is a simplified PCIe device that contains only I/O functions, so an SR-IOV capable physical network card cannot be managed through a VF. All VFs are derived from the PF, and one SR-IOV capable physical network card can generate multiple VFs, for example 256 VFs.
  • In the embodiments of the present disclosure, the VFs generated by a physical network card are used as the virtual network cards of the virtual machines created on the computing node. A brief sketch of how VFs are typically generated on a Linux host is given below.
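  • The usual mechanism by which a physical network card exposes VFs is the standard Linux SR-IOV sysfs interface. The snippet below is only a minimal sketch of that mechanism, assuming a Linux host, an SR-IOV capable device named `eth0`, and sufficient privileges; it is not part of the disclosed method.

```python
from pathlib import Path

def enable_vfs(pf_name: str, num_vfs: int) -> int:
    """Enable `num_vfs` virtual functions on physical function `pf_name`
    through the standard Linux SR-IOV sysfs interface."""
    device = Path(f"/sys/class/net/{pf_name}/device")
    total = int((device / "sriov_totalvfs").read_text())  # hardware limit, e.g. 256
    if num_vfs > total:
        raise ValueError(f"{pf_name} supports at most {total} VFs")
    (device / "sriov_numvfs").write_text("0")              # reset before re-provisioning
    (device / "sriov_numvfs").write_text(str(num_vfs))
    return num_vfs

# Example: carve 8 VFs out of the physical NIC eth0 (hypothetical device name).
# enable_vfs("eth0", 8)
```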
  • FIG. 1A is a structural diagram of a cloud platform involved in a virtual machine creation method according to an embodiment of the present disclosure.
  • Referring to FIG. 1A, the embodiments of the present disclosure implement SR-IOV resource cloudification: the multiple physical network cards of each computing node on the cloud platform form an SR-IOV resource pool for use by upper-layer services. In this document the SR-IOV resource pool is also called a network card (NIC) resource pool.
  • By configuring the network card resource pool, the embodiments of the present disclosure implement unified management of the multiple physical network cards on each computing node, and the number of virtual network cards available on those physical network cards and the available virtual network card bandwidth are scheduled in a unified way. For example, in FIG. 1A, physical network card 1, physical network card 2, physical network card 3, and physical network card 4 form one network card resource pool.
  • It should be noted that there may be multiple computing nodes on the cloud platform; only one computing node is shown in FIG. 1A as an example.
  • Computing nodes other than the one shown in FIG. 1A have a configuration similar to that of the computing node shown in FIG. 1A.
  • In addition, each computing node on the cloud platform may be configured with multiple physical network cards supporting SR-IOV.
  • Each physical network card is connected to a different physical network plane, one physical network card can virtualize multiple VFs, and the physical network cards are independent of one another.
  • The number of physical network cards configured on one computing node may be one or more; FIG. 1A shows four physical network cards only as an example, and the embodiments of the present disclosure do not specifically limit the number of physical network cards configured on one computing node. The steps for constructing the network card resource pool on any computing node are described below with reference to FIG. 1A.
  • Step A: As shown in FIG. 1A, a configuration module is provided on the cloud platform, and the configuration module is used to configure the network card resource pool on the computing nodes.
  • Specifically, the configuration module invokes the cloud proxy module of each computing node, and the cloud proxy module of each computing node then collects the network card information of each physical network card configured on that computing node.
  • the network card information includes, but is not limited to, a network card model, a network card chip model, a network card bandwidth, and the like, which are not specifically limited in this embodiment of the present disclosure.
  • Step B: After obtaining the network card information of the physical network cards of each computing node, the configuration module determines the resources available in the network card resource pool of each computing node according to that network card information, and stores the available resources of each network card resource pool in the cloud platform database.
  • the configuration module can determine the number of virtual network cards that can be generated by the physical network card according to the network card model of the physical network card and the network card chip model, and determine the virtual network card bandwidth available to the physical network card according to the network card bandwidth.
  • the available resources of a network card resource pool include, but are not limited to, the number of virtual network cards available in the physical network card of the network card resource pool, the available virtual network card bandwidth, and the like.
  • the embodiments of the present disclosure store the resources available in each network card resource pool in the cloud platform database, so as to provide SR-IOV resource cloudization data for each computing node, which is used to support the scheduling of the virtual machine.
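  • For illustration, the available resources of a network card resource pool described in step B might be derived and recorded as follows. This is a hedged Python sketch; the lookup table and the record layout are assumptions made for the example, not part of the disclosure.

```python
# Hypothetical lookup: (nic_model, chip_model) -> number of VFs the card can generate.
VF_CAPACITY_TABLE = {
    ("NIC-A", "CHIP-X"): 64,
    ("NIC-B", "CHIP-Y"): 128,
}

def build_pool_record(node_name, nic_info_list):
    """Turn the NIC information collected by the cloud proxy module into the
    resource occupation information of this node's NIC resource pool."""
    pool = {"node": node_name, "physical_nics": []}
    for nic in nic_info_list:  # each item: {"name", "model", "chip", "bandwidth_mbps"}
        pool["physical_nics"].append({
            "name": nic["name"],
            # NIC model + chip model determine how many VFs the card can generate.
            "available_vfs": VF_CAPACITY_TABLE.get((nic["model"], nic["chip"]), 0),
            # NIC bandwidth determines the virtual NIC bandwidth available on this card.
            "available_bandwidth_mbps": nic["bandwidth_mbps"],
        })
    return pool  # stored in the cloud platform database by the configuration module
```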
  • The first point to be noted is that the embodiments of the present disclosure also support storing the network card information obtained by the configuration module in the cloud platform database. After a network card resource pool has been deployed on the cloud platform, if the pool is subsequently expanded or reduced or its hardware is changed, the configuration on the configuration module needs to be refreshed to update the information, stored in the cloud platform database, about the resources available in that network card resource pool.
  • The second point to be noted is that, after selecting the best computing node for creating the virtual machine, the scheduling module also updates the currently available resources of that computing node's network card resource pool according to the parameter information of the virtual network cards occupied by the created virtual machine, and stores the updated data in the cloud platform database for the next round of SR-IOV resource scheduling.
  • The parameter information includes, but is not limited to, the number of virtual network cards occupied by the virtual machine to be created, the virtual network card bandwidth, and the affinity information of the virtual network cards occupied by the virtual machine to be created. The use of any network card resource pool on the cloud platform can be summarized as the following steps.
  • Step C As shown in FIG. 1A, the scheduling module provided by the embodiment of the present disclosure includes a first scheduling module and a second scheduling module, which are used for SR-IOV resource scheduling.
  • the embodiment of the present disclosure utilizes the first scheduling module and the second scheduling module in FIG. 1A to comprehensively select the best computing node for virtual machine creation among the plurality of computing nodes provided by the cloud platform.
  • Step D As shown in FIG. 1A, the user specifies parameter information of the virtual network card occupied by the virtual machine to be created through the input interface.
  • the input interface may be a Command Line Interface (CLI) or a Graphical User Interface (GUI), etc., which is not specifically limited in this embodiment of the present disclosure.
  • The number of virtual network cards indicates how many virtual network cards the virtual machine to be created needs to occupy; the virtual network card bandwidth indicates how much bandwidth each virtual network card has; and the affinity information of the virtual network cards indicates whether the different virtual network cards occupied by the same virtual machine come from the same physical network card. To this end, the embodiments of the present disclosure introduce the concepts of affinity and anti-affinity of virtual network cards: affinity means that the different virtual network cards occupied by the same virtual machine come from the same physical network card, and anti-affinity means that they come from different physical network cards.
  • Referring to FIG. 1B, the steps of creating a virtual machine based on the network card resource pool shown in FIG. 1A and configured on the computing node are described below.
  • Step 101: The user side initiates an authentication request through the input interface to obtain valid authentication information, sends a virtual machine creation request to the application programming interface (API) module through the input interface, and specifies the parameter information of the virtual network cards occupied by the virtual machine to be created.
  • In addition to the above parameter information, the CPU resource information, memory resource information, and the like of the virtual machine to be created may also be specified, which is not specifically limited in the embodiments of the present disclosure.
  • Step 102 After receiving the virtual machine creation request, the API module initializes the corresponding virtual machine detailed information in the cloud platform database.
  • the virtual machine detailed information includes the foregoing parameter information.
  • the virtual machine detailed information may further include CPU resource information, memory resource information, and the like of the virtual machine to be created, which is not specifically limited in this embodiment of the present disclosure.
  • Step 103 The scheduling module selects an optimal computing node for creating a virtual machine based on the foregoing parameter information and current usage of each network card resource pool.
  • Step 104 Call a cloud proxy module on the computing node to create a virtual machine.
  • Step 105 The cloud proxy module on the computing node invokes an execution module to obtain virtual machine detailed information for the virtual machine to be created from the cloud platform database.
  • Step 106: The cloud proxy module obtains the image information required to create the virtual machine from the image module according to the virtual machine detailed information.
  • Step 107 The cloud proxy module acquires network information required to create a virtual machine from the network module according to the virtual machine detailed information.
  • the user can also specify a physical network card to create a VF in the virtual machine creation request as the virtual network card of the virtual machine to be created.
  • the embodiment of the present disclosure may also limit the quality of service (QoS) of the virtual network card. For details, refer to the following.
  • Step 108 The cloud proxy module acquires, according to the virtual machine detailed information, the storage information required to create the virtual machine from the storage module.
  • Step 109: After the cloud proxy module has prepared the various resources for creating the virtual machine, it calls the hypervisor on the computing node to create the virtual machine.
  • Step 1010 The cloud proxy module returns a virtual machine creation result to the user side.
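  • Putting steps 101 through 1010 together, the control flow on the cloud platform side can be summarized roughly as below. This is an illustrative Python-style sketch with hypothetical module objects and method names; authentication and error handling are omitted and nothing here is taken from the disclosure itself.

```python
def create_vm(api, scheduler, db, creation_request):
    """Rough end-to-end flow of FIG. 1B (steps 102-1010), for illustration only."""
    vm_details = api.init_vm_details(db, creation_request)            # step 102
    target_node = scheduler.select_node(vm_details.vnic_params,
                                        db.nic_pool_usage())          # step 103
    agent = target_node.cloud_proxy                                   # step 104
    details = agent.fetch_vm_details(db, vm_details.vm_id)            # step 105
    image = agent.fetch_image_info(details)                           # step 106
    network = agent.fetch_network_info(details)                       # step 107
    storage = agent.fetch_storage_info(details)                       # step 108
    vm = agent.hypervisor.create(details, image, network, storage)    # step 109
    return vm                                                         # step 1010: result returned to the user
```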
  • The NIC resource pool construction process provided by the embodiments of the present disclosure is described in detail below through an embodiment.
  • FIG. 2 is a flowchart of a method for constructing a network card resource pool according to an embodiment of the present disclosure. The method is applied to the cloud platform.
  • Each of the computing nodes of the cloud platform includes a network card resource pool, and the network card resource pool includes all physical network cards configured on the computing node.
  • Referring to FIG. 2, the method flow provided by the embodiment of the present disclosure includes:
  • 201: For any computing node on the cloud platform, the configuration module sends a call request to the cloud proxy module of that computing node.
  • The call request is used to call the computing node to collect its network card information, specifically the network card information of each physical network card in that computing node's network card resource pool.
  • 202: After the cloud proxy module receives the call request from the configuration module, the computing node obtains the network card information of each physical network card in its network card resource pool from the operating system.
  • 203: The cloud proxy module returns the network card information of each physical network card in the computing node's network card resource pool to the configuration module.
  • 204: The above steps are repeated. After obtaining the network card information of each computing node's network card resource pool, the configuration module determines the resources available in each network card resource pool according to the network card information of the physical network cards of each computing node, and stores the available resources of each pool in the cloud platform database as the resource occupation information of that pool.
  • The embodiments of the present disclosure also support storing the network card information obtained by the configuration module in the cloud platform database.
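  • As one possible realization of steps 201 to 203, the cloud proxy module could read the per-NIC information from the host operating system. The sketch below relies on the standard Linux sysfs files and is only an assumption about how the collection might be done; the mapping of PCI IDs to a card model is likewise an assumption.

```python
from pathlib import Path

def collect_nic_info(nic_names):
    """Collect, for each SR-IOV capable physical NIC on this compute node,
    the information returned to the configuration module in step 203."""
    info = []
    for name in nic_names:
        net = Path(f"/sys/class/net/{name}")
        dev = net / "device"
        info.append({
            "name": name,
            "bandwidth_mbps": int((net / "speed").read_text()),     # NIC bandwidth (link speed)
            "max_vfs": int((dev / "sriov_totalvfs").read_text()),   # VFs the card can generate
            "vendor": (dev / "vendor").read_text().strip(),         # PCI IDs, stand-in for the card model
            "device_id": (dev / "device").read_text().strip(),      # and the NIC chip model
        })
    return info
```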
  • The method provided by the embodiments of the present disclosure pools the network resources on each computing node of the cloud platform and configures one network card resource pool for each computing node, so that the resources of all physical network cards configured on each computing node are scheduled in a unified way for use by upper-layer services. The cloud platform can therefore automatically schedule the virtual machine to be created onto a suitable computing node according to the parameter information of the virtual network cards it occupies and the current resource occupation information of each computing node's network card resource pool. As a result, the user does not need to select a suitable computing node on their own, the different performance requirements of different virtual machines are taken into account, and the situation in which several virtual machines with high bandwidth and latency requirements are placed on the same computing node and leave it short of resources is avoided, so the network resources of the computing nodes are used fully and reasonably. In addition, because the user no longer needs to perform complex resource planning to cope with every situation, the complexity of service provisioning is low and system performance is improved.
  • FIG. 3 is a flowchart of a method for creating a virtual machine according to an embodiment of the present disclosure. The method is applied to the cloud platform, and each of the computing nodes of the cloud platform includes a network card resource pool, and the network card resource pool includes all physical network cards configured on the computing node.
  • Referring to FIG. 3, the method flow provided by the embodiment of the present disclosure includes:
  • 301: The API module of the cloud platform receives a virtual machine creation request, where the virtual machine creation request includes parameter information of the virtual network cards occupied by the virtual machine to be created.
  • the virtual machine creation request received by the API module is initiated by the user through an input interface such as CLI/GUI.
  • the virtual machine creation request carries the parameter information about the virtual network card occupied by the virtual machine to be created, such as the number of virtual network cards, the virtual network card bandwidth, and the affinity information of the virtual network card. No specific limitation is made.
  • It should be noted that, as described in step 102 above, after receiving the virtual machine creation request, the API module also stores the parameter information in the cloud platform database; details are not repeated here.
  • 302: The API module invokes the scheduling module of the cloud platform, and the scheduling module acquires the current resource occupation information of the network card resource pool of each computing node in the cloud platform.
  • the scheduling module may obtain current resource occupation information of the network card resource pool of each computing node from the cloud platform database.
  • the current resource occupation information of the NIC resource pool includes, but is not limited to, the number of virtual NICs currently available and the available virtual NIC bandwidth of each physical NIC in the resource pool.
  • 303: The scheduling module of the cloud platform determines, according to the parameter information and the resource occupation information of each network card resource pool, the target computing node for virtual machine creation among at least one computing node of the cloud platform.
  • the scheduling module provided by the embodiment of the present disclosure is divided into a first scheduling module and a second scheduling module, where the first scheduling module and the second scheduling module can perform screening of computing nodes on two different levels.
  • In addition to the above parameter information, the first scheduling module may acquire the currently available CPU resources and currently available memory resources of each computing node on the cloud platform, and then, according to each computing node's currently available CPU and memory resources, preliminarily screen out candidate computing nodes among the at least one computing node of the cloud platform.
  • On the basis of the candidate computing nodes, the second scheduling module may further screen out the best computing node for creating the virtual machine according to the parameter information. That is, the second scheduling module determines, according to the parameter information and the current resource occupation information of each network card resource pool, the target computing node among the candidate computing nodes. For example, if the current resource occupation information of a network card resource pool can meet the requirements of the parameter information, the corresponding node can be used as the target computing node.
  • When screening the best computing node among the candidate computing nodes, the candidate nodes may be weighted and ranked according to the current resource occupation information of each network card resource pool; for example, the more available resources a computing node has, the larger its weight.
  • The best computing node is then further screened according to the parameter information of the virtual machine to be created. For example, even if the number of virtual network cards and the virtual network card bandwidth of a network card resource pool meet the requirements, the embodiments of the present disclosure further check whether the network card resource pool meets the affinity requirement of the virtual network cards occupied by the virtual machine to be created.
  • For example, if the parameter information indicates three virtual network cards, each with a bandwidth of 10M, and affinity among the virtual network cards, and a physical network card in a certain network card resource pool currently has four available virtual network cards and 40M of available virtual network card bandwidth, the corresponding node can be determined as the target computing node.
  • In another embodiment, after determining the target computing node, the scheduling module may also recalculate the resource occupation information of the target computing node's network card resource pool based on the parameter information of the virtual network cards occupied by the virtual machine to be created, and store the updated resource occupation information of the target computing node in the cloud platform database.
  • Continuing the above example, the virtual machine to be created occupies three virtual network cards and 30M of bandwidth of the target computing node's network card resource pool, so the remaining resources of that pool are only one virtual network card from one physical network card and 10M of bandwidth. A minimal sketch of this selection and update logic is given below.
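  • The two-level scheduling of step 303 and the subsequent resource deduction could be sketched roughly as follows. Data layouts, field names, and the weighting rule are assumptions made for the illustration, not the disclosed implementation.

```python
def select_target_node(nodes, req):
    """First filter by CPU/memory (first scheduling module), then check each
    candidate's NIC resource pool against the vNIC parameters (second module)."""
    candidates = [n for n in nodes
                  if n["free_cpu"] >= req["cpu"] and n["free_mem"] >= req["mem"]]
    # Weight candidates: nodes with more available pool resources rank higher.
    candidates.sort(key=lambda n: sum(p["available_vfs"] for p in n["nic_pool"]),
                    reverse=True)
    for node in candidates:
        if pick_physical_nics(node["nic_pool"], req) is not None:
            return node
    return None

def pick_physical_nics(pool, req):
    """Return a placement honouring the affinity information, or None."""
    need, bw = req["vnic_count"], req["vnic_bandwidth_mbps"]
    if req["affinity"] == "affinity":            # all vNICs from one physical NIC
        for nic in pool:
            if nic["available_vfs"] >= need and nic["available_bandwidth_mbps"] >= need * bw:
                return [nic] * need
    else:                                        # anti-affinity: one vNIC per physical NIC
        usable = [nic for nic in pool
                  if nic["available_vfs"] >= 1 and nic["available_bandwidth_mbps"] >= bw]
        if len(usable) >= need:
            return usable[:need]
    return None

def deduct(placement, bw):
    """Recalculate the pool's resource occupation info after the VM is placed
    (e.g. 3 vNICs x 10M leave 1 VF and 10M on a 4-VF / 40M physical NIC)."""
    for nic in placement:
        nic["available_vfs"] -= 1
        nic["available_bandwidth_mbps"] -= bw
```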
  • 304: The scheduling module invokes the target computing node, and the target computing node creates the virtual machine.
  • the cloud proxy module on the target computing node may be invoked to create a virtual machine.
  • The cloud proxy module invokes the execution module to obtain the virtual machine detailed information of the virtual machine to be created from the cloud platform database, and the execution module obtains that information from the cloud platform database and provides it to the cloud proxy module.
  • The cloud proxy module then, according to the virtual machine detailed information, obtains the image information required to create the virtual machine from the image module, the network information from the network module, and the storage information from the storage module.
  • After the cloud proxy module has prepared the various resources for creating the virtual machine, the hypervisor on the target computing node can be called to create the virtual machine.
  • In another embodiment, after the virtual machine is created, the embodiments of the present disclosure also set a bandwidth QoS for the virtual network cards of the created virtual machine.
  • In detail, if the current bandwidth of any virtual network card occupied by the created virtual machine is greater than the bandwidth initially specified by the user, the bandwidth occupied by that virtual network card is limited to within the initially specified bandwidth, so as to ensure that bandwidth is used reasonably between virtual machines and between the internal network cards of a virtual machine, and to avoid bandwidth resources being preempted from one another to the detriment of upper-layer services. One possible way to realize such a limit is sketched below.
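  • One common way to enforce a per-virtual-NIC bandwidth cap on an SR-IOV VF is the iproute2 VF rate-limit option. The sketch below assumes a Linux host with an iproute2 version that supports `max_tx_rate`, and hypothetical device names; it illustrates the idea rather than the disclosed implementation.

```python
import subprocess

def limit_vf_bandwidth(pf_name: str, vf_index: int, target_mbps: int) -> None:
    """Cap the transmit rate of one VF (one virtual NIC of the VM) so it cannot
    exceed the target bandwidth specified in the parameter information."""
    subprocess.run(
        ["ip", "link", "set", "dev", pf_name, "vf", str(vf_index),
         "max_tx_rate", str(target_mbps)],
        check=True,
    )

# Example: keep VF 0 of physical NIC eth0 (hypothetical names) within 10 Mbit/s.
# limit_vf_bandwidth("eth0", 0, 10)
```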
  • In summary, the embodiments of the present disclosure pool the network resources on each computing node of the cloud platform and configure one network card resource pool for each computing node, so that the resources of all physical network cards configured on each computing node are scheduled in a unified way for use by upper-layer services. The cloud platform can therefore automatically schedule the virtual machine to be created onto a suitable computing node according to the parameter information of the virtual network cards it occupies and the current resource occupation information of each computing node's network card resource pool. As a result, the user does not need to select a suitable computing node on their own, the different performance requirements of different virtual machines are taken into account, and the situation in which several virtual machines with high bandwidth and latency requirements are placed on the same computing node and leave it short of resources is avoided, so the network resources of the computing nodes are used fully and reasonably. In addition, because the user no longer needs to perform complex resource planning to cope with every situation, the complexity of service provisioning is low and system performance is improved.
  • On the basis of constructing the network card resource pool, the embodiments of the present disclosure also support setting a bandwidth QoS for the virtual network cards occupied by the created virtual machine, ensuring that bandwidth is used reasonably between virtual machines and between the internal network cards of a virtual machine, so that bandwidth resources are not preempted from one another to the detriment of upper-layer services.
  • On the basis of constructing the network card resource pool, the embodiments of the present disclosure also introduce the affinity and anti-affinity of virtual network cards.
  • By specifying the affinity information, for example specifying that different virtual network cards come from the same physical network card, forwarding efficiency can be improved; if different virtual network cards are specified to come from different physical network cards, then even if one physical network card fails, the other virtual network cards of the virtual machine are not affected, which improves the reliability of the virtual machine.
  • In another embodiment, taking open-source OpenStack as an example, each module in the system architecture diagram shown in FIG. 1 can be instantiated as the OpenStack components shown in FIG. 4.
  • Keystone: equivalent to the authentication module in FIG. 1. In OpenStack, Keystone is the project name of OpenStack Identity and authenticates all other OpenStack projects.
  • The service provides token, policy, and catalog capabilities through the OpenStack API.
  • Glance: equivalent to the image module in FIG. 1; it provides query, upload, and download services for virtual machine images.
  • Glance provides a RESTful API for querying the metadata of virtual machine images and obtaining image content.
  • Through Glance, virtual machine images can be stored on a variety of storage backends, such as simple file storage or object storage.
  • Neutron: equivalent to the network module in FIG. 1; it provides network support for the entire OpenStack environment, including Layer 2 switching, Layer 3 routing, load balancing, firewalls, and VPNs.
  • Cinder: equivalent to the storage module in FIG. 1; its core function is volume management, allowing volumes, volume types, volume snapshots, and volume backups to be handled. It provides a unified interface for different back-end storage devices, and different block-device service vendors implement their driver support in Cinder to integrate with OpenStack.
  • Nova is the project responsible for compute resource management in OpenStack. It is mainly responsible for virtual machine lifecycle management and the lifecycle management of other compute resources, and includes several important components, such as Nova-api, Nova-conductor, Nova-scheduler, and Nova-compute. Among them, Nova-api performs, for example, parameter extraction, parameter verification, and data object operations, and is equivalent to the API module in FIG. 1.
  • Nova-conductor: implements complex operations and acts as the database access proxy for nova-compute.
  • Nova-scheduler: virtual machine placement scheduling, equivalent to the scheduling module in FIG. 1.
  • Nova-compute: compute node management and local execution of virtual machine lifecycle management actions, equivalent to the cloud proxy module in FIG. 1.
  • FIG. 5 is a schematic structural diagram of a virtual machine creation apparatus according to an embodiment of the present disclosure.
  • the device is applied to the cloud platform, wherein each computing node of the cloud platform includes a network card resource pool, and one network card resource pool includes each physical network card configured on the computing node.
  • the device includes:
  • the receiving module 501 is configured to receive a virtual machine creation request, where the virtual machine creation request includes parameter information of a virtual network card occupied by the virtual machine to be created;
  • the first obtaining module 502 is configured to acquire current resource occupation information of a network card resource pool of each computing node in the cloud platform;
  • a first determining module 503, configured to determine, according to the parameter information and current resource occupation information of each network card resource pool, a target computing node for performing virtual machine creation in at least one computing node of the cloud platform;
  • a creating module 504 is configured to invoke the target computing node to create the virtual machine.
  • The apparatus provided by the embodiments of the present disclosure pools the network resources on each computing node of the cloud platform and configures one network card resource pool for each computing node, so that the resources of all physical network cards configured on each computing node are scheduled in a unified way for use by upper-layer services. The cloud platform can therefore automatically schedule the virtual machine to be created onto a suitable computing node according to the parameter information of the virtual network cards it occupies and the current resource occupation information of each computing node's network card resource pool. As a result, the user does not need to select a suitable computing node on their own, the different performance requirements of different virtual machines are taken into account, and the situation in which several virtual machines with high bandwidth and latency requirements are placed on the same computing node and leave it short of resources is avoided, so the network resources of the computing nodes are used fully and reasonably. In addition, because the user no longer needs to perform complex resource planning to cope with every situation, the complexity of service provisioning is low and system performance is improved.
  • the parameter information includes the number of virtual network cards occupied by the virtual machine to be created, the virtual network card bandwidth, and the affinity information of the virtual network card occupied by the virtual machine;
  • the affinity information is used to indicate whether different virtual network cards occupied by the same virtual machine are from the same physical network card.
  • the apparatus further comprises:
  • a second acquiring module configured to acquire network card information of each physical network card in a network card resource pool of each computing node
  • a second determining module, configured to determine, for each network card resource pool and according to the network card information of each physical network card in the pool, the number of virtual network cards available on each physical network card in the pool and the available virtual network card bandwidth, and to use these as the resource occupation information of the network card resource pool;
  • a storage module configured to store resource occupation information of the network card resource pool to a cloud platform database
  • the second obtaining module is configured to obtain, from the cloud platform database, current resource occupation information of a network card resource pool of each computing node.
  • the apparatus further comprises:
  • the second determining module is further configured to: after determining the target computing node, recalculating resource occupancy information of the network card resource pool of the target computing node based on the parameter information;
  • the storage module is further configured to store the updated resource occupation information of the target computing node into the cloud platform database.
  • In another embodiment, the first determining module is further configured to acquire the currently available CPU resources and currently available memory resources of each computing node of the cloud platform; determine candidate computing nodes among the at least one computing node of the cloud platform according to each computing node's currently available CPU and memory resources; and determine the target computing node among the candidate computing nodes according to the parameter information and the current resource occupation information of each network card resource pool.
  • the apparatus further comprises:
  • a processing module, configured to, after the virtual machine is created, limit the bandwidth occupied by any virtual network card occupied by the virtual machine to within the target bandwidth specified in the parameter information if that virtual network card's current bandwidth is greater than the target bandwidth.
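  • For illustration only, the module composition of FIG. 5 could be sketched as a thin class wiring the steps above together; the class, method, and attribute names are hypothetical.

```python
class VmCreationApparatus:
    """Mirror of modules 501-504: receive request, get pool usage, pick node, create."""
    def __init__(self, db, scheduler):
        self.db, self.scheduler = db, scheduler

    def handle(self, creation_request):
        params = creation_request["vnic_params"]               # receiving module 501
        pool_usage = self.db.current_pool_usage()              # first obtaining module 502
        target = self.scheduler.select(params, pool_usage)     # first determining module 503
        return target.cloud_proxy.create_vm(creation_request)  # creating module 504
```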
  • It should be noted that when the virtual machine creation apparatus provided in the foregoing embodiment creates a virtual machine, the division into the foregoing functional modules is used only as an example. In practical applications, the functions may be assigned to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above.
  • In addition, the virtual machine creation apparatus provided in the foregoing embodiment and the virtual machine creation method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not repeated here.
  • FIG. 6 is a schematic structural diagram of a virtual machine creation apparatus according to an embodiment of the present disclosure.
  • The apparatus 600 may vary greatly depending on configuration or performance, and may include one or more processors (central processing units, CPU) 601 and one or more memories 602, where the memory 602 stores at least one instruction, and the at least one instruction is loaded and executed by the processor 601 to implement the virtual machine creation method provided by the foregoing method embodiments.
  • the device may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output.
  • the device may also include other components for implementing the functions of the device, and details are not described herein.
  • a computer readable storage medium such as a memory including instructions executable by a processor in a terminal to perform the virtual machine creation method in the above embodiments.
  • the computer readable storage medium can be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device.
  • A person skilled in the art may understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing related hardware; the program may be stored in a computer-readable storage medium.
  • the storage medium mentioned may be a read only memory, a magnetic disk or an optical disk or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Stored Programmes (AREA)
  • Conveying And Assembling Of Building Elements In Situ (AREA)

Abstract

The present disclosure provides a virtual machine creation method and apparatus, belonging to the field of cloud computing technologies. The method is applied to a cloud platform, each computing node of the cloud platform includes a network card resource pool, and one network card resource pool includes the physical network cards configured on that computing node. The method includes: receiving a virtual machine creation request, the virtual machine creation request including parameter information of the virtual network cards occupied by the virtual machine to be created; acquiring current resource occupation information of the network card resource pool of each computing node in the cloud platform; determining, according to the parameter information and the current resource occupation information of each network card resource pool, a target computing node for virtual machine creation among at least one computing node of the cloud platform; and invoking the target computing node to create the virtual machine. The present disclosure not only pools the network resources of all physical network cards on each computing node, but also takes the performance requirements of different virtual machines into account when creating virtual machines, making full and reasonable use of the resources of the computing nodes.

Description

Virtual machine creation method and apparatus
Technical Field
The present disclosure relates to the field of cloud computing technologies, and in particular to a virtual machine creation method and apparatus.
Background
Single-Root I/O Virtualization (SR-IOV), also known as hard pass-through technology, is a technology that virtualizes a single Peripheral Component Interconnect Express (PCIe) device into multiple independent PCIe devices from the perspective of upper-layer services. At present, in the field of cloud computing, SR-IOV technology has been widely applied on cloud platforms. For example, a physical network card supporting SR-IOV is configured on a computing node of the cloud platform, so that when the cloud platform creates a virtual machine it can use that physical network card to generate virtual functions (VF, Virtual Function) and use the generated VFs as the virtual network cards of the virtual machine on the computing node. One physical network card supporting SR-IOV can usually virtualize multiple VFs.
In the related art, when an upper-layer service needs the cloud platform to provide a virtual machine, the user also has to select a suitable computing node among the multiple computing nodes managed by the cloud platform on which to create the virtual machine. Because different virtual machines have different performance requirements, for example some virtual machines have high bandwidth and latency requirements, running such virtual machines on the same computing node may leave the computing node short of resources, while performing complex resource planning to cope with every situation greatly increases the complexity of service provisioning and reduces system performance.
Summary
The embodiments of the present disclosure provide a virtual machine creation method and apparatus, which solve the problems of high complexity and low system performance encountered when creating virtual machines in the related art. The technical solution is as follows:
In a first aspect, a virtual machine creation method is provided. The method is applied to a cloud platform, each computing node of the cloud platform includes a network card resource pool, and the network card resource pool includes the physical network cards configured on that computing node. The method includes:
receiving a virtual machine creation request, the virtual machine creation request including parameter information of the virtual network cards occupied by the virtual machine to be created;
acquiring current resource occupation information of the network card resource pool of each computing node in the cloud platform;
determining, according to the parameter information and the current resource occupation information of each network card resource pool, a target computing node for virtual machine creation among at least one computing node of the cloud platform; and
invoking the target computing node to create the virtual machine.
The method provided by the embodiments of the present disclosure pools the network resources on each computing node of the cloud platform and configures one network card resource pool for each computing node, so that the resources of all physical network cards configured on each computing node are scheduled in a unified way for use by upper-layer services. The cloud platform can therefore automatically schedule the virtual machine to be created onto a suitable computing node according to the parameter information of the virtual network cards it occupies and the current resource occupation information of each computing node's network card resource pool. As a result, the user does not need to select a suitable computing node on their own, the different performance requirements of different virtual machines are taken into account, and the situation in which several virtual machines with high bandwidth and latency requirements are placed on the same computing node and leave it short of resources is avoided, so the network resources of the computing nodes are used fully and reasonably. In addition, because the user no longer needs to perform complex resource planning to cope with every situation, the complexity of service provisioning is low and system performance is improved.
In a first possible implementation of the first aspect, the parameter information includes the number of virtual network cards occupied by the virtual machine to be created, the virtual network card bandwidth, and affinity information of the virtual network cards occupied by the virtual machine;
where the affinity information is used to indicate whether the different virtual network cards occupied by the same virtual machine come from the same physical network card; when the affinity information indicates affinity, the different virtual network cards occupied by the same virtual machine come from the same physical network card; when the affinity information indicates anti-affinity, the different virtual network cards occupied by the same virtual machine come from different physical network cards.
By specifying the affinity information in the embodiments of the present disclosure, for example specifying that different virtual network cards come from the same physical network card, forwarding efficiency can be improved; if different virtual network cards are specified to come from different physical network cards, then even if one physical network card fails, the other virtual network cards of the virtual machine are not affected, which improves the reliability of the virtual machine.
With reference to the first aspect, in a second possible implementation of the first aspect, the method further includes:
acquiring network card information of each physical network card in the network card resource pool of each computing node;
for each network card resource pool, determining, according to the network card information of each physical network card in the pool, the number of virtual network cards available on each physical network card in the pool and the available virtual network card bandwidth;
using the number of virtual network cards available on each physical network card in the network card resource pool and the available virtual network card bandwidth as the resource occupation information of the network card resource pool; and
storing the resource occupation information of the network card resource pool in a cloud platform database;
where acquiring the current resource occupation information of the network card resource pool of each computing node in the cloud platform includes:
acquiring the current resource occupation information of the network card resource pool of each computing node from the cloud platform database.
With reference to the second possible implementation of the first aspect, in a third possible implementation of the first aspect, the method further includes:
after the target computing node is determined, recalculating the resource occupation information of the network card resource pool of the target computing node based on the parameter information; and
storing the updated resource occupation information of the target computing node in the cloud platform database.
With reference to the first aspect, in a fourth possible implementation of the first aspect, determining, according to the parameter information and the current resource occupation information of each network card resource pool, a target computing node for virtual machine creation among at least one computing node of the cloud platform includes:
acquiring the currently available central processing unit (CPU) resources and currently available memory resources of each computing node of the cloud platform;
determining candidate computing nodes among the at least one computing node of the cloud platform according to the currently available CPU resources and currently available memory resources of each computing node; and
determining the target computing node among the candidate computing nodes according to the parameter information and the current resource occupation information of each network card resource pool.
With reference to the first aspect, in a fifth possible implementation of the first aspect, the method further includes:
after the virtual machine is created, if the current bandwidth of any virtual network card occupied by the virtual machine is greater than the target bandwidth specified in the parameter information, limiting the bandwidth occupied by that virtual network card to within the target bandwidth.
On the basis of constructing the network card resource pool, the embodiments of the present disclosure also support setting a bandwidth quality of service (QoS) for the virtual network cards occupied by the created virtual machine, ensuring that bandwidth is used reasonably between virtual machines and between the internal network cards of a virtual machine, so that bandwidth resources are not preempted from one another to the detriment of upper-layer services.
In a second aspect, a virtual machine creation apparatus is provided, and the apparatus is configured to perform the virtual machine creation method described in the first aspect.
In a third aspect, a storage medium is provided. The storage medium stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the virtual machine creation method described in the first aspect.
In a fourth aspect, a computer program product containing instructions is provided, which, when run on a computer, enables the computer to perform the virtual machine creation method described in the first aspect.
In a fifth aspect, a cloud platform is provided. The cloud platform includes the virtual machine creation apparatus, and the virtual machine creation apparatus is configured to perform the virtual machine creation method described in the first aspect.
Brief Description of the Drawings
FIG. 1A is an architectural diagram of a cloud platform involved in a virtual machine creation method according to an embodiment of the present disclosure;
FIG. 1B is an overall flow description of a virtual machine creation method according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of a method for constructing a network card resource pool according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of a virtual machine creation method according to an embodiment of the present disclosure;
FIG. 4 is an architectural diagram of a cloud platform, taking open-source OpenStack as an example, according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a virtual machine creation apparatus according to an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of a virtual machine creation apparatus according to an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions, and advantages of the present disclosure clearer, the implementations of the present disclosure are described in further detail below with reference to the accompanying drawings.
Before the embodiments of the present disclosure are explained in detail, the terms involved in the embodiments are explained first.
SR-IOV: also known as hard pass-through technology, a technology that virtualizes a single PCIe device into multiple independent PCIe devices from the perspective of upper-layer services. Hard pass-through technology emerged largely to meet the high-bandwidth and low-latency requirements of applications. For example, with the rapid development of cloud computing, more and more applications are deployed on cloud platforms, and some applications have strict requirements on bandwidth and latency, so SR-IOV technology was introduced on the cloud platform to meet these high-bandwidth, low-latency requirements.
The channels virtualized by SR-IOV fall into two categories: Physical Functions (PF) and Virtual Functions (VF). A PF is a complete PCIe device with comprehensive management and configuration functions; when the hypervisor of a computing node on the cloud platform identifies a physical network card that supports SR-IOV, it manages and configures all input/output (I/O) resources of that card through the PF. The hypervisor is the core of virtualization technology; it is an intermediate software layer running between the underlying physical devices and the operating systems, allowing multiple operating systems and applications to share the hardware.
A VF is a simplified PCIe device that contains only I/O functions, so an SR-IOV capable physical network card cannot be managed through a VF. All VFs are derived from the PF, and one SR-IOV capable physical network card can generate multiple VFs, for example 256 VFs. In the embodiments of the present disclosure, the VFs generated by a physical network card are used as the virtual network cards of the virtual machines created on the computing node.
FIG. 1A is an architectural diagram of a cloud platform involved in a virtual machine creation method according to an embodiment of the present disclosure.
Referring to FIG. 1A, the embodiments of the present disclosure implement SR-IOV resource cloudification, that is, the multiple physical network cards of each computing node on the cloud platform form an SR-IOV resource pool for use by upper-layer services; in this document the SR-IOV resource pool is also called a network card resource pool. By configuring the network card resource pool, the embodiments of the present disclosure implement unified management of the multiple physical network cards on each computing node, and the number of virtual network cards available on the multiple physical network cards and the available virtual network card bandwidth are scheduled in a unified way. For example, in FIG. 1A, physical network card 1, physical network card 2, physical network card 3, and physical network card 4 form one network card resource pool.
It should be noted that there may be multiple computing nodes on the cloud platform; only one computing node is shown in FIG. 1A as an example. Computing nodes other than the one shown in FIG. 1A have a configuration similar to that of the computing node shown in FIG. 1A.
In addition, each computing node on the cloud platform may be configured with multiple physical network cards supporting SR-IOV. Each physical network card is connected to a different physical network plane, one physical network card can virtualize multiple VFs, and the physical network cards are independent of one another. Moreover, the number of physical network cards configured on one computing node may be one or more; FIG. 1A shows four physical network cards only as an example, and the embodiments of the present disclosure do not specifically limit the number of physical network cards configured on one computing node.
The steps for constructing the network card resource pool on any computing node of the cloud platform are described below with reference to FIG. 1A.
Step A. As shown in FIG. 1A, a configuration module is provided on the cloud platform, and the configuration module is used to configure the network card resource pool on the computing nodes.
Specifically, the configuration module invokes the cloud proxy module of each computing node, and the cloud proxy module of each computing node then collects the network card information of each physical network card configured on that computing node.
In the embodiments of the present disclosure, the network card information includes, but is not limited to, the network card model, the network card chip model, the network card bandwidth, and the like, which are not specifically limited here.
Step B. After obtaining the network card information of the physical network cards of each computing node, the configuration module determines the resources available in the network card resource pool of each computing node according to that network card information, and stores the available resources of each network card resource pool in the cloud platform database.
The configuration module can determine the number of virtual network cards a physical network card can generate from its network card model and network card chip model, and can determine the virtual network card bandwidth available on the physical network card from its network card bandwidth.
In the embodiments of the present disclosure, the resources available in a network card resource pool include, but are not limited to, the number of virtual network cards available on each physical network card in the pool, the available virtual network card bandwidth, and the like, which are not specifically limited here. In addition, the embodiments of the present disclosure store the resources available in each network card resource pool in the cloud platform database as the SR-IOV resource cloudification data of each computing node, which is used to support the scheduling of virtual machines.
The first point to be noted is that the embodiments of the present disclosure also support storing the network card information of the physical network cards obtained by the configuration module in the cloud platform database. After a network card resource pool has been deployed on the cloud platform, if the pool is subsequently expanded or reduced or its hardware is changed, the configuration on the configuration module needs to be refreshed to update the information, stored in the cloud platform database, about the resources available in that network card resource pool.
The second point to be noted is that, after selecting the best computing node for creating the virtual machine, the scheduling module also updates the currently available resources of that computing node's network card resource pool according to the parameter information of the virtual network cards occupied by the created virtual machine, and stores the updated data in the cloud platform database for the next round of SR-IOV resource scheduling.
The above parameter information includes, but is not limited to, the number of virtual network cards occupied by the virtual machine to be created, the virtual network card bandwidth, and the affinity information of the virtual network cards occupied by the virtual machine to be created.
The use of any network card resource pool on the cloud platform can be summarized as the following steps:
Step C. As shown in FIG. 1A, the scheduling module provided by the embodiments of the present disclosure includes a first scheduling module and a second scheduling module, which are used for SR-IOV resource scheduling.
That is, the embodiments of the present disclosure use the first scheduling module and the second scheduling module in FIG. 1A to jointly select, among the multiple computing nodes provided by the cloud platform, the best computing node for virtual machine creation.
Step D. As shown in FIG. 1A, the user specifies, through an input interface, the parameter information of the virtual network cards occupied by the virtual machine to be created.
The input interface may be a command line interface (CLI), a graphical user interface (GUI), or the like, which is not specifically limited in the embodiments of the present disclosure.
In the embodiments of the present disclosure, the number of virtual network cards indicates how many virtual network cards the virtual machine to be created needs to occupy; the virtual network card bandwidth indicates how much bandwidth each virtual network card has; and the affinity information of the virtual network cards indicates whether the different virtual network cards occupied by the same virtual machine come from the same physical network card. To this end, the embodiments of the present disclosure introduce the concepts of affinity and anti-affinity of virtual network cards: affinity means that the different virtual network cards occupied by the same virtual machine come from the same physical network card, and anti-affinity means that the different virtual network cards occupied by the same virtual machine come from different physical network cards.
Referring to FIG. 1B, the steps of creating a virtual machine based on the network card resource pool shown in FIG. 1A and configured on the computing node are described below.
Step 101. The user side initiates an authentication request through the input interface to obtain valid authentication information, sends a virtual machine creation request to the application programming interface (API) module through the input interface, and specifies the parameter information of the virtual network cards occupied by the virtual machine to be created.
In the embodiments of the present disclosure, in addition to the above parameter information, the CPU resource information, memory resource information, and the like of the virtual machine to be created may also be specified, which is not specifically limited here.
Step 102. After receiving the virtual machine creation request, the API module initializes the corresponding virtual machine detailed information in the cloud platform database.
The virtual machine detailed information includes the above parameter information. In addition, the virtual machine detailed information may also include the CPU resource information, memory resource information, and the like of the virtual machine to be created, which is not specifically limited in the embodiments of the present disclosure.
Step 103. Based on the above parameter information and the current usage of each network card resource pool, the scheduling module selects the best computing node for creating the virtual machine.
Step 104. The cloud proxy module on that computing node is called to create the virtual machine.
Step 105. The cloud proxy module on the computing node invokes the execution module to obtain the virtual machine detailed information of the virtual machine to be created from the cloud platform database.
Step 106. The cloud proxy module obtains the image information required to create the virtual machine from the image module according to the virtual machine detailed information.
Step 107. The cloud proxy module obtains the network information required to create the virtual machine from the network module according to the virtual machine detailed information.
It should be noted that the user may also specify, in the virtual machine creation request, the physical network card on which a VF is to be created as the virtual network card of the virtual machine to be created. In addition, the embodiments of the present disclosure may also limit the quality of service (QoS) of the virtual network cards; see below for details.
Step 108. The cloud proxy module obtains the storage information required to create the virtual machine from the storage module according to the virtual machine detailed information.
Step 109. After the cloud proxy module has prepared the various resources for creating the virtual machine, it calls the hypervisor on the computing node to create the virtual machine.
Step 1010. The cloud proxy module returns the virtual machine creation result to the user side.
The network card resource pool construction process provided by the embodiments of the present disclosure is described in detail below through an embodiment.
FIG. 2 is a flowchart of a method for constructing a network card resource pool according to an embodiment of the present disclosure. The method is applied to a cloud platform, each computing node of the cloud platform includes one network card resource pool, and the network card resource pool includes all physical network cards configured on that computing node. Referring to FIG. 2, the method flow provided by the embodiment of the present disclosure includes:
201. For any computing node on the cloud platform, the configuration module sends a call request to the cloud proxy module of that computing node.
The call request is used to call the computing node to collect its network card information, specifically the network card information of each physical network card in that computing node's network card resource pool.
202. After the cloud proxy module receives the call request from the configuration module, the computing node obtains the network card information of each physical network card in its network card resource pool from the operating system.
203. The cloud proxy module returns the network card information of each physical network card in the computing node's network card resource pool to the configuration module.
204. The above steps are repeated. After obtaining the network card information of each computing node's network card resource pool, the configuration module determines the resources available in each network card resource pool according to the network card information of each computing node's physical network cards, and stores the available resources of each pool in the cloud platform database as the resource occupation information of that pool.
The embodiments of the present disclosure also support storing the network card information obtained by the configuration module in the cloud platform database.
The method provided by the embodiments of the present disclosure pools the network resources on each computing node of the cloud platform and configures one network card resource pool for each computing node, so that the resources of all physical network cards configured on each computing node are scheduled in a unified way for use by upper-layer services. The cloud platform can therefore automatically schedule the virtual machine to be created onto a suitable computing node according to the parameter information of the virtual network cards it occupies and the current resource occupation information of each computing node's network card resource pool. As a result, the user does not need to select a suitable computing node on their own, the different performance requirements of different virtual machines are taken into account, and the situation in which several virtual machines with high bandwidth and latency requirements are placed on the same computing node and leave it short of resources is avoided, so the network resources of the computing nodes are used fully and reasonably. In addition, because the user no longer needs to perform complex resource planning to cope with every situation, the complexity of service provisioning is low and system performance is improved.
下面通过一个实施例对本公开实施例提供的虚拟机创建过程进行详细说明。
图3是本公开实施例提供的一种虚拟机创建方法的流程图。该方法应用于云平台,云平台的每一个计算节点上均包括一个网卡资源池,该网卡资源池包括本计算节点上配置的全部物理网卡,参见图3,本公开实施例提供的方法流程包括:
301、云平台的API模块接收虚拟机创建请求,该虚拟机创建请求中包括待创建的虚拟机所占用的虚拟网卡的参数信息。
如前文所述,API模块接收到的虚拟机创建请求是用户通过诸如CLI/GUI等输入界面发起的。该虚拟机创建请求中携带了由用户指定的有关于待创建虚拟机所占用的虚拟网卡的参数信息,比如虚拟网卡数量、虚拟网卡带宽以及虚拟网卡的亲和性信息,本公开实施例对此不进行具体限定。
需要说明的是,如前文步骤2所述,API模块在接收到上述虚拟机创建请求后,还会将上述参数信息存储至云平台数据库,此处不再赘述。
302. The API module invokes the scheduling module of the cloud platform, and the scheduling module obtains the current resource occupation information of the NIC resource pool of each compute node in the cloud platform.
In this embodiment of the present disclosure, the scheduling module may obtain the current resource occupation information of the NIC resource pool of each compute node from the cloud platform database. For a given NIC resource pool, its current resource occupation information includes, but is not limited to, the number of virtual NICs currently available on each physical NIC in the pool and the virtual NIC bandwidth currently available on each physical NIC in the pool.
303. The scheduling module of the cloud platform determines, from among at least one compute node of the cloud platform, a target compute node for creating the virtual machine according to the foregoing parameter information and the resource occupation information of each NIC resource pool.
The scheduling module provided in this embodiment of the present disclosure is divided into a first scheduling module and a second scheduling module, which filter compute nodes at two different levels.
The first scheduling module may, in addition to the foregoing parameter information, further obtain the currently available CPU resources and currently available memory resources of every compute node on the cloud platform, and then preliminarily screen out candidate compute nodes from among the at least one compute node of the cloud platform according to the currently available CPU resources and currently available memory resources of each compute node.
The second scheduling module may, on the basis of the candidate compute nodes, further screen out the optimal compute node for creating the virtual machine according to the foregoing parameter information. That is, the second scheduling module determines, from among the candidate compute nodes, the target compute node for creating the virtual machine according to the foregoing parameter information and the current resource occupation information of each NIC resource pool. For example, if the current resource occupation information of a NIC resource pool can satisfy the requirements of the foregoing parameter information, the compute node to which that pool belongs may be taken as the target compute node. When selecting the optimal compute node from among the candidate compute nodes, the compute nodes may first be weighted and sorted according to the current resource occupation information of each NIC resource pool, for example by assigning a larger weight to a compute node with more available resources, and the optimal compute node is then further selected according to the parameter information of the to-be-created virtual machine. For example, even if the number of virtual NICs and the virtual NIC bandwidth of a NIC resource pool meet the requirements, this embodiment of the present disclosure further checks whether the NIC resource pool also meets the affinity requirement of the virtual NICs to be occupied by the to-be-created virtual machine.
For example, assume the foregoing parameter information indicates three virtual NICs, a bandwidth of 10 Mbit/s per virtual NIC, and affinity between the virtual NICs. If a physical NIC in a NIC resource pool currently has four available virtual NICs and 40 Mbit/s of available virtual NIC bandwidth, the compute node to which that pool belongs can be determined as the target compute node. A minimal sketch of such a selection step follows.
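The sketch below reuses the illustrative VnicRequest/Affinity types shown earlier; the data layout of the pools and the weighting rule (more free resources, larger weight) are assumptions made for illustration, not the claimed algorithm:

    from typing import Optional

    def pick_target_node(nodes: dict, req: "VnicRequest") -> Optional[str]:
        """nodes maps a compute node name to its NIC resource pool, i.e. a list of
        records like {"pf": "enp3s0f0", "free_vfs": 4, "free_bw_mbps": 40}."""
        candidates = []
        for node, pool in nodes.items():
            if req.affinity is Affinity.AFFINITY:
                # all virtual NICs must come from a single physical NIC
                ok = any(pf["free_vfs"] >= req.vnic_count and
                         pf["free_bw_mbps"] >= req.vnic_count * req.vnic_bandwidth_mbps
                         for pf in pool)
            else:
                # anti-affinity: every virtual NIC must come from a different physical NIC
                usable = [pf for pf in pool
                          if pf["free_vfs"] >= 1 and pf["free_bw_mbps"] >= req.vnic_bandwidth_mbps]
                ok = len(usable) >= req.vnic_count
            if ok:
                weight = sum(pf["free_vfs"] for pf in pool)  # more free resources, larger weight
                candidates.append((weight, node))
        return max(candidates)[1] if candidates else None

Applied to the example above, a node whose pool holds a physical NIC with four free VFs and 40 Mbit/s of free bandwidth passes the affinity check and is therefore a valid target.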
In another embodiment, after the target compute node is determined, the scheduling module may further recompute the resource occupation information of the NIC resource pool of the target compute node based on the parameter information of the virtual NICs to be occupied by the to-be-created virtual machine, and store the updated resource occupation information of the target compute node in the cloud platform database.
Continuing with the above example, after the target compute node is determined, the to-be-created virtual machine occupies three virtual NICs and 30 Mbit/s of bandwidth from the NIC resource pool of the target compute node, so the resources remaining in that pool are only one virtual NIC and 10 Mbit/s of bandwidth on one physical NIC.
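The corresponding bookkeeping can be as simple as the following sketch (field names are illustrative); the updated record would then be written back to the cloud platform database:

    def commit_allocation(pf: dict, request: "VnicRequest") -> dict:
        """Deduct the allocated VFs and bandwidth from one physical NIC's free resources."""
        pf["free_vfs"] -= request.vnic_count
        pf["free_bw_mbps"] -= request.vnic_count * request.vnic_bandwidth_mbps
        return pf

    # e.g. {"free_vfs": 4, "free_bw_mbps": 40} -> {"free_vfs": 1, "free_bw_mbps": 10}
    commit_allocation({"free_vfs": 4, "free_bw_mbps": 40}, request)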
304. The scheduling module invokes the target compute node, and the target compute node creates the virtual machine.
In this embodiment of the present disclosure, once the target compute node is determined, the cloud agent module on the target compute node can be invoked to create the virtual machine. The cloud agent module on the target compute node first obtains the other resources required for creating the virtual machine. Specifically, the cloud agent module invokes the execution module to obtain the virtual machine details of the to-be-created virtual machine; the execution module then obtains those details from the cloud platform database and provides them to the cloud agent module. After that, the cloud agent module can, according to the virtual machine details, obtain the image information required for creating the virtual machine from the image module, the network information required for creating the virtual machine from the network module, and the storage information required for creating the virtual machine from the storage module. After the cloud agent module has prepared all the resources for creating the virtual machine, it invokes the hypervisor on the target compute node to create the virtual machine.
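Where the hypervisor is KVM managed through libvirt (one common possibility; the disclosure itself does not mandate a particular hypervisor), handing an SR-IOV VF to the virtual machine as a virtual NIC might look roughly like the following sketch; the domain name and the VF's PCI address are placeholders:

    import libvirt

    # PCI address of the VF assigned to the VM; 0000:03:10.1 is purely illustrative
    vf_interface_xml = """
    <interface type='hostdev' managed='yes'>
      <source>
        <address type='pci' domain='0x0000' bus='0x03' slot='0x10' function='0x1'/>
      </source>
    </interface>
    """

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("vm-demo")  # the virtual machine created on the target compute node
    dom.attachDeviceFlags(vf_interface_xml,
                          libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)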
In another embodiment, after the virtual machine has been created, this embodiment of the present disclosure also sets a bandwidth QoS on the virtual NICs of the created virtual machine. Specifically, if the current bandwidth of any virtual NIC occupied by the created virtual machine exceeds the bandwidth initially specified by the user, this embodiment of the present disclosure limits the bandwidth occupied by that virtual NIC to within the initially specified bandwidth, so as to ensure that bandwidth is used reasonably between virtual machines and between the NICs inside a virtual machine, and to prevent them from preempting one another's bandwidth resources and affecting upper-layer services.
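For SR-IOV VFs, one way such a cap could be applied (an assumption for illustration; the disclosure does not name a specific mechanism) is the per-VF transmit-rate limit offered by iproute2 on the physical function; on recent iproute2 releases the option is max_tx_rate, while older releases use rate:

    import subprocess

    def cap_vf_rate(pf_dev: str, vf_index: int, max_mbps: int) -> None:
        """Limit the transmit rate of one SR-IOV VF (rate in Mbit/s)."""
        subprocess.run(
            ["ip", "link", "set", "dev", pf_dev, "vf", str(vf_index),
             "max_tx_rate", str(max_mbps)],
            check=True,
        )

    # e.g. cap VF 0 of physical NIC enp3s0f0 (device name is illustrative)
    # to the 10 Mbit/s the user initially specified
    cap_vf_rate("enp3s0f0", 0, 10)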
In summary, the beneficial effects brought by the embodiments of the present disclosure are as follows:
A. The embodiments of the present disclosure pool the network resources on the compute nodes of the cloud platform and configure one NIC resource pool for each compute node, so the resources of all the physical NICs configured on each compute node can be scheduled in a unified manner for use by upper-layer services. The cloud platform can automatically schedule a to-be-created virtual machine onto a suitable compute node for creation, according to the parameter information of the virtual NICs to be occupied by the to-be-created virtual machine and the current resource occupation information of the NIC resource pool of each compute node. Therefore, the user does not need to select a suitable compute node on their own, the different performance requirements of different virtual machines are taken into account, and multiple virtual machines with high bandwidth and latency requirements will not be placed on the same compute node and exhaust its resources, so the network resources of the compute nodes are used fully and reasonably. In addition, because the user does not need to carry out complex resource planning for various situations, the complexity of service provisioning is low and system performance is improved.
B. On the basis of constructing the NIC resource pools, the embodiments of the present disclosure also support setting a bandwidth QoS on the virtual NICs occupied by a created virtual machine, ensuring that bandwidth is used reasonably between virtual machines and between the NICs inside a virtual machine, and preventing them from preempting one another's bandwidth resources and affecting upper-layer services.
C. On the basis of constructing the NIC resource pools, the embodiments of the present disclosure also introduce virtual NIC affinity and anti-affinity. By specifying affinity information, for example specifying that different virtual NICs come from the same physical NIC, forwarding efficiency can be improved; if it is specified that different virtual NICs come from different physical NICs, then even if one physical NIC fails, the other virtual NICs of the virtual machine are not affected, which improves the reliability of the virtual machine.
In another embodiment, taking open-source OpenStack as an example, the modules in the system architecture diagram shown in FIG. 1 can be instantiated as the OpenStack components shown in FIG. 4.
Keystone: equivalent to the authentication module in FIG. 1. In OpenStack, Keystone is the project name of OpenStack Identity and authenticates all other OpenStack projects. The service provides token, policy, and catalog functions through the OpenStack API.
Glance: equivalent to the image module in FIG. 1; it provides query, upload, and download services for virtual machine images.
Glance provides a RESTful API through which the metadata of a virtual machine image can be queried and the image content can be obtained. Through Glance, virtual machine images can be stored on multiple kinds of storage, such as simple file storage or object storage.
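For instance, using the openstacksdk client (one possible way to reach the Glance API; the cloud name and image name below are placeholders), image metadata can be queried along these lines:

    import openstack  # openstacksdk

    conn = openstack.connect(cloud="mycloud")      # entry from clouds.yaml; placeholder name
    image = conn.image.find_image("cirros-0.5.2")  # look up image metadata by name
    if image:
        print(image.id, image.disk_format, image.size)
        # conn.image.download_image(image) would retrieve the image content itself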
Neutron: equivalent to the network module in FIG. 1; it provides network support for the entire OpenStack environment, including Layer 2 switching, Layer 3 routing, load balancing, firewalls, and VPNs.
Cinder: equivalent to the storage module in FIG. 1. Its core function is volume management, allowing volumes, volume types, volume snapshots, and volume backups to be handled. It provides a unified interface for different back-end storage devices, and different block device service vendors implement their driver support in Cinder to integrate with OpenStack.
Nova is the project in OpenStack responsible for compute resource management; it mainly handles virtual machine lifecycle management and the lifecycle management of other compute resources, and includes several important components such as Nova-api, Nova-conductor, Nova-scheduler, and Nova-compute. Nova-api performs operations such as parameter extraction, parameter validation, and data object handling, and is equivalent to the API module in FIG. 1.
Nova-conductor: carries out complex operations and acts as the database access proxy for nova-compute.
Nova-scheduler: schedules the placement of virtual machines, and is equivalent to the scheduling module in FIG. 1.
Nova-compute: manages the compute node and locally carries out virtual machine lifecycle management actions, and is equivalent to the cloud agent module in FIG. 1.
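As a hedged illustration of where NIC-resource-pool scheduling could hook into Nova-scheduler, a custom host filter might look roughly like the sketch below. The extra_specs keys, the stats assumed to be reported by the compute node, and the exact base-class path are all assumptions that should be checked against the Nova release in use:

    from nova.scheduler import filters

    class NicPoolFilter(filters.BaseHostFilter):
        """Pass only hosts whose NIC resource pool can satisfy the requested virtual NICs."""

        def host_passes(self, host_state, spec_obj):
            extra = spec_obj.flavor.extra_specs or {}
            need_vfs = int(extra.get("nicpool:vnic_count", 0))
            need_bw = int(extra.get("nicpool:vnic_bandwidth_mbps", 0))
            # assumed to be reported by an agent running on the compute node
            free_vfs = int(host_state.stats.get("nicpool_free_vfs", 0))
            free_bw = int(host_state.stats.get("nicpool_free_bw_mbps", 0))
            return free_vfs >= need_vfs and free_bw >= need_vfs * need_bw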
FIG. 5 is a schematic structural diagram of a virtual machine creation apparatus according to an embodiment of the present disclosure. The apparatus is applied to a cloud platform, where each compute node of the cloud platform includes a NIC resource pool, and a NIC resource pool includes the physical NICs configured on that compute node. Referring to FIG. 5, the apparatus includes:
a receiving module 501, configured to receive a virtual machine creation request, where the virtual machine creation request includes parameter information of virtual NICs to be occupied by a to-be-created virtual machine;
a first obtaining module 502, configured to obtain current resource occupation information of the NIC resource pool of each compute node in the cloud platform;
a first determining module 503, configured to determine, from among at least one compute node of the cloud platform, a target compute node for creating the virtual machine according to the parameter information and the current resource occupation information of each NIC resource pool; and
a creation module 504, configured to invoke the target compute node to create the virtual machine.
The apparatus provided in this embodiment of the present disclosure pools the network resources on the compute nodes of the cloud platform and configures one NIC resource pool for each compute node, so the resources of all the physical NICs configured on each compute node can be scheduled in a unified manner for use by upper-layer services. The cloud platform can automatically schedule a to-be-created virtual machine onto a suitable compute node for creation, according to the parameter information of the virtual NICs to be occupied by the to-be-created virtual machine and the current resource occupation information of the NIC resource pool of each compute node. Therefore, the user does not need to select a suitable compute node on their own, the different performance requirements of different virtual machines are taken into account, and multiple virtual machines with high bandwidth and latency requirements will not be placed on the same compute node and exhaust its resources, so the network resources of the compute nodes are used fully and reasonably. In addition, because the user does not need to carry out complex resource planning for various situations, the complexity of service provisioning is low and system performance is improved.
In another embodiment, the parameter information includes the number of virtual NICs to be occupied by the to-be-created virtual machine, the virtual NIC bandwidth, and affinity information of the virtual NICs occupied by the virtual machine;
where the affinity information is used to indicate whether different virtual NICs occupied by one virtual machine come from the same physical NIC;
when the affinity information indicates that affinity is to be maintained, the different virtual NICs occupied by one virtual machine come from the same physical NIC; and
when the affinity information indicates that anti-affinity is to be maintained, the different virtual NICs occupied by one virtual machine come from different physical NICs.
In another embodiment, the apparatus further includes:
a second obtaining module, configured to obtain NIC information of each physical NIC in the NIC resource pool of every compute node;
a second determining module, configured to: for each NIC resource pool, determine, according to the NIC information of each physical NIC in the NIC resource pool, the number of available virtual NICs and the available virtual NIC bandwidth of each physical NIC in the NIC resource pool, and use the number of available virtual NICs and the available virtual NIC bandwidth of each physical NIC in the NIC resource pool as the resource occupation information of the NIC resource pool; and
a storage module, configured to store the resource occupation information of the NIC resource pool in a cloud platform database;
where the second obtaining module is configured to obtain the current resource occupation information of the NIC resource pool of each compute node from the cloud platform database.
In another embodiment, the apparatus further includes:
the second determining module, further configured to: after the target compute node is determined, recompute the resource occupation information of the NIC resource pool of the target compute node based on the parameter information; and
the storage module, further configured to store the updated resource occupation information of the target compute node in the cloud platform database.
In another embodiment, the first determining module is further configured to: obtain currently available CPU resources and currently available memory resources of every compute node of the cloud platform; determine candidate compute nodes from among the at least one compute node of the cloud platform according to the currently available CPU resources and currently available memory resources of each compute node; and determine the target compute node from among the candidate compute nodes according to the parameter information and the current resource occupation information of each NIC resource pool.
In another embodiment, the apparatus further includes:
a processing module, configured to: after the virtual machine has been created, if the current bandwidth of any virtual NIC occupied by the virtual machine exceeds a target bandwidth specified in the parameter information, limit the bandwidth occupied by that virtual NIC to within the target bandwidth.
All of the foregoing optional technical solutions may be combined in any manner to form optional embodiments of the present disclosure, and details are not repeated here one by one.
It should be noted that, when the virtual machine creation apparatus provided in the foregoing embodiments creates a virtual machine, the division into the foregoing functional modules is merely used as an example for description. In practical applications, the foregoing functions may be allocated to different functional modules as required, that is, the internal structure of the device may be divided into different functional modules to complete all or some of the functions described above. In addition, the virtual machine creation apparatus provided in the foregoing embodiments and the virtual machine creation method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not repeated here.
FIG. 6 is a schematic structural diagram of a virtual machine creation apparatus according to an embodiment of the present disclosure. The apparatus 600 may vary considerably depending on its configuration or performance, and may include one or more processors (central processing units, CPU) 601 and one or more memories 602, where the memory 602 stores at least one instruction, and the at least one instruction is loaded and executed by the processor 601 to implement the virtual machine creation method provided in each of the foregoing method embodiments. Of course, the apparatus may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output, and may further include other components for implementing the functions of the device, which are not described here.
In an exemplary embodiment, a computer-readable storage medium is further provided, for example a memory including instructions, where the instructions can be executed by a processor in a terminal to complete the virtual machine creation method in the foregoing embodiments. For example, the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage apparatus, or the like.
A person of ordinary skill in the art can understand that all or some of the steps for implementing the foregoing embodiments may be completed by hardware, or may be completed by a program instructing related hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing descriptions are merely optional embodiments of the present disclosure and are not intended to limit the present disclosure. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present disclosure shall fall within the protection scope of the present disclosure.

Claims (15)

  1. A virtual machine creation method, wherein the method is applied to a cloud platform, each compute node of the cloud platform includes a network interface card (NIC) resource pool, and the NIC resource pool includes the physical NICs configured on that compute node, and the method comprises:
    receiving a virtual machine creation request, wherein the virtual machine creation request comprises parameter information of virtual NICs to be occupied by a to-be-created virtual machine;
    obtaining current resource occupation information of the NIC resource pool of each compute node in the cloud platform;
    determining, from among at least one compute node of the cloud platform, a target compute node for creating the virtual machine according to the parameter information and the current resource occupation information of each NIC resource pool; and
    invoking the target compute node to create the virtual machine.
  2. The method according to claim 1, wherein the parameter information comprises a quantity of virtual NICs to be occupied by the to-be-created virtual machine, a virtual NIC bandwidth, and affinity information of the virtual NICs occupied by the virtual machine;
    wherein the affinity information is used to indicate whether different virtual NICs occupied by one virtual machine come from a same physical NIC;
    when the affinity information indicates that affinity is to be maintained, the different virtual NICs occupied by one virtual machine come from the same physical NIC; and
    when the affinity information indicates that anti-affinity is to be maintained, the different virtual NICs occupied by one virtual machine come from different physical NICs.
  3. The method according to claim 1, wherein the method further comprises:
    obtaining NIC information of each physical NIC in the NIC resource pool of every compute node;
    for each NIC resource pool, determining, according to the NIC information of each physical NIC in the NIC resource pool, a quantity of available virtual NICs and an available virtual NIC bandwidth of each physical NIC in the NIC resource pool;
    using the quantity of available virtual NICs and the available virtual NIC bandwidth of each physical NIC in the NIC resource pool as the resource occupation information of the NIC resource pool; and
    storing the resource occupation information of the NIC resource pool in a cloud platform database;
    wherein the obtaining current resource occupation information of the NIC resource pool of each compute node in the cloud platform comprises:
    obtaining the current resource occupation information of the NIC resource pool of each compute node from the cloud platform database.
  4. The method according to claim 3, wherein the method further comprises:
    after the target compute node is determined, recomputing the resource occupation information of the NIC resource pool of the target compute node based on the parameter information; and
    storing the updated resource occupation information of the target compute node in the cloud platform database.
  5. The method according to claim 1, wherein the determining, from among at least one compute node of the cloud platform, a target compute node for creating the virtual machine according to the parameter information and the current resource occupation information of each NIC resource pool comprises:
    obtaining currently available central processing unit (CPU) resources and currently available memory resources of every compute node of the cloud platform;
    determining candidate compute nodes from among the at least one compute node of the cloud platform according to the currently available CPU resources and currently available memory resources of each compute node; and
    determining the target compute node from among the candidate compute nodes according to the parameter information and the current resource occupation information of each NIC resource pool.
  6. The method according to claim 1, wherein the method further comprises:
    after the virtual machine has been created, if a current bandwidth of any virtual NIC occupied by the virtual machine exceeds a target bandwidth specified in the parameter information, limiting the bandwidth occupied by that virtual NIC to within the target bandwidth.
  7. A virtual machine creation apparatus, wherein the apparatus is applied to a cloud platform, each compute node of the cloud platform includes a NIC resource pool, and the NIC resource pool includes the physical NICs configured on that compute node, and the apparatus comprises:
    a receiving module, configured to receive a virtual machine creation request, wherein the virtual machine creation request comprises parameter information of virtual NICs to be occupied by a to-be-created virtual machine;
    a first obtaining module, configured to obtain current resource occupation information of the NIC resource pool of each compute node in the cloud platform;
    a first determining module, configured to determine, from among at least one compute node of the cloud platform, a target compute node for creating the virtual machine according to the parameter information and the current resource occupation information of each NIC resource pool; and
    a creation module, configured to invoke the target compute node to create the virtual machine.
  8. The apparatus according to claim 7, wherein the parameter information comprises a quantity of virtual NICs to be occupied by the to-be-created virtual machine, a virtual NIC bandwidth, and affinity information of the virtual NICs occupied by the virtual machine;
    wherein the affinity information is used to indicate whether different virtual NICs occupied by one virtual machine come from a same physical NIC;
    when the affinity information indicates that affinity is to be maintained, the different virtual NICs occupied by one virtual machine come from the same physical NIC; and
    when the affinity information indicates that anti-affinity is to be maintained, the different virtual NICs occupied by one virtual machine come from different physical NICs.
  9. The apparatus according to claim 7, wherein the apparatus further comprises:
    a second obtaining module, configured to obtain NIC information of each physical NIC in the NIC resource pool of every compute node;
    a second determining module, configured to: for each NIC resource pool, determine, according to the NIC information of each physical NIC in the NIC resource pool, a quantity of available virtual NICs and an available virtual NIC bandwidth of each physical NIC in the NIC resource pool, and use the quantity of available virtual NICs and the available virtual NIC bandwidth of each physical NIC in the NIC resource pool as the resource occupation information of the NIC resource pool; and
    a storage module, configured to store the resource occupation information of the NIC resource pool in a cloud platform database;
    wherein the second obtaining module is configured to obtain the current resource occupation information of the NIC resource pool of each compute node from the cloud platform database.
  10. The apparatus according to claim 9, wherein the apparatus further comprises:
    the second determining module, further configured to: after the target compute node is determined, recompute the resource occupation information of the NIC resource pool of the target compute node based on the parameter information; and
    the storage module, further configured to store the updated resource occupation information of the target compute node in the cloud platform database.
  11. The apparatus according to claim 7, wherein the first determining module is further configured to: obtain currently available central processing unit (CPU) resources and currently available memory resources of every compute node of the cloud platform; determine candidate compute nodes from among the at least one compute node of the cloud platform according to the currently available CPU resources and currently available memory resources of each compute node; and determine the target compute node from among the candidate compute nodes according to the parameter information and the current resource occupation information of each NIC resource pool.
  12. The apparatus according to claim 7, wherein the apparatus further comprises:
    a processing module, configured to: after the virtual machine has been created, if a current bandwidth of any virtual NIC occupied by the virtual machine exceeds a target bandwidth specified in the parameter information, limit the bandwidth occupied by that virtual NIC to within the target bandwidth.
  13. A storage medium, wherein the storage medium stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the virtual machine creation method according to any one of claims 1 to 6.
  14. A computer program product containing instructions, wherein when the computer program product runs on a computer, the computer is enabled to perform the virtual machine creation method according to any one of claims 1 to 6.
  15. A cloud platform, wherein the cloud platform comprises the virtual machine creation apparatus according to any one of claims 7 to 12.
PCT/CN2019/078813 2018-03-22 2019-03-20 虚拟机创建方法及装置 WO2019179453A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
ES19771706T ES2945218T3 (es) 2018-03-22 2019-03-20 Método y aparato de creación de máquina virtual
EP19771706.9A EP3761170B1 (en) 2018-03-22 2019-03-20 Virtual machine creation method and apparatus
US17/026,767 US11960915B2 (en) 2018-03-22 2020-09-21 Method and apparatus for creating virtual machine based on parameter information of a virtual network interface card

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810241274.8 2018-03-22
CN201810241274.8A CN108614726B (zh) 2018-03-22 2018-03-22 虚拟机创建方法及装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/026,767 Continuation US11960915B2 (en) 2018-03-22 2020-09-21 Method and apparatus for creating virtual machine based on parameter information of a virtual network interface card

Publications (1)

Publication Number Publication Date
WO2019179453A1 true WO2019179453A1 (zh) 2019-09-26

Family

ID=63658755

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/078813 WO2019179453A1 (zh) 2018-03-22 2019-03-20 虚拟机创建方法及装置

Country Status (5)

Country Link
US (1) US11960915B2 (zh)
EP (1) EP3761170B1 (zh)
CN (1) CN108614726B (zh)
ES (1) ES2945218T3 (zh)
WO (1) WO2019179453A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111580935A (zh) * 2020-05-13 2020-08-25 深信服科技股份有限公司 一种网络通信方法、装置、设备及存储介质
CN113608833A (zh) * 2021-07-19 2021-11-05 曙光信息产业(北京)有限公司 虚拟机创建方法、装置、计算机设备和存储介质
CN113766005A (zh) * 2021-07-29 2021-12-07 苏州浪潮智能科技有限公司 一种基于rdma的批量创建云主机的方法、系统

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108614726B (zh) * 2018-03-22 2022-06-10 华为云计算技术有限公司 虚拟机创建方法及装置
CN111124593B (zh) * 2018-10-31 2023-07-21 中国移动通信有限公司研究院 信息处理方法及装置、网元及存储介质
CN109445801A (zh) * 2018-11-05 2019-03-08 郑州云海信息技术有限公司 一种探测裸机网卡信息的方法和装置
CN109379699B (zh) * 2018-11-12 2020-08-25 中国联合网络通信集团有限公司 创建虚拟化转发面网元的方法及装置
CN110704167B (zh) * 2019-10-09 2023-09-19 腾讯科技(深圳)有限公司 一种创建虚拟机的方法、装置、设备和存储介质
CN111124683A (zh) * 2019-12-25 2020-05-08 泰康保险集团股份有限公司 虚拟资源创建方法、装置及系统
CN113535319A (zh) * 2020-04-09 2021-10-22 深圳致星科技有限公司 一种实现多rdma网卡虚拟化的方法、设备及存储介质
CN112433823A (zh) * 2020-12-08 2021-03-02 上海寒武纪信息科技有限公司 动态虚拟化物理卡的设备及方法
CN113010263A (zh) * 2021-02-26 2021-06-22 山东英信计算机技术有限公司 云平台中的虚拟机的创建方法、系统、设备及存储介质
CN113032107B (zh) * 2021-05-24 2022-05-10 北京金山云网络技术有限公司 一种云数据库的资源管理方法、装置及系统
CN113645057B (zh) * 2021-06-25 2023-04-07 济南浪潮数据技术有限公司 一种云平台支持添加网卡模型的方法、装置
US20210326221A1 (en) * 2021-06-26 2021-10-21 Intel Corporation Network interface device management of service execution failover
CN113760452B (zh) * 2021-08-02 2023-09-26 阿里巴巴新加坡控股有限公司 一种容器调度方法、系统、设备及存储介质
CN114697242A (zh) * 2022-03-21 2022-07-01 浪潮云信息技术股份公司 一种政务云场景下客户虚拟网卡流量管理方法及系统
CN115269126B (zh) * 2022-09-28 2022-12-27 中国人寿保险股份有限公司上海数据中心 一种基于余弦相似度的云平台反亲和调度系统
CN115328666B (zh) * 2022-10-14 2023-07-14 浪潮电子信息产业股份有限公司 设备调度方法、系统、电子设备及计算机可读存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8473947B2 (en) * 2010-01-18 2013-06-25 Vmware, Inc. Method for configuring a physical adapter with virtual function (VF) and physical function (PF) for controlling address translation between virtual disks and physical storage regions
CN103810015A (zh) * 2012-11-09 2014-05-21 华为技术有限公司 虚拟机创建方法和设备
CN104168135A (zh) * 2014-08-06 2014-11-26 中国船舶重工集团公司第七0九研究所 网卡资源池化管理方法及系统
WO2017152633A1 (zh) * 2016-03-09 2017-09-14 中兴通讯股份有限公司 一种端口绑定实现方法及装置
CN108614726A (zh) * 2018-03-22 2018-10-02 华为技术有限公司 虚拟机创建方法及装置

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102932174B (zh) * 2012-10-25 2015-07-29 华为技术有限公司 一种物理网卡管理方法、装置及物理主机
CN103699428A (zh) * 2013-12-20 2014-04-02 华为技术有限公司 一种虚拟网卡中断亲和性绑定的方法和计算机设备
CN103778443B (zh) 2014-02-20 2017-05-03 公安部第三研究所 基于主题模型方法和领域规则库实现场景分析描述的方法
US10481932B2 (en) * 2014-03-31 2019-11-19 Vmware, Inc. Auto-scaling virtual switches
US9473365B2 (en) * 2014-05-08 2016-10-18 Cisco Technology, Inc. Collaborative inter-service scheduling of logical resources in cloud platforms
TWI522921B (zh) * 2014-11-14 2016-02-21 廣達電腦股份有限公司 虛擬機器建立系統以及方法
CN105656969A (zh) * 2014-11-24 2016-06-08 中兴通讯股份有限公司 一种虚拟机迁移决策方法及装置
US10693806B2 (en) * 2015-03-11 2020-06-23 Vmware, Inc. Network bandwidth reservations for system traffic and virtual computing instances
CN105224392B (zh) * 2015-10-13 2018-07-27 中国联合网络通信集团有限公司 一种虚拟计算资源配额管理方法及平台
CN107346264A (zh) * 2016-05-05 2017-11-14 北京金山云网络技术有限公司 一种虚拟机负载均衡调度的方法、装置和服务器设备
US10782992B2 (en) * 2016-11-01 2020-09-22 Nutanix, Inc. Hypervisor conversion

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8473947B2 (en) * 2010-01-18 2013-06-25 Vmware, Inc. Method for configuring a physical adapter with virtual function (VF) and physical function (PF) for controlling address translation between virtual disks and physical storage regions
CN103810015A (zh) * 2012-11-09 2014-05-21 华为技术有限公司 虚拟机创建方法和设备
CN104168135A (zh) * 2014-08-06 2014-11-26 中国船舶重工集团公司第七0九研究所 网卡资源池化管理方法及系统
WO2017152633A1 (zh) * 2016-03-09 2017-09-14 中兴通讯股份有限公司 一种端口绑定实现方法及装置
CN108614726A (zh) * 2018-03-22 2018-10-02 华为技术有限公司 虚拟机创建方法及装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3761170A4 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111580935A (zh) * 2020-05-13 2020-08-25 深信服科技股份有限公司 一种网络通信方法、装置、设备及存储介质
CN113608833A (zh) * 2021-07-19 2021-11-05 曙光信息产业(北京)有限公司 虚拟机创建方法、装置、计算机设备和存储介质
CN113766005A (zh) * 2021-07-29 2021-12-07 苏州浪潮智能科技有限公司 一种基于rdma的批量创建云主机的方法、系统

Also Published As

Publication number Publication date
EP3761170A4 (en) 2021-04-07
CN108614726B (zh) 2022-06-10
US20210004258A1 (en) 2021-01-07
ES2945218T3 (es) 2023-06-29
EP3761170B1 (en) 2023-03-01
CN108614726A (zh) 2018-10-02
US11960915B2 (en) 2024-04-16
EP3761170A1 (en) 2021-01-06

Similar Documents

Publication Publication Date Title
WO2019179453A1 (zh) 虚拟机创建方法及装置
US11593149B2 (en) Unified resource management for containers and virtual machines
US20210042144A1 (en) Virtual machine morphing for heterogeneous migration environments
CN108737468B (zh) 云平台服务集群、构建方法及装置
EP3313023B1 (en) Life cycle management method and apparatus
US11392422B1 (en) Service-managed containers for container orchestration service
CN107959582B (zh) 一种切片实例的管理方法及装置
JPWO2016167086A1 (ja) サーバ選択装置、サーバ選択方法及びサーバ選択プログラム
US11924117B2 (en) Automated local scaling of compute instances
WO2018040525A1 (zh) 资源池的处理方法、装置和设备
US11962599B2 (en) Techniques for automatically configuring minimal cloud service access rights for container applications
US9361120B2 (en) Pluggable cloud enablement boot device and method that determines hardware resources via firmware
US10031761B2 (en) Pluggable cloud enablement boot device and method
EP3879875A1 (en) Resource change method and device, apparatus, and storage medium
US20230221997A1 (en) System and method for subscription management using composed systems
WO2017041650A1 (zh) 用于扩展分布式一致性服务的方法和设备
US11847611B2 (en) Orchestrating and automating product deployment flow and lifecycle management
WO2021233152A1 (zh) 虚拟化网络功能部署方法、管理与编排平台和介质
CN115150268A (zh) Kubernetes集群的网络配置方法、装置、及电子设备
WO2023179580A1 (zh) 一种部署vnf的方法、装置及设备
CN116089020B (zh) 虚拟机运行方法、扩容方法、扩容系统
WO2023274014A1 (zh) 容器集群的存储资源管理方法、装置及系统
CN106775917B (zh) 一种虚拟机启动的方法及系统
CN117369981A (zh) 基于监控器的容器调整方法、设备及存储介质
US20180287868A1 (en) Control method and control device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19771706

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019771706

Country of ref document: EP

Effective date: 20201001