CN117675583A - Communication method, communication device and communication system - Google Patents


Info

Publication number
CN117675583A
CN117675583A (application CN202211092108.9A)
Authority
CN
China
Prior art keywords
dpu, virtual network, information, network disk, disk
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211092108.9A
Other languages
Chinese (zh)
Inventor
赖荣文
黄宝君
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202211092108.9A
Publication of CN117675583A

Landscapes

  • Hardware Redundancy (AREA)

Abstract

The application provides a communication method, a communication device, and a communication system. In the method, two DPUs in a primary-standby relationship are deployed in a server host, and both DPUs provide network services for the host, which improves the reliability and stability of the server host. The first DPU and the second DPU each create a virtual network disk, and both virtual network disks are bound to the same physical network disk, so the physical network disk can be accessed through the first DPU by accessing the first virtual network disk, or through the second DPU by accessing the second virtual network disk; that is, the physical network disk can be reached through either DPU.

Description

Communication method, communication device and communication system
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to a communication method, a communication device, and a communication system.
Background
The network functions virtualization (NFV) architecture is a standard architecture that defines how NFV is implemented. The idea of NFV is to run standardized network functions on unified hardware.
In the NFV architecture, standardized software implementing various network functions can generally run on the same hardware device, which requires NFV to have a unified standard. The NFV architecture includes the network functions virtualization infrastructure (NFVI), virtual network functions (VNFs), and management and orchestration (MANO).
The hardware in the NFV architecture is formed by jointly deploying a plurality of server hosts, and for each of them the question of how to improve the reliability and stability of the deployed server host needs to be solved.
Disclosure of Invention
The embodiment of the application provides a communication method, a communication device and a communication system, which are used for improving the reliability and stability of a server host.
In a first aspect, embodiments of the present application provide a communication method that may be performed by a first DPU or a module (e.g., a chip) applied to the first DPU. Taking the first DPU performing the method as an example, the method includes: the first DPU receives a request message, where the request message requests establishment of a virtual machine and includes information of a first virtual network disk, information of a second virtual network disk, and information of a physical network disk, and the first virtual network disk and the second virtual network disk have an association relationship; the first DPU creates the first virtual network disk according to the information of the first virtual network disk, and establishes an association between the first virtual network disk and the physical network disk; and the first DPU sends the information of the second virtual network disk and the information of the physical network disk to a second DPU, where this information is used to create the second virtual network disk and establish an association between the second virtual network disk and the physical network disk, the second DPU is a backup DPU of the first DPU, and the first DPU and the second DPU belong to the same server host.
With this scheme, two DPUs in a primary-standby relationship are deployed in one server host, and both DPUs provide network services for the host, which improves the reliability and stability of the server host. The first DPU and the second DPU each create a virtual network disk, and both virtual network disks are bound to the same physical network disk, so the physical network disk can be accessed through the first DPU by accessing the first virtual network disk, or through the second DPU by accessing the second virtual network disk; that is, the physical network disk can be reached through either DPU.
In a possible implementation method, the request message further includes information of a first virtual network card and information of a second virtual network card, where the first virtual network card and the second virtual network card have an association relationship; the first DPU creates the first virtual network card according to the information of the first virtual network card; the first DPU sends information of the second virtual network card to the second DPU, the information of the second virtual network card being used to create the second virtual network card.
With this scheme, the first DPU and the second DPU each create a virtual network card, so communication with the outside via the first DPU can be achieved by accessing the first virtual network card, or via the second DPU by accessing the second virtual network card; that is, either DPU can carry the external communication.
In one possible implementation, the first DPU creates and starts a virtual machine; the virtual machine corresponds to the first virtual network disk and the second virtual network disk, the first virtual network disk is related to the physical network disk through the first DPU, and the second virtual network disk is related to the physical network disk through the second DPU; the virtual machine corresponds to the first virtual network card and the second virtual network card, the first virtual network card is associated with the first DPU, and the second virtual network card is associated with the second DPU.
In one possible implementation, the first DPU establishes a communication channel between the first DPU and the second DPU.
In one possible implementation, the first DPU obtains an arbitration result, where the arbitration result indicates that the first DPU is the primary DPU and the second DPU is the standby DPU.
In a possible implementation, when the first DPU receives an upgrade completion instruction from the second DPU, the first DPU upgrades the NFVI in the first DPU, where the upgrade completion instruction indicates that the upgrade of the NFVI in the second DPU is complete, and the first DPU and the second DPU contain the same NFVI.
In one possible implementation, the first DPU receives an upgrade instruction from the VIM device, the upgrade instruction indicating an upgrade to the NFVI; the first DPU sends the upgrade instruction to a second DPU.
With this scheme, during an NFVI upgrade the NFVI in the second DPU (that is, the standby DPU) is upgraded first, while the NFVI in the first DPU continues to run normally, so service execution is not interrupted. After the NFVI in the second DPU has been upgraded, the second DPU can be promoted to primary DPU and the services migrated to it, and the NFVI in the first DPU is then upgraded; services are never interrupted during this process. The scheme thus achieves a lossless NFVI upgrade, avoiding live migration and batching of virtual machines and host restarts, and resolves the high complexity, long upgrade time, and high cost of existing upgrade mechanisms that rely on batched upgrades, live migration, and host restarts.
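The upgrade order described above can be sketched as follows. This is a toy model under stated assumptions: the `DPU` class, its field names, and the version numbers are all illustrative and not taken from the patent.

```python
class DPU:
    """Toy stand-in for a DPU; names and fields are illustrative."""
    def __init__(self, name):
        self.name = name
        self.nfvi_version = 1   # pre-upgrade NFVI version (assumed)
        self.serving = False    # whether this DPU currently carries the services

    def upgrade_nfvi(self):
        self.nfvi_version = 2   # post-upgrade NFVI version (assumed)

def lossless_upgrade(primary, standby):
    """Standby-first upgrade: services always run on an un-upgrading DPU."""
    primary.serving = True
    standby.upgrade_nfvi()                          # primary still serves traffic
    primary.serving, standby.serving = False, True  # switchover: migrate services
    primary, standby = standby, primary             # former standby is now primary
    standby.upgrade_nfvi()                          # finally upgrade the former primary
    return primary, standby

a, b = DPU("dpu-1"), DPU("dpu-2")
new_primary, new_standby = lossless_upgrade(a, b)
print(new_primary.name)  # dpu-2
```

At no point is the serving DPU being upgraded, which is the sense in which the upgrade is "lossless" in the scheme above.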
In a second aspect, embodiments of the present application provide a communication method that may be performed by a second DPU or a module (e.g., a chip) applied to the second DPU. Taking the second DPU as an example, the method includes: the second DPU receives information of a second virtual network disk and information of a physical network disk from the first DPU, wherein the second DPU is a backup DPU of the first DPU, and the first DPU and the second DPU belong to the same server host; the second DPU creates a second virtual network disk according to the information of the second virtual network disk; the second DPU establishes the association of the second virtual network disk and the physical network disk; the second virtual network disk has an association relationship with a first virtual network disk in the first DPU, and the first virtual network disk has an association relationship with the physical network disk.
With this scheme, two DPUs in a primary-standby relationship are deployed in one server host, and both DPUs provide network services for the host, which improves the reliability and stability of the server host. The first DPU and the second DPU each create a virtual network disk, and both virtual network disks are bound to the same physical network disk, so the physical network disk can be accessed through the first DPU by accessing the first virtual network disk, or through the second DPU by accessing the second virtual network disk; that is, the physical network disk can be reached through either DPU.
In one possible implementation, the second DPU receives information from a second virtual network card of the first DPU; the second DPU creates a second virtual network card according to the information of the second virtual network card; the second virtual network card has an association relationship with the first virtual network card in the first DPU.
With this scheme, the first DPU and the second DPU each create a virtual network card, so communication with the outside via the first DPU can be achieved by accessing the first virtual network card, or via the second DPU by accessing the second virtual network card; that is, either DPU can carry the external communication.
In one possible implementation, the second DPU establishes a communication channel between the first DPU and the second DPU.
In one possible implementation, the second DPU obtains an arbitration result, where the arbitration result indicates that the first DPU is the primary DPU and the second DPU is the standby DPU.
In a possible implementation method, the second DPU receives an upgrade instruction from the first DPU or VIM device, the upgrade instruction indicating to upgrade the NFVI, the first DPU and the second DPU including the same NFVI; the second DPU upgrades the NFVI in the second DPU; the second DPU sends an upgrade completion instruction to the first DPU, the upgrade completion instruction indicating that an upgrade to the NFVI within the second DPU is complete.
With this scheme, during an NFVI upgrade the NFVI in the second DPU (that is, the standby DPU) is upgraded first, while the NFVI in the first DPU continues to run normally, so service execution is not interrupted. After the NFVI in the second DPU has been upgraded, the second DPU can be promoted to primary DPU and the services migrated to it, and the NFVI in the first DPU is then upgraded; services are never interrupted during this process. The scheme thus achieves a lossless NFVI upgrade, avoiding live migration and batching of virtual machines and host restarts, and resolves the high complexity, long upgrade time, and high cost of existing upgrade mechanisms that rely on batched upgrades, live migration, and host restarts.
In one possible implementation, the second DPU determines that the first DPU has failed and promotes itself to the primary DPU.
With this scheme, because two DPUs are deployed on the server host, when the software or hardware of one DPU fails, the other DPU can take over its work, so the services of the server host are unaffected; this resolves the service-interruption problem of schemes that deploy only a single DPU.
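The failover step can be sketched minimally as follows; the dictionary keys (`role`, `failed`) are assumptions for illustration, not terms from the patent.

```python
def failover(dpus):
    """If the current primary DPU has failed, promote the backup DPU.

    Illustrative sketch of the role swap only; real arbitration and
    health detection are outside the scope of this toy model."""
    primary = next(d for d in dpus if d["role"] == "primary")
    if primary["failed"]:
        backup = next(d for d in dpus if d["role"] == "backup")
        primary["role"], backup["role"] = "backup", "primary"

dpus = [
    {"name": "dpu-1", "role": "primary", "failed": True},
    {"name": "dpu-2", "role": "backup", "failed": False},
]
failover(dpus)
print(dpus[1]["role"])  # primary
```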
In a third aspect, embodiments of the present application provide a communication method that may be performed by a management device or a module (e.g., a chip) applied to the management device. Taking the management device as an example, the method comprises the following steps: the management device sends a request message to a first DPU, wherein the request message requests to establish a virtual machine, the request message comprises information of a first virtual network disk and information of a physical network disk, and the information of the first virtual network disk and the information of the physical network disk are used for establishing the first virtual network disk and establishing association between the first virtual network disk and the physical network disk; the management device sends information of a second virtual network disk and information of the physical network disk to a second DPU, wherein the information of the second virtual network disk and the information of the physical network disk are used for creating the second virtual network disk and establishing association between the second virtual network disk and the physical network disk; the first virtual network disk and the second virtual network disk have an association relationship, the second DPU is a backup DPU of the first DPU, and the first DPU and the second DPU belong to the same server host.
With this scheme, two DPUs in a primary-standby relationship are deployed in one server host, and both DPUs provide network services for the host, which improves the reliability and stability of the server host. The first DPU and the second DPU each create a virtual network disk, and both virtual network disks are bound to the same physical network disk, so the physical network disk can be accessed through the first DPU by accessing the first virtual network disk, or through the second DPU by accessing the second virtual network disk; that is, the physical network disk can be reached through either DPU.
In a possible implementation method, the management device sends information of a first virtual network card to the first DPU, where the information of the first virtual network card is used to create the first virtual network card; the management device sends information of a second virtual network card to the second DPU, wherein the information of the second virtual network card is used for creating the second virtual network card; the first virtual network card and the second virtual network card have an association relationship.
With this scheme, the first DPU and the second DPU each create a virtual network card, so communication with the outside via the first DPU can be achieved by accessing the first virtual network card, or via the second DPU by accessing the second virtual network card; that is, either DPU can carry the external communication.
In a possible implementation method, the management device sends a first upgrade instruction to the second DPU, where the first upgrade instruction indicates to upgrade the NFVI; and the management device receives an upgrade completion instruction from the second DPU, and then sends a second upgrade instruction to the first DPU, wherein the second upgrade instruction indicates upgrading of the NFVI.
With this scheme, during an NFVI upgrade the NFVI in the second DPU (that is, the standby DPU) is upgraded first, while the NFVI in the first DPU continues to run normally, so service execution is not interrupted. After the NFVI in the second DPU has been upgraded, the second DPU can be promoted to primary DPU and the services migrated to it, and the NFVI in the first DPU is then upgraded; services are never interrupted during this process. The scheme thus achieves a lossless NFVI upgrade, avoiding live migration and batching of virtual machines and host restarts, and resolves the high complexity, long upgrade time, and high cost of existing upgrade mechanisms that rely on batched upgrades, live migration, and host restarts.
In a possible implementation, the management device determines that the first DPU has failed and sends a notification message to the second DPU, where the notification message notifies the second DPU to become the primary DPU.
With this scheme, because two DPUs are deployed on the server host, when the software or hardware of one DPU fails, the other DPU can take over its work, so the services of the server host are unaffected; this resolves the service-interruption problem of schemes that deploy only a single DPU.
In a fourth aspect, embodiments of the present application provide a communication device comprising a processor and a memory; the memory is configured to store computer instructions that, when executed by the apparatus, cause the apparatus to perform any of the implementation methods of the first to third aspects described above.
In a fifth aspect, embodiments of the present application provide a communication device comprising means for performing the steps of any implementation method of the first to third aspects described above.
In a sixth aspect, embodiments of the present application provide a communication device, including a processor and an interface circuit, where the processor is configured to communicate with other devices through the interface circuit and perform any implementation method of the first to third aspects. There may be one or more processors.
In a seventh aspect, embodiments of the present application provide a communication apparatus comprising a processor coupled to a memory, the processor configured to invoke a program stored in the memory to perform any implementation method of the first to third aspects. The memory may be located inside or outside the apparatus, and there may be one or more processors.
In an eighth aspect, embodiments of the present application provide a communication device that may be a DPU or a module (e.g., a chip) for use in a DPU. The apparatus has the function of implementing any implementation method of the first or second aspect. The functions may be implemented by hardware, or by hardware executing corresponding software; the hardware or software includes one or more modules corresponding to the functions described above.
In a ninth aspect, embodiments of the present application also provide a computer program product comprising a computer program or instructions which, when executed by a communication device, cause any implementation of the above first to third aspects to be performed.
In a tenth aspect, embodiments of the present application further provide a computer readable storage medium having instructions stored therein that, when run on a communication device, cause any implementation method of the first to third aspects described above to be performed.
In an eleventh aspect, embodiments of the present application further provide a chip system, including: a processor configured to perform any implementation method of the first to third aspects.
In a twelfth aspect, embodiments of the present application further provide a communication system, including: a first DPU configured to implement any implementation method of the first aspect and a second DPU configured to implement any implementation method of the second aspect.
Drawings
FIG. 1 is a schematic diagram of an NFV architecture;
fig. 2 is a schematic flow chart of a communication method according to an embodiment of the present application;
fig. 3 is a schematic flow chart of a communication method according to an embodiment of the present application;
fig. 4 is a schematic diagram of two DPUs deployed in a server host according to an embodiment of the present application;
fig. 5 is a schematic flow chart of a communication method according to an embodiment of the present application;
fig. 6 is a schematic diagram of a communication device according to an embodiment of the present application;
fig. 7 is a schematic diagram of a communication device according to an embodiment of the present application.
Detailed Description
FIG. 1 is a schematic diagram of an NFV architecture. The NFV architecture is a standard architecture that defines how NFV is implemented. The idea of NFV is to run standardized network functions on unified hardware. In the NFV architecture, standardized software implementing various network functions can generally run on the same hardware device, which requires NFV to have a unified standard. The NFV architecture includes NFVI, VNF, and MANO. Wherein:
NFVI virtualizes the underlying hardware resources so that upper-layer software functions can run on the virtualized hardware resources, for example within a virtual machine (VM) or container. NFVI includes a hardware layer and a virtualization layer. The hardware layer includes hardware devices that provide computing, network, and storage resources. The virtualization layer abstracts the hardware resources into virtual resources, such as virtual computing resources, virtual storage resources, and virtual network resources.
The VNF implements network functions in software; various network functions, such as a virtual firewall or a virtual switch, are realized as software. In the NFV architecture, the various VNFs are implemented on the basis of NFVI. Since NFVI is a standardized architecture, it is common to the different VNFs.
MANO manages all infrastructure resources (i.e., the underlying hardware resources) and flexibly allocates them to VNFs based on the VNFs' needs. MANO includes the virtualized infrastructure manager (VIM), the VNF manager (VNFM), and the NFV orchestrator (NFVO). The VIM handles discovery of resources, management and allocation of virtual resources, fault handling, and so on. The VNFM controls the life cycle of VNFs (instantiation, configuration, shutdown, etc.). The NFVO orchestrates and manages the NFV architecture, software resources, and network services.
The NFV architecture may interact with the management functions of a service provider, which include, for example, an operations support system (OSS) and a business support system (BSS).
The NFVI has the following problem: an NFVI upgrade is highly complex; to avoid service interruption it requires batching and live migration of the VNF virtual machines, and it requires restarting the server host on which the NFVI is deployed multiple times, resulting in high complexity, long upgrade time, and high cost.
Currently, with the development of network technology, the industry has proposed offloading the NFVI to a smart network interface card (smart NIC) inserted into the server host; the smart NIC implements the NFVI functions, reducing the consumption of computing power of the central processing unit (CPU) on the server host. The smart NIC may be a data processing unit (DPU) or another type of processor; for ease of description, this application takes the smart NIC being a DPU as an example.
However, after the NFVI is offloaded to the smart NIC, the software and hardware complexity of the smart NIC increases, and the reliability and stability of the NFVI decrease accordingly. When the smart NIC fails, the services on the server host are interrupted.
Fig. 2 is a flow chart of a communication method according to an embodiment of the present application. The method comprises the following steps:
in step 201, the first DPU receives a request message.
In one implementation, the first DPU may receive the request message from a management device, such as the VIM device shown in fig. 1, or another management device, which is not limited in this application.
The request message requests to establish the virtual machine, the request message comprises information of a first virtual network disk, information of a second virtual network disk and information of a physical network disk, and the first virtual network disk and the second virtual network disk have an association relationship.
The information of the first virtual network disk includes at least one of a name of the first virtual network disk, a type of the first virtual network disk, a size of the first virtual network disk, and information of a shared disk, where the type of the first virtual network disk is a network multipath disk, and the information of the shared disk may be the name of the second virtual network disk.
The information of the second virtual network disk includes at least one of a name of the second virtual network disk, a type of the second virtual network disk, a size of the second virtual network disk, and information of the shared disk, where the type of the second virtual network disk is a network multipath disk, and the information of the shared disk may be a name of the first virtual network disk.
The type of the first virtual network disk is the same as the type of the second virtual network disk, and the size of the first virtual network disk is the same as the size of the second virtual network disk.
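The request message and its consistency rules (same type, same size, cross-referenced shared-disk names) can be modeled as follows. This is a minimal sketch: the class and field names (`VirtualDiskInfo`, `shared_disk`, etc.) are illustrative assumptions, not identifiers from the patent.

```python
from dataclasses import dataclass

@dataclass
class VirtualDiskInfo:
    """Describes one virtual network disk per the paragraphs above.

    `shared_disk` names the peer disk, expressing the association
    between the first and second virtual network disks."""
    name: str
    disk_type: str      # both disks share the same type, e.g. "network-multipath"
    size_gb: int        # both disks share the same size
    shared_disk: str    # name of the associated peer disk

@dataclass
class CreateVMRequest:
    """Hypothetical shape of the step-201 request message."""
    first_disk: VirtualDiskInfo
    second_disk: VirtualDiskInfo
    physical_disk: str  # identifier of the backing physical network disk

    def is_consistent(self) -> bool:
        # Rules stated above: same type, same size, and mutual
        # cross-references via the shared-disk field.
        a, b = self.first_disk, self.second_disk
        return (a.disk_type == b.disk_type
                and a.size_gb == b.size_gb
                and a.shared_disk == b.name
                and b.shared_disk == a.name)

req = CreateVMRequest(
    first_disk=VirtualDiskInfo("vdisk-a", "network-multipath", 100, "vdisk-b"),
    second_disk=VirtualDiskInfo("vdisk-b", "network-multipath", 100, "vdisk-a"),
    physical_disk="pdisk-1",
)
print(req.is_consistent())  # True
```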
In step 202, the first DPU creates a first virtual network disk according to the information of the first virtual network disk, and establishes an association between the first virtual network disk and the physical network disk.
In step 203, the first DPU sends information of the second virtual network disk and information of the physical network disk to the second DPU.
The second DPU is a backup DPU of the first DPU; the first DPU may be referred to as the primary DPU and the second DPU as the standby DPU. The first DPU and the second DPU are deployed in the same server host.
In an implementation method, before the step 203, the first DPU and the second DPU further establish a communication channel, and perform primary-standby arbitration to obtain an arbitration result, where the arbitration result indicates that the first DPU is a primary DPU and the second DPU is a standby DPU.
In step 204, the second DPU creates a second virtual network disk according to the information of the second virtual network disk, and establishes an association between the second virtual network disk and the physical network disk.
With this scheme, two DPUs in a primary-standby relationship are deployed in one server host, and both DPUs provide network services for the host, which improves the reliability and stability of the server host. The first DPU and the second DPU each create a virtual network disk, and both virtual network disks are bound to the same physical network disk, so the physical network disk can be accessed through the first DPU by accessing the first virtual network disk, or through the second DPU by accessing the second virtual network disk; that is, the physical network disk can be reached through either DPU.
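Steps 201 to 204 can be sketched as follows. The `DPU` class and its methods are a toy model of the exchange between the primary and standby DPUs; all names are assumptions for illustration.

```python
class DPU:
    """Toy DPU holding virtual-disk-to-physical-disk bindings."""
    def __init__(self, role):
        self.role = role      # "primary" or "standby"
        self.bindings = {}    # virtual disk name -> backing physical disk
        self.peer = None      # the other DPU in the same server host

    def handle_request(self, vdisk1, vdisk2, pdisk):
        # Step 202: create the first virtual disk and bind it locally.
        self.bindings[vdisk1] = pdisk
        # Step 203: forward the second disk's info (and the physical
        # disk's info) to the standby DPU.
        self.peer.create_disk(vdisk2, pdisk)

    def create_disk(self, vdisk, pdisk):
        # Step 204: the standby creates its own virtual disk bound to
        # the SAME physical disk, giving a second access path.
        self.bindings[vdisk] = pdisk

primary, standby = DPU("primary"), DPU("standby")
primary.peer = standby
primary.handle_request("vdisk-a", "vdisk-b", "pdisk-1")
# Both virtual disks now resolve to the same physical disk:
print(primary.bindings["vdisk-a"] == standby.bindings["vdisk-b"])  # True
```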
In an implementation method, the request message of step 201 further includes information of the first virtual network card and information of the second virtual network card, where the first virtual network card and the second virtual network card have an association relationship, and the embodiment of fig. 2 may further include the following steps 205 to 207.
In step 205, the first DPU creates a first virtual network card according to the information of the first virtual network card.
In step 206, the first DPU sends the information of the second virtual network card to the second DPU.
In step 207, the second DPU creates a second virtual network card according to the information of the second virtual network card.
The first virtual network card is also called a first virtual network port, and the second virtual network card is also called a second virtual network port.
The information of the first virtual network card includes at least one of a name of the first virtual network card, a type of the first virtual network card, information of the binding group to which the first virtual network card belongs, a binding mode of the first virtual network card, and a shared physical network card type of the first virtual network card. The binding mode of the first virtual network card is a primary-standby mode, and the shared physical network card type of the first virtual network card indicates preferentially selecting different physical network cards.
The information of the second virtual network card includes at least one of a name of the second virtual network card, a type of the second virtual network card, information of the binding group to which the second virtual network card belongs, a binding mode of the second virtual network card, and a shared physical network card type of the second virtual network card. The binding mode of the second virtual network card is a primary-standby mode, and the shared physical network card type of the second virtual network card indicates preferentially selecting different physical network cards.
The name of the first virtual network card is different from the name of the second virtual network card, the type of the first virtual network card is the same as the type of the second virtual network card, the information of the binding group to which the first virtual network card belongs is the same as the information of the binding group to which the second virtual network card belongs, namely the first virtual network card and the second virtual network card belong to the same binding group, the binding mode of the first virtual network card is the same as the binding mode of the first virtual network card, and the shared physical network card type of the first virtual network card is the same as the shared physical network card type of the second virtual network card.
Through the above steps 205 to 207, the first DPU and the second DPU create one virtual network card respectively, so that communication with the outside via the first DPU can be achieved by accessing the first virtual network card, or communication with the outside via the second DPU can be achieved by accessing the second virtual network card, that is, communication with the outside via the different DPUs can be achieved, and use of the different DPUs is achieved.
It should be noted that, the step 203 and the step 206 may be performed in the same step, the step 202 and the step 205 may be performed in the same step, and the step 204 and the step 207 may be performed in the same step, which is not limited in this application.
In a possible implementation method, following step 207, the following step 208 is further included.
In step 208, the first DPU creates and starts a virtual machine.
The virtual machine corresponds to a first virtual network disk and a second virtual network disk, wherein the first virtual network disk is related to the physical network disk through a first DPU, and the second virtual network disk is related to the physical network disk through a second DPU. The virtual machine corresponds to a first virtual network card and a second virtual network card, the first virtual network card is associated with a first DPU, and the second virtual network card is associated with a second DPU.
Based on the scheme, the virtual machine created by the first DPU is associated with two virtual network disks and two virtual network cards, and the virtual machine can access to the physical network disks through the first DPU and the second DPU respectively and can communicate with the outside through the first DPU and the second DPU respectively, so that the stability and the reliability of the service provided by the virtual machine are improved.
In an implementation method, when the NFVI in the DPU needs to be upgraded, the standby DPU may be upgraded first, and then the master DPU may be upgraded. For example, the first DPU receives an upgrade instruction from the management device, where the upgrade instruction indicates that the NFVI is to be upgraded, and the first DPU sends the upgrade instruction to the second DPU; the first DPU and the second DPU contain the same NFVI. After receiving the upgrade instruction, the second DPU upgrades the NFVI in the second DPU, and after the upgrade is finished, the second DPU sends an upgrade completion instruction to the first DPU, where the upgrade completion instruction indicates that the upgrade of the NFVI in the second DPU is completed. After receiving the upgrade completion instruction, the first DPU upgrades the NFVI in the first DPU. For another example, the second DPU receives the upgrade instruction directly from the management device, upgrades the NFVI in the second DPU after receiving it, and after finishing the upgrade sends an upgrade completion instruction to the first DPU indicating that the NFVI in the second DPU has been upgraded; after receiving the upgrade completion instruction, the first DPU upgrades the NFVI in the first DPU. According to this scheme, in the NFVI upgrade process, the NFVI in the second DPU (that is, the standby DPU) is upgraded first, and the NFVI in the first DPU still operates normally at that time, so the execution of the service is not interrupted. After the NFVI in the second DPU is upgraded, the second DPU can be promoted to the master DPU and the service is migrated to the second DPU to continue executing; the NFVI in the first DPU is then upgraded, and the service is never interrupted during this process.
The scheme can achieve lossless upgrade of the NFVI, avoids live migration and batching of virtual machines and restarting of the host, and solves the problems of high complexity, long upgrade time and high cost caused by batch upgrade, live migration and host restart in existing upgrade mechanisms.
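The standby-first upgrade sequence described above can be sketched as a minimal simulation. The class and function names below are hypothetical and only illustrate the order of operations (upgrade the standby, switch over, then upgrade the former master), not a real DPU management interface.

```python
# Minimal simulation of the standby-first ("lossless") NFVI upgrade sequence.
# All names here are hypothetical stand-ins; they show only the ordering that
# keeps the service uninterrupted.

class DPU:
    def __init__(self, name, role, nfvi_version):
        self.name = name
        self.role = role                  # "master" or "standby"
        self.nfvi_version = nfvi_version

def upgrade_nfvi(master, standby, new_version, service_log):
    # 1. Upgrade the standby DPU first; the master keeps serving traffic.
    standby.nfvi_version = new_version
    service_log.append("service running on " + master.name)
    # 2. Promote the upgraded standby to master and migrate the service to it.
    master.role, standby.role = "standby", "master"
    service_log.append("service running on " + standby.name)
    # 3. Upgrade the former master (now standby); service was never interrupted.
    master.nfvi_version = new_version

dpu1 = DPU("DPU1", "master", "v1")
dpu2 = DPU("DPU2", "standby", "v1")
log = []
upgrade_nfvi(dpu1, dpu2, "v2", log)
```

At every step of the sequence, exactly one DPU with a working NFVI carries the service, which is why no virtual machine live migration or host restart is needed.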
In one implementation, when the second DPU determines that the first DPU is faulty, the second DPU is promoted to the master DPU. According to this scheme, because two DPUs are deployed on the server host, when the software or hardware of one of them fails, the other DPU can take over its work, so the services of the server host are not affected, which solves the service interruption problem of the scheme in which only a single DPU is deployed.
Fig. 3 is a flow chart of a communication method according to an embodiment of the present application. The embodiment of fig. 3 is a specific implementation of the embodiment of fig. 2 described above. This embodiment includes a configuration flow of the DPU (involving steps 301 to 309 below) and a creation flow of the virtual machine (involving steps 310 to 317 below).
The method comprises the following steps:
In step 301, DPU1 installs cloud software and deploys dpumgr1.
dpumgr1 is a management module of DPU1 for performing master-standby arbitration between DPU1 and DPU2 and for establishing a communication channel between dpumgr1 and dpumgr2 for information interaction between DPU1 and DPU2, such as a management message that the cloud software of DPU1 may send to the cloud software of DPU2.
dpumgr1 may be a physical hardware module or a logical functional unit such as an application.
In step 302, DPU2 installs cloud software and deploys dpumgr2.
dpumgr2 is a management module of DPU2 for performing master-standby arbitration between DPU2 and DPU1 and for establishing a communication channel between dpumgr2 and dpumgr1.
dpumgr2 may be a physical hardware module or a logical functional unit such as an application.
The order between the above steps 301 and 302 is not limited.
The names dpumgr1 and dpumgr2 are only examples, and other names may be substituted in practical applications.
In step 303, a communication channel is established between dpumgr1 and dpumgr2, and the master-standby arbitration is completed.
For convenience of explanation, DPU1 is taken as the master DPU and DPU2 as the standby DPU (also referred to as the slave DPU); DPU1 and DPU2 are in a master-standby relationship.
The communication protocol between dpumgr1 and dpumgr2 may be the user datagram protocol (UDP), the transmission control protocol (TCP), or the like.
Two paths can be established between dpumgr1 and dpumgr2 (see fig. 4):
Path 1, also called the external path, is a path that dpumgr1 and dpumgr2 establish through their egress ports.
Path 2, also called the internal path, is a path established by dpumgr1 and dpumgr2 through a program dpud on the server host. That is, a management program dpud is deployed on the server host; dpud establishes communication with dpumgr1 and dpumgr2 over peripheral component interconnect express (PCIE) interfaces, respectively, and forwards information between dpumgr1 and dpumgr2. The name dpud is merely an example, and other names may be substituted in practical applications.
One implementation of completing the master-standby arbitration between dpumgr1 and dpumgr2 is as follows: dpumgr1 and dpumgr2 each attempt to acquire a distributed lock provided by the VIM or by dpud; the DPU whose dpumgr acquires the distributed lock becomes the master DPU, and the other DPU becomes the standby DPU. For example, if dpumgr1 acquires the distributed lock, DPU1 is the master DPU and DPU2 is the standby DPU. Optionally, after the master-standby arbitration is completed, heartbeat keep-alive may be added between dpumgr1 and dpumgr2: one heartbeat is sent every T seconds, and n consecutive missed heartbeats trigger the master-standby arbitration to be performed again according to the above method, where the values of T and n can be set as required.
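The lock-based arbitration and heartbeat supervision above can be sketched as follows. The `DistributedLock` class is a hypothetical stand-in for the lock service provided by the VIM or by dpud, and the function names are illustrative only.

```python
# Sketch of master-standby arbitration: whichever manager acquires the
# distributed lock becomes the master; n consecutive missed heartbeats
# release the lock and trigger re-arbitration. Hypothetical names throughout.

class DistributedLock:
    def __init__(self):
        self.holder = None

    def try_acquire(self, owner):
        # Atomic in a real lock service; simplified here.
        if self.holder is None:
            self.holder = owner
            return True
        return False

    def release(self):
        self.holder = None

def arbitrate(lock, dpumgrs):
    """Return (master, standbys) after the managers race for the lock."""
    master = None
    for mgr in dpumgrs:
        if lock.try_acquire(mgr):
            master = mgr
    standbys = [m for m in dpumgrs if m != master]
    return master, standbys

def supervise(lock, missed_heartbeats, n, dpumgrs):
    """n consecutive missed heartbeats -> release the lock and re-arbitrate
    among the managers that are still reachable."""
    if missed_heartbeats >= n:
        lock.release()
        return arbitrate(lock, dpumgrs)
    return lock.holder, [m for m in dpumgrs if m != lock.holder]

lock = DistributedLock()
master, standbys = arbitrate(lock, ["dpumgr1", "dpumgr2"])
```

With both managers alive, the first to reach the lock (here dpumgr1) wins; if its heartbeats stop, the surviving manager re-arbitrates and takes over as master.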
In step 304, DPU1 (i.e., the master DPU) obtains the resource information of the server host, the resource information of DPU1, and the resource information of DPU 2.
The resource information of the server host includes, but is not limited to: information of CPU resources, information of storage resources and information of disk resources. The resource information of DPU1 includes, but is not limited to: information on virtual storage resources of DPU1, information on virtual network resources of DPU 1. The resource information of DPU2 includes, but is not limited to: information on virtual storage resources of DPU2, information on virtual network resources of DPU 2.
Wherein, DPU1 obtains the resource information of DPU2 through the communication channel between dpumgr1 and dpumgr 2.
In step 305, DPU1 sends the resource information of the server host, the resource information of DPU1, and the resource information of DPU2 to the VIM.
In step 306, the VIM sends configuration information to DPU1.
The VIM determines configuration information for the DPU1 and the DPU2 according to the received resource information of the server host, the resource information of the DPU1, and the resource information of the DPU2, where the configuration information includes physical network information and logical network information.
In step 307, DPU1 sends configuration information to DPU 2.
The configuration information is the configuration information received by DPU1 from VIM in step 306.
In step 308, DPU1 performs network configuration according to the configuration information.
In step 309, DPU2 performs network configuration according to the configuration information.
The order between the steps 307 and 308 is not limited, and the order between the steps 308 and 309 is not limited.
In step 310, the VNF sends a request message to the VNFM.
The request message is for requesting creation of a virtual machine, and includes VNF description information.
The VNF description information includes multi-path shared disk information and multi-virtual network card binding information.
Wherein the multipath shared disk information includes information of the virtual network disk 1 and information of the virtual network disk 2. The virtual network disk 1 and the virtual network disk 2 are two different virtual network disks corresponding to the same physical network disk, namely, the virtual network disk 1 and the virtual network disk 2 are both virtual network disks and point to the same physical network disk.
The information of the virtual network disk 1 includes at least one of the name of the virtual network disk 1, the type of the virtual network disk 1, the size of the virtual network disk 1, and the information of the shared disk. The information of the shared disk may be the name of the virtual network disk 2, and the type of the virtual network disk 1 is a network multipath disk.
The information of the virtual network disk 2 includes at least one of the name of the virtual network disk 2, the type of the virtual network disk 2, the size of the virtual network disk 2, and the information of the shared disk. The information of the shared disk may be the name of the virtual network disk 1, and the type of the virtual network disk 2 is a network multipath disk.
The type of the virtual network disk 1 is the same as the type of the virtual network disk 2, and the size of the virtual network disk 1 is the same as the size of the virtual network disk 2.
Multipath shared disk information is described below in connection with one example.
Wherein, external_volumes represents the multipath shared disk information. The name of the virtual network disk 1 is vol_1, the type of the virtual network disk 1 is network_multipath_volume, the size of the virtual network disk 1 is 463 gigabytes (GB), and the shared disk information of the virtual network disk 1 is vol_2. The name of the virtual network disk 2 is vol_2, the type of the virtual network disk 2 is network_multipath_volume, the size of the virtual network disk 2 is 463GB, and the shared disk information of the virtual network disk 2 is vol_1.
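The example can be written out as a descriptor fragment. The structure below is a hypothetical reconstruction expressed as a Python dict — the original descriptor syntax (e.g. a VNFD fragment) is not reproduced in this text — using only the field values given above.

```python
# Hypothetical reconstruction of the external_volumes fragment described above.
# Field names (volume_name, volume_type, size, shared_volume) are assumptions;
# the values come from the text.

external_volumes = {
    "vol_1": {
        "volume_name": "vol_1",
        "volume_type": "network_multipath_volume",
        "size": "463GB",
        "shared_volume": "vol_2",   # each disk names its peer as the shared disk
    },
    "vol_2": {
        "volume_name": "vol_2",
        "volume_type": "network_multipath_volume",
        "size": "463GB",
        "shared_volume": "vol_1",
    },
}

def is_valid_multipath_pair(vols, a, b):
    """Per the constraints in the text: different names, same type and size,
    and the two disks must reference each other as shared disks."""
    va, vb = vols[a], vols[b]
    return (va["volume_name"] != vb["volume_name"]
            and va["volume_type"] == vb["volume_type"]
            and va["size"] == vb["size"]
            and va["shared_volume"] == vb["volume_name"]
            and vb["shared_volume"] == va["volume_name"])
```

The mutual shared_volume references are what let the two virtual network disks be bound to the same physical network disk later in the flow.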
The multi-virtual network card binding information comprises information of the virtual network card 1 and information of the virtual network card 2. The virtual network card 1 is also referred to as a virtual network port 1, and the virtual network card 2 is also referred to as a virtual network port 2.
The information of the virtual network card 1 includes at least one of the name of the virtual network card 1, the type of the virtual network card 1, the information of the binding group to which the virtual network card 1 belongs, the binding mode of the virtual network card 1, and the shared physical network card type of the virtual network card 1. The binding mode of the virtual network card 1 is the active-standby mode, and the shared physical network card type of the virtual network card 1 indicates that different physical network cards are preferentially selected.
The information of the virtual network card 2 includes at least one of the name of the virtual network card 2, the type of the virtual network card 2, the information of the binding group to which the virtual network card 2 belongs, the binding mode of the virtual network card 2, and the shared physical network card type of the virtual network card 2. The binding mode of the virtual network card 2 is the active-standby mode, and the shared physical network card type of the virtual network card 2 indicates that different physical network cards are preferentially selected.
The name of the virtual network card 1 is different from the name of the virtual network card 2, the type of the virtual network card 1 is the same as the type of the virtual network card 2, the information of the binding group to which the virtual network card 1 belongs is the same as the information of the binding group to which the virtual network card 2 belongs, the binding mode of the virtual network card 1 is the same as the binding mode of the virtual network card 2, and the shared physical network card type of the virtual network card 1 is the same as the shared physical network card type of the virtual network card 2.
The following describes the multi-virtual network card binding information in connection with an example.
Wherein, demo_vdu_port1 represents the information of the virtual network card 1, and demo_vdu_port2 represents the information of the virtual network card 2. The name of the virtual network card 1 is vNic1, and the name of the virtual network card 2 is vNic2. The type of both virtual network cards is vnfd.net.port, the binding group to which both belong is bond_demo_1, the binding mode of both is the active-standby mode, and the shared physical network card type of both is private, where private indicates that the virtual network cards 1 and 2 preferentially select different physical network cards.
In the above example, the VDU represents a virtualized deployment unit (virtualisation deployment unit), which may support the deployment of a subset of VNFs and the description of operational behavior.
In other application scenarios, bonding_mode may also take the value of a link aggregation control protocol (LACP) load balancing mode, and phy_nic_type may also take the value share, which indicates that the virtual network cards 1 and 2 preferentially select the same physical network card.
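As with the disk example, the binding information above can be sketched as a descriptor fragment. The dict form and field names below are assumptions; the values (vNic1, vNic2, vnfd.net.port, bond_demo_1, active-standby, private) are taken from the text.

```python
# Hypothetical reconstruction of the multi-virtual-network-card binding
# information described above. "private" means the two virtual network cards
# prefer different physical network cards; "share" would mean they prefer the
# same one.

demo_vdu_port1 = {
    "name": "vNic1",
    "type": "vnfd.net.port",
    "bond_group": "bond_demo_1",
    "bonding_mode": "active-standby",   # could also be an LACP load-balancing mode
    "phy_nic_type": "private",
}
demo_vdu_port2 = {
    "name": "vNic2",
    "type": "vnfd.net.port",
    "bond_group": "bond_demo_1",
    "bonding_mode": "active-standby",
    "phy_nic_type": "private",
}

def is_valid_bond_pair(p1, p2):
    """Per the constraints in the text: names must differ; type, binding group,
    binding mode and shared physical network card type must match."""
    return (p1["name"] != p2["name"]
            and all(p1[k] == p2[k]
                    for k in ("type", "bond_group", "bonding_mode", "phy_nic_type")))
```

Sharing the same bond_group is what lets the network card binding software in the virtual machine later present the two virtual network cards as one target port.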
In step 311, the VNFM sends a request message to the VIM.
The request message is used for requesting creation of the virtual machine, and the request message includes VNF description information and resource requirement information, where the resource requirement information includes, for example, memory resource requirement information, computing resource requirement information, network resource requirement information, and the like.
In step 312, the VIM selects a server host according to the resource requirement information and creates a physical network disk.
The VIM selects, according to the resource requirement information, a server host that can satisfy it; the VIM subsequently requests that server host to create the virtual machine.
The VIM creates a physical network disk according to the VNF description information, and the size of the physical network disk is the same as the size of the virtual network disk described in the VNF description information. For example, in the foregoing example, the size of the virtual network disk is 463GB, so the size of the physical network disk is also 463GB.
In step 313, the VIM sends a request message to DPU1.
The request message is used for requesting creation of the virtual machine, and includes VNF description information and information of the physical network disk.
The information of the physical network disk includes, for example, path information of the physical network disk, and the like.
In step 314, DPU1 creates the virtual network disk 1 according to the VNF description information, establishes an association between the virtual network disk 1 and the physical network disk, and creates the virtual network card 1.
Specifically, DPU1 mounts the physical network disk into DPU1 and then associates the physical network disk with PCI-E device 1 in DPU1; PCI-E device 1 corresponds to the virtual network disk 1 and is passed through to the virtual machine once the virtual machine is subsequently created.
DPU1 binds the virtual network card 1 to PCI-E device 2 in DPU1; PCI-E device 2 corresponds to the virtual network card 1 and is passed through to the virtual machine once the virtual machine is subsequently created.
PCI-E device 1 is different from PCI-E device 2.
In step 315, DPU1 sends information of virtual network disk 2, information of virtual network card 2, and information of physical network disk to DPU 2.
In step 316, DPU2 creates the virtual network disk 2, establishes an association between the virtual network disk 2 and the physical network disk, and creates the virtual network card 2.
The method of establishing the association between the virtual network disk 2 and the physical network disk by the DPU2 is similar to the method in step 314, and will not be described in detail.
In step 317, DPU1 creates and starts a virtual machine.
The virtual machine corresponds to the virtual network disk 1 and the virtual network disk 2; the virtual network disk 1 is associated to the physical network disk through DPU1, and the virtual network disk 2 is likewise associated to the physical network disk through DPU2. Multipath software in the virtual machine (multipath as shown in fig. 4) may present the virtual network disk 1 and the virtual network disk 2 to applications as one target disk (/dev/vda as shown in fig. 4); applications on the virtual machine access this target disk, which can connect to the same physical network disk through two paths.
The virtual machine corresponds to the virtual network card 1 and the virtual network card 2; the virtual network card 1 comes from DPU1, and the virtual network card 2 comes from DPU2. Network card binding software in the virtual machine (bond as shown in fig. 4) may present the virtual network card 1 and the virtual network card 2 to applications as one target network card (eth as shown in fig. 4, also referred to as a target port). Applications in the virtual machine access services through this target network card, which may carry traffic through the virtual network card 1 via DPU1, or through the virtual network card 2 via DPU2.
By the scheme, the virtual machine can access the physical network disk through multiple paths and access the service through multiple virtual network cards by deploying the two DPUs which are the main and standby DPUs, and reliability and stability are improved.
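The failover behaviour the multipath layer provides here can be sketched as follows. The classes are hypothetical stand-ins for the multipath software, not real multipath code; the bond layer behaves analogously for the two virtual network cards.

```python
# Sketch of how the multipath layer in the virtual machine presents two
# per-DPU devices as a single target disk and fails over when the active
# path (e.g. via DPU1) becomes unavailable. Hypothetical classes only.

class Path:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def read(self):
        if not self.healthy:
            raise IOError(self.name + " unavailable")
        return "data via " + self.name

class MultipathDevice:
    """Presents several paths as one target disk (like /dev/vda in fig. 4)."""
    def __init__(self, paths):
        self.paths = paths

    def read(self):
        for path in self.paths:      # try the active path first,
            try:                     # then fall back to the standby path
                return path.read()
            except IOError:
                continue
        raise IOError("all paths failed")

vda = MultipathDevice([Path("virtual network disk 1 (DPU1)"),
                       Path("virtual network disk 2 (DPU2)")])
before = vda.read()
vda.paths[0].healthy = False         # the path through DPU1 fails
after = vda.read()                   # the application still reads its data
```

The application only ever sees the single target device, so a DPU failure changes which path serves the I/O but not the interface the application uses.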
In an implementation method, when the NFVI in the DPU needs to be upgraded, the standby DPU may be upgraded first, and then the master DPU may be upgraded. For example, when DPU1 receives an upgrade instruction from the VIM, it notifies DPU2 to upgrade; after DPU2 receives the upgrade instruction, it closes its local port and then completes the upgrade of the NFVI in DPU2. After the NFVI upgrade in DPU2 is completed, DPU2 notifies DPU1 that the upgrade is completed, and DPU2 is promoted to the master DPU; for example, DPU2 may set the bound virtual network card 1 in the corresponding virtual machine to a failed state through the network card driver interface, thereby triggering the virtual machine to use the virtual network card 2 and promoting DPU2 to the master DPU. After determining that the NFVI upgrade in DPU2 is complete, DPU1 upgrades the NFVI in DPU1. According to this scheme, in the NFVI upgrade process, the NFVI in DPU2 (that is, the standby DPU) is upgraded first while the NFVI in DPU1 still operates normally, so the execution of the service is not interrupted; after the NFVI in DPU2 is upgraded, DPU2 can be promoted to the master DPU and the service is migrated to DPU2 to continue executing, and then the NFVI in DPU1 is upgraded, with the service never interrupted during this process. The scheme can achieve lossless upgrade of the NFVI, avoids live migration and batching of virtual machines and restarting of the host, and solves the problems of high complexity, long upgrade time and high cost caused by batch upgrade, live migration and host restart in existing upgrade mechanisms.
In one implementation method, when the master DPU (that is, DPU1) fails, the multipath software (multipath as shown in fig. 4) and the network card binding software (bond as shown in fig. 4) on the virtual machine can automatically switch to the virtual network disk 2 and the virtual network card 2 of DPU2, so no service is lost. When dpumgr2 on DPU2 detects that dpumgr1 on DPU1 is interrupted, it acquires the distributed lock provided by the VIM or by dpud, and DPU2 is promoted to the master DPU. According to this scheme, because two DPUs are deployed on the server host, when the software or hardware of one of them fails, the other DPU can take over its work, so the services of the server host are not affected, which solves the service interruption problem of the scheme in which only a single DPU is deployed.
Fig. 5 is a flow chart of a communication method according to an embodiment of the present application. The method comprises the following steps:
In step 501, the management device sends a request message to the first DPU.
The management device may be, for example, the VIM device shown in fig. 1, or may be another management device, which is not limited in this application.
The request message requests creation of a virtual machine, and the request message includes information of the first virtual network disk and information of the physical network disk.
In step 502, the first DPU creates a first virtual network disk according to the information of the first virtual network disk, and establishes an association between the first virtual network disk and a physical network disk.
In step 503, the management device sends information of the second virtual network disk and information of the physical network disk to the second DPU.
The second DPU is a backup DPU for the first DPU. The first DPU may be referred to as a master DPU and the second DPU may be referred to as a slave DPU. The first DPU and the second DPU are deployed in the same server host.
In step 504, the second DPU creates a second virtual network disk according to the information of the second virtual network disk, and establishes an association between the second virtual network disk and the physical network disk.
For a specific implementation method of the information of the first virtual network disk and the information of the second virtual network disk, reference may be made to the related description in the embodiment of fig. 2.
According to the scheme, two DPUs which are in a main-standby relation are deployed in one server host, and the two DPUs provide network services for the server host, so that the reliability and stability of the server host can be improved. The first DPU and the second DPU respectively create a virtual network disk, and the two virtual network disks are bound to the same physical network disk, so that the access to the physical network disk through the first DPU can be realized through accessing the first virtual network disk, or the access to the physical network disk through the second DPU can be realized through accessing the second virtual network disk, namely, the access to the physical network disk through different DPUs can be realized, and the use of different DPUs is realized.
In an implementation method, the embodiment of fig. 5 may further include the following steps 505 to 508.
In step 505, the management device sends information of the first virtual network card to the first DPU.
In one implementation method, the step 505 is combined with the step 501 into one step, that is, the request message of the step 501 carries the information of the first virtual network card.
In step 506, the first DPU creates a first virtual network card according to the information of the first virtual network card.
In step 507, the management device sends information of the second virtual network card to the second DPU.
In step 508, the second DPU creates a second virtual network card according to the information of the second virtual network card.
The first virtual network card is also called a first virtual network port, and the second virtual network card is also called a second virtual network port. The first virtual network card and the second virtual network card have an association relationship.
For a specific implementation method of the information of the first virtual network card and the information of the second virtual network card, reference may be made to the related description in the embodiment of fig. 2.
Through the steps 505 to 508, the first DPU and the second DPU respectively create one virtual network card, so that communication with the outside via the first DPU can be achieved by accessing the first virtual network card, or communication with the outside via the second DPU can be achieved by accessing the second virtual network card, that is, communication with the outside via different DPUs can be achieved, and use of different DPUs is achieved.
It should be noted that, the step 507 and the step 503 may be performed in the same step, the step 502 and the step 506 may be performed in the same step, and the step 504 and the step 508 may be performed in the same step, which is not limited in this application.
In a possible implementation method, after step 508, the following step 509 is further included.
In step 509, the first DPU creates and starts a virtual machine.
The virtual machine corresponds to a first virtual network disk and a second virtual network disk, wherein the first virtual network disk is related to the physical network disk through a first DPU, and the second virtual network disk is related to the physical network disk through a second DPU. The virtual machine corresponds to a first virtual network card and a second virtual network card, the first virtual network card is associated with a first DPU, and the second virtual network card is associated with a second DPU.
Based on the scheme, the virtual machine created by the first DPU is associated with two virtual network disks and two virtual network cards, and the virtual machine can access to the physical network disks through the first DPU and the second DPU respectively and can communicate with the outside through the first DPU and the second DPU respectively, so that the stability and the reliability of the service provided by the virtual machine are improved.
In an implementation method, when the NFVI in the DPU needs to be upgraded, the standby DPU may be upgraded first, and then the master DPU may be upgraded. For example, the management device sends a first upgrade instruction to the second DPU, where the first upgrade instruction indicates that the NFVI is to be upgraded; the second DPU upgrades the NFVI in the second DPU after receiving the first upgrade instruction, and sends an upgrade completion instruction to the management device after the upgrade is finished, where the upgrade completion instruction indicates that the NFVI in the second DPU has been upgraded. After receiving the upgrade completion instruction, the management device sends a second upgrade instruction to the first DPU, the second upgrade instruction instructs the upgrade of the NFVI, and the first DPU then upgrades the NFVI in the first DPU. According to this scheme, in the NFVI upgrade process, the NFVI in the second DPU (that is, the standby DPU) is upgraded first while the NFVI in the first DPU still operates normally, so the execution of the service is not interrupted; after the NFVI in the second DPU is upgraded, the second DPU can be promoted to the master DPU and the service is migrated to the second DPU to continue executing, and then the NFVI in the first DPU is upgraded, with the service never interrupted during this process. The scheme can achieve lossless upgrade of the NFVI, avoids live migration and batching of virtual machines and restarting of the host, and solves the problems of high complexity, long upgrade time and high cost caused by batch upgrade, live migration and host restart in existing upgrade mechanisms.
In an implementation method, when the management device determines that the first DPU fails, a notification message is sent to the second DPU, the notification message notifies the second DPU to upgrade to the master DPU, and the second DPU upgrades to the master DPU after receiving the notification message. According to the scheme, as the two DPUs are deployed on the server host, when software or hardware of one of the two DPUs fails, the other DPU can replace the software or hardware to work, so that the service of the server host is not affected, and the problem of service interruption in the scheme of deploying only a single DPU is solved.
With respect to the embodiment of fig. 5, there may also be a specific implementation method similar to the embodiment of fig. 3, which is not described herein.
In the above embodiments of the present application, two DPUs in a master-standby relationship are deployed on the same server host. Of course, three or more DPUs may be deployed in practical applications, with one DPU being the master DPU and the others being standby DPUs; the implementation process of the specific scheme is similar to the embodiments of fig. 2, 3 or 5 and is not repeated.
In an implementation method, in the embodiments of the present application, multiple DPUs deployed on the same server host may also be used for load balancing, that is, the multiple DPUs jointly share the traffic of one or more services, so as to avoid the increased failure rate caused by overloading a single DPU.
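One way to realize such load sharing is a deterministic per-service mapping; the CRC32-based policy below is an assumption for illustration, as the embodiment does not fix a particular balancing algorithm:

```python
# Illustrative sketch of spreading service traffic across several DPUs.
import zlib

def pick_dpu(service_id: str, dpus: list) -> str:
    # Deterministic per-service choice: the same service always lands on
    # the same DPU, and CRC32 spreads distinct services roughly evenly,
    # so no single DPU carries all the traffic.
    return dpus[zlib.crc32(service_id.encode()) % len(dpus)]


dpus = ["DPU1", "DPU2"]
assignments = {s: pick_dpu(s, dpus) for s in ("svc-a", "svc-b", "svc-c")}
```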
In one implementation, the "virtual machine" in the embodiments of fig. 2 or 3 described above may also be replaced with a "container". A container is an abstraction at the application layer that packages code and its dependencies together. Multiple containers may run on the same server host, sharing the operating system kernel with one another, with each container running as an isolated process in user space. Containers typically occupy less space than virtual machines and can host more applications.
It will be appreciated that, to implement the functions of the above embodiments, the first DPU, the second DPU, or the management device includes corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art will readily appreciate that the units and method steps of the examples described in connection with the embodiments disclosed herein may be implemented by hardware or by a combination of hardware and computer software. Whether a function is implemented by hardware or by computer-software-driven hardware depends on the particular application scenario and the design constraints of the solution.
Fig. 6 and fig. 7 are schematic structural diagrams of possible communication apparatuses according to embodiments of the present application. These communication apparatuses may be used to implement the functions of the first DPU, the second DPU, or the management device in the method embodiments of fig. 2, 3, or 5, and can therefore also provide the advantages of those method embodiments. In the embodiments of the present application, the communication apparatus may be the first DPU, the second DPU, or the management device, or may be a module (such as a chip) applied to the first DPU, the second DPU, or the management device.
The communication device 600 shown in fig. 6 includes a processing unit 610 and a transceiving unit 620. The communication device 600 is configured to implement the functions of the first DPU or the second DPU in the above-described method embodiment.
When the embodiment of fig. 6 is used to implement the operation of the first DPU in the embodiment of fig. 2 or fig. 3, the transceiver unit 620 is configured to receive a request message, where the request message requests to establish a virtual machine, the request message includes information of a first virtual network disk, information of a second virtual network disk, and information of a physical network disk, and the first virtual network disk and the second virtual network disk have an association relationship; a processing unit 610, configured to create the first virtual network disk according to the information of the first virtual network disk, and establish an association between the first virtual network disk and the physical network disk; the transceiver unit 620 is further configured to send the information of the second virtual network disk and the information of the physical network disk to a second DPU, where the information of the second virtual network disk and the information of the physical network disk are used to create the second virtual network disk and establish an association between the second virtual network disk and the physical network disk, and the second DPU is a backup DPU of the first DPU, and the first DPU and the second DPU belong to the same server host.
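The first DPU's handling of the request message can be sketched as follows; the dictionary-based message layout and all names are assumptions for illustration, not the patent's actual data structures:

```python
# Sketch of the first (main) DPU's request handling: create its own
# virtual network disk, bind it to the physical network disk, and build
# the message forwarded to the standby DPU.

def handle_request(first_dpu: dict, request: dict) -> dict:
    # Create the first virtual network disk on this DPU and associate it
    # with the shared physical network disk.
    first_dpu["virtual_disks"][request["first_vdisk"]] = request["physical_disk"]

    # Message forwarded to the second (standby) DPU so it can create the
    # second virtual network disk bound to the *same* physical disk.
    return {
        "second_vdisk": request["second_vdisk"],
        "physical_disk": request["physical_disk"],
    }


dpu1 = {"virtual_disks": {}}
req = {"first_vdisk": "vdisk-1", "second_vdisk": "vdisk-2",
       "physical_disk": "pdisk-9"}
forwarded = handle_request(dpu1, req)
```

Because both virtual network disks end up bound to the same physical network disk, the physical disk remains reachable through whichever DPU is currently the main one.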
In a possible implementation method, the request message further includes information of a first virtual network card and information of a second virtual network card, where the first virtual network card and the second virtual network card have an association relationship; the processing unit 610 is further configured to create the first virtual network card according to the information of the first virtual network card; the transceiver unit 620 is further configured to send information of the second virtual network card to the second DPU, where the information of the second virtual network card is used to create the second virtual network card.
In a possible implementation method, the processing unit 610 is further configured to create and start a virtual machine; the virtual machine corresponds to the first virtual network disk and the second virtual network disk, and the virtual machine corresponds to the first virtual network card and the second virtual network card.
In a possible implementation, the processing unit 610 is further configured to establish a communication channel between the first DPU and the second DPU.
In a possible implementation method, the processing unit 610 is further configured to obtain an arbitration result, where the arbitration result indicates that the first DPU is a primary DPU and the second DPU is a standby DPU.
In a possible implementation method, the transceiver unit 620 is further configured to receive an upgrade completion instruction from the second DPU; the processing unit 610 is further configured to upgrade the NFVI in the first DPU, where the upgrade completion instruction indicates that the upgrade is completed for the NFVI in the second DPU, and the first DPU and the second DPU contain the same NFVI.
In a possible implementation method, the transceiver unit 620 is further configured to receive an upgrade instruction, where the upgrade instruction indicates to upgrade the NFVI; and sending the upgrade instruction to the second DPU.
When the embodiment of fig. 6 is used to implement the operation of the second DPU in the embodiment of fig. 2 or 3, the transceiver unit 620 is configured to receive the information of the second virtual network disk and the information of the physical network disk from the first DPU, where the second DPU is a backup DPU of the first DPU, and the first DPU and the second DPU belong to the same server host; a processing unit 610, configured to create the second virtual network disk according to the information of the second virtual network disk; establishing the association between the second virtual network disk and the physical network disk; the second virtual network disk has an association relationship with a first virtual network disk in the first DPU, and the first virtual network disk has an association relationship with the physical network disk.
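The receive side on the second DPU can be sketched in the same spirit; the dictionary-based message layout and names are assumptions for illustration:

```python
# Sketch of the standby DPU's handling of the forwarded information: it
# creates the second virtual network disk and binds it to the same
# physical network disk as the first DPU's virtual disk.

def on_disk_info(second_dpu: dict, msg: dict) -> None:
    # Create the second virtual network disk and establish its
    # association with the shared physical network disk.
    second_dpu["virtual_disks"][msg["second_vdisk"]] = msg["physical_disk"]


dpu2 = {"virtual_disks": {}}
on_disk_info(dpu2, {"second_vdisk": "vdisk-2", "physical_disk": "pdisk-9"})
```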
In a possible implementation method, the transceiver unit 620 is further configured to receive information of the second virtual network card from the first DPU; a processing unit 610, configured to create the second virtual network card according to information of the second virtual network card; the second virtual network card has an association relationship with the first virtual network card in the first DPU.
In a possible implementation, the processing unit 610 is configured to establish a communication channel between the first DPU and the second DPU.
In a possible implementation method, the processing unit 610 is configured to obtain an arbitration result, where the arbitration result indicates that the first DPU is a master DPU and the second DPU is a standby DPU.
In a possible implementation method, the transceiver unit 620 is further configured to receive an upgrade instruction from the first DPU or the management device, where the upgrade instruction indicates to upgrade the NFVI, and the first DPU and the second DPU include the same NFVI; a processing unit 610, configured to upgrade the NFVI in the second DPU; the transceiver unit 620 is further configured to send an upgrade completion instruction to the first DPU, where the upgrade completion instruction indicates that the upgrade is completed for the NFVI in the second DPU.
In a possible implementation method, the processing unit 610 is configured to determine that the first DPU fails and upgrade to the master DPU.
When the embodiment of fig. 6 is used to implement the operation of the management device in the embodiment of fig. 5, the transceiver unit 620 is configured to send a request message to the first DPU, where the request message requests to establish a virtual machine, the request message includes information of a first virtual network disk and information of a physical network disk, and the information of the first virtual network disk and the information of the physical network disk are used to create the first virtual network disk and establish an association between the first virtual network disk and the physical network disk; transmitting information of a second virtual network disk and information of the physical network disk to a second DPU, wherein the information of the second virtual network disk and the information of the physical network disk are used for creating the second virtual network disk and establishing association between the second virtual network disk and the physical network disk; the first virtual network disk and the second virtual network disk have an association relationship, the second DPU is a backup DPU of the first DPU, and the first DPU and the second DPU belong to the same server host.
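The fig. 5 variant, in which the management device sends the disk information to each DPU directly rather than relaying it through the first DPU, can be sketched as follows; `send` and the message layout are hypothetical:

```python
# Sketch of the management-device variant: the management device itself
# delivers disk information to both DPUs of the same server host.

def provision_vm(send, first_dpu, second_dpu, vm_spec):
    # Both virtual network disks reference the same physical network
    # disk, so the VM's storage is reachable through either DPU.
    send(first_dpu, {"vdisk": vm_spec["first_vdisk"],
                     "physical_disk": vm_spec["physical_disk"]})
    send(second_dpu, {"vdisk": vm_spec["second_vdisk"],
                      "physical_disk": vm_spec["physical_disk"]})


sent = []
provision_vm(lambda dst, msg: sent.append((dst, msg)),
             "DPU1", "DPU2",
             {"first_vdisk": "v1", "second_vdisk": "v2",
              "physical_disk": "p"})
```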
In a possible implementation method, the transceiver unit 620 is further configured to send information of a first virtual network card to the first DPU, where the information of the first virtual network card is used to create the first virtual network card; transmitting information of a second virtual network card to the second DPU, wherein the information of the second virtual network card is used for creating the second virtual network card; the first virtual network card and the second virtual network card have an association relationship.
In a possible implementation method, the transceiver unit 620 is further configured to send a first upgrade instruction to the second DPU, where the first upgrade instruction indicates to upgrade the NFVI; and receiving an upgrade completion instruction from the second DPU, and sending a second upgrade instruction to the first DPU, wherein the second upgrade instruction indicates upgrading of the NFVI.
In a possible implementation method, the processing unit 610 is configured to determine that the first DPU fails, and send a notification message to the second DPU, where the notification message notifies the second DPU to upgrade to the master DPU.
The more detailed descriptions of the processing unit 610 and the transceiver unit 620 may be directly obtained by referring to the related descriptions in the above method embodiments, and are not repeated herein.
The communication device 700 shown in fig. 7 includes a processor 710 and an interface circuit 720. Processor 710 and interface circuit 720 are coupled to each other. It is understood that the interface circuit 720 may be a transceiver or an input-output interface. Optionally, the communication device 700 may further comprise a memory 730 for storing instructions to be executed by the processor 710 or for storing input data required by the processor 710 to execute instructions or for storing data generated after the processor 710 executes instructions.
When the communication device 700 is used to implement the above-mentioned method embodiment, the processor 710 is configured to implement the function of the above-mentioned processing unit 610, and the interface circuit 720 is configured to implement the function of the above-mentioned transceiver unit 620.
It is to be appreciated that the processor in the embodiments of the present application may be a central processing unit (central processing unit, CPU), or may be another general-purpose processor, a digital signal processor (digital signal processor, DSP), an application-specific integrated circuit (application specific integrated circuit, ASIC), a field-programmable gate array (field programmable gate array, FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The general-purpose processor may be a microprocessor or any conventional processor.
The method steps in the embodiments of the present application may be implemented by hardware, or may be implemented by a processor executing software instructions. The software instructions may consist of corresponding software modules, which may be stored in random access memory, flash memory, read-only memory, programmable read-only memory, erasable programmable read-only memory, electrically erasable programmable read-only memory, a register, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Alternatively, the storage medium may be integrated into the processor. The processor and the storage medium may reside in an ASIC. In addition, the ASIC may reside in a base station or terminal. The processor and the storage medium may also reside as discrete components in a base station or terminal.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When software is used, the embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of the present application are performed in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, a base station, a user equipment, or other programmable apparatus. The computer program or instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer program or instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired or wireless means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium, for example, a floppy disk, a hard disk, or a magnetic tape; an optical medium, for example, a digital video disc; or a semiconductor medium, for example, a solid-state drive. The computer-readable storage medium may be a volatile or non-volatile storage medium, or may include both volatile and non-volatile types of storage media.
In the various embodiments of the application, if there is no specific description or logical conflict, terms and/or descriptions between the various embodiments are consistent and may reference each other, and features of the various embodiments may be combined to form new embodiments according to their inherent logical relationships.
In the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: only A, both A and B, or only B, where A and B may be singular or plural. In the text of the present application, the character "/" generally indicates an "or" relationship between the associated objects; in the formulas of the present application, the character "/" indicates a "division" relationship between the associated objects.
It will be appreciated that the various numerals referred to in the embodiments of the present application are merely for ease of description and are not intended to limit the scope of the embodiments. The sequence numbers of the processes do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic.

Claims (21)

1. A method of communication, comprising:
a first data processing unit (DPU) receives a request message, wherein the request message requests to establish a virtual machine, the request message comprises information of a first virtual network disk, information of a second virtual network disk, and information of a physical network disk, and the first virtual network disk and the second virtual network disk have an association relationship;
the first DPU creates a first virtual network disk according to the information of the first virtual network disk, and establishes the association between the first virtual network disk and the physical network disk;
the first DPU sends the information of the second virtual network disk and the information of the physical network disk to a second DPU, the information of the second virtual network disk and the information of the physical network disk are used for creating the second virtual network disk and establishing the association between the second virtual network disk and the physical network disk, the second DPU is a backup DPU of the first DPU, and the first DPU and the second DPU belong to the same server host.
2. The method of claim 1, wherein the request message further includes information of a first virtual network card and information of a second virtual network card, the first virtual network card and the second virtual network card having an association;
The method further comprises the steps of:
the first DPU creates the first virtual network card according to the information of the first virtual network card;
and the first DPU sends the information of the second virtual network card to the second DPU, and the information of the second virtual network card is used for creating the second virtual network card.
3. The method of claim 2, wherein the method further comprises:
the first DPU creates and starts a virtual machine;
the virtual machine corresponds to the first virtual network disk and the second virtual network disk, and the virtual machine corresponds to the first virtual network card and the second virtual network card.
4. A method according to any one of claims 1 to 3, wherein the method further comprises:
the first DPU establishes a communication channel between the first DPU and the second DPU.
5. The method of any one of claims 1 to 4, wherein the method further comprises:
the first DPU obtains an arbitration result, the arbitration result indicates that the first DPU is a main DPU, and the second DPU is a standby DPU.
6. The method of any one of claims 1 to 5, wherein the method further comprises:
when the first DPU receives an upgrade completion instruction from the second DPU, the first DPU upgrades the network functions virtualization infrastructure (NFVI) in the first DPU, wherein the upgrade completion instruction indicates that the upgrade of the NFVI in the second DPU is completed, and the first DPU and the second DPU contain the same NFVI.
7. The method of claim 6, wherein the method further comprises:
the first DPU receives an upgrade instruction, wherein the upgrade instruction indicates that the NFVI is upgraded;
and the first DPU sends the upgrade instruction to a second DPU.
8. A method of communication, comprising:
a second data processing unit (DPU) receives information of a second virtual network disk and information of a physical network disk from a first DPU, wherein the second DPU is a backup DPU of the first DPU, and the first DPU and the second DPU belong to the same server host;
the second DPU creates a second virtual network disk according to the information of the second virtual network disk;
the second DPU establishes the association of the second virtual network disk and the physical network disk;
the second virtual network disk has an association relationship with a first virtual network disk in the first DPU, and the first virtual network disk has an association relationship with the physical network disk.
9. The method of claim 8, wherein the method further comprises:
the second DPU receives information of a second virtual network card from the first DPU;
the second DPU creates a second virtual network card according to the information of the second virtual network card;
the second virtual network card has an association relationship with the first virtual network card in the first DPU.
10. The method of claim 8 or 9, wherein the method further comprises:
the second DPU establishes a communication channel between the first DPU and the second DPU.
11. The method of any one of claims 8 to 10, wherein the method further comprises:
the second DPU obtains an arbitration result, the arbitration result indicates that the first DPU is a main DPU, and the second DPU is a standby DPU.
12. The method of any one of claims 8 to 11, wherein the method further comprises:
the second DPU receives an upgrade instruction from the first DPU or a management device, the upgrade instruction indicating upgrade of the NFVI, the first DPU and the second DPU including the same NFVI;
the second DPU upgrades the NFVI in the second DPU;
The second DPU sends an upgrade completion instruction to the first DPU, the upgrade completion instruction indicating that an upgrade to the NFVI within the second DPU is complete.
13. The method of any one of claims 8 to 12, wherein the method further comprises:
if the second DPU determines that the first DPU fails, the second DPU upgrades to a main DPU.
14. A method of communication, comprising:
the management equipment sends a request message to a first DPU, wherein the request message requests to establish a virtual machine, the request message comprises information of a first virtual network disk and information of a physical network disk, and the information of the first virtual network disk and the information of the physical network disk are used for establishing the first virtual network disk and establishing association between the first virtual network disk and the physical network disk;
the management device sends information of a second virtual network disk and information of the physical network disk to a second DPU, wherein the information of the second virtual network disk and the information of the physical network disk are used for creating the second virtual network disk and establishing association between the second virtual network disk and the physical network disk;
the first virtual network disk and the second virtual network disk have an association relationship, the second DPU is a backup DPU of the first DPU, and the first DPU and the second DPU belong to the same server host.
15. The method of claim 14, wherein the method further comprises:
the management device sends information of a first virtual network card to the first DPU, wherein the information of the first virtual network card is used for creating the first virtual network card;
the management device sends information of a second virtual network card to the second DPU, wherein the information of the second virtual network card is used for creating the second virtual network card;
the first virtual network card and the second virtual network card have an association relationship.
16. The method of claim 14 or 15, wherein the method further comprises:
the management device sends a first upgrade instruction to a second DPU, wherein the first upgrade instruction indicates that the NFVI is upgraded;
the management device receives an upgrade completion instruction from the second DPU, and then sends a second upgrade instruction to the first DPU, wherein the second upgrade instruction indicates that the NFVI is to be upgraded.
17. The method of any one of claims 14 to 16, wherein the method further comprises:
the management device determines that the first DPU fails, and sends a notification message to the second DPU, wherein the notification message notifies the second DPU to upgrade to the main DPU.
18. A communication device comprising a processor and interface circuitry for receiving signals from other communication devices than the communication device and transmitting to the processor or sending signals from the processor to other communication devices than the communication device, the processor being configured to implement the method of any one of claims 1 to 7, or to implement the method of any one of claims 8 to 13, or to implement the method of any one of claims 14 to 17, by logic circuitry or executing code instructions.
19. A computer program product comprising a computer program which, when executed by a communication device, implements the method of any one of claims 1 to 7, or implements the method of any one of claims 8 to 13, or is for implementing the method of any one of claims 14 to 17.
20. A computer readable storage medium, characterized in that the storage medium has stored therein a computer program or instructions which, when executed by a communication device, implements the method of any of claims 1 to 7, or implements the method of any of claims 8 to 13, or is used to implement the method of any of claims 14 to 17.
21. A communication system comprising a first DPU for implementing the method of any one of claims 1 to 7 and a second DPU for implementing the method of any one of claims 8 to 13.
CN202211092108.9A 2022-09-08 2022-09-08 Communication method, communication device and communication system Pending CN117675583A (en)

Publications (1)

Publication Number Publication Date
CN117675583A 2024-03-08

Family

ID=90085098



Legal Events

Date Code Title Description
PB01 Publication