CN117271426A - Edge multi-host stacking all-in-one machine for separating pipes and pipe separating method - Google Patents


Info

Publication number
CN117271426A
Authority
CN
China
Prior art keywords
host
management
machine
stacking
hosts
Prior art date
Legal status
Pending
Application number
CN202311225962.2A
Other languages
Chinese (zh)
Inventor
段嘉
张军
张波
王硕
Current Assignee
Shenzhen Siteshun Technology Co ltd
Original Assignee
Shenzhen Siteshun Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Siteshun Technology Co ltd filed Critical Shenzhen Siteshun Technology Co ltd
Priority to CN202311225962.2A
Publication of CN117271426A
Legal status: Pending

Classifications

    • H04L67/10: Network arrangements or protocols for supporting network services or applications; protocols in which an application is distributed across nodes in the network
    • G06F11/2007: Error detection or correction of data by redundancy in hardware using active fault-masking, where redundant interconnections or communication control functionality use redundant communication media
    • G06F15/17312: Interprocessor communication using an interconnection network; routing techniques specific to parallel machines, e.g. wormhole, store and forward, shortest path, congestion
    • G06F9/45558: Emulation; virtualisation; hypervisor-specific management and integration aspects
    • G06F9/5083: Allocation of resources; techniques for rebalancing the load in a distributed system
    • G06F2009/45562: Creating, deleting, cloning virtual machine instances
    • G06F2009/45595: Network integration; enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Hardware Design (AREA)
  • Signal Processing (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Computer And Data Communications (AREA)

Abstract

The present disclosure relates to the technical field of all-in-one machines, and specifically to a multi-host stacking all-in-one machine comprising at least two virtual switches and at least three hosts, each host being connected to both virtual switches. Cluster management software is installed and configured on one host, which serves as the management host; applications and services are deployed and run on the other hosts, which serve as working hosts and are added as management objects of the cluster management software. Dividing the multiple hosts into a management host and working hosts achieves an effective separation of management, facilitating the management and use of the working hosts.

Description

Edge multi-host stacking all-in-one machine with separated management, and management separation method
Technical Field
The present disclosure relates to edge computers, and more particularly to an edge multi-host stacking all-in-one machine with separated management and a corresponding management separation method.
Background
With the continuous expansion of the application field of computers, more and more computer systems need to use multiple hosts for distributed computing and data processing. Conventional multi-host systems require the use of a large amount of space and equipment and are relatively complex to manage.
An edge multi-host stacking all-in-one machine refers to multiple edge computing host devices stacked together to form a single integrated computing system. It combines the computing power and network resources of multiple hosts and manages and controls them through an integrated architecture.
An edge multi-host stacking all-in-one machine typically includes multiple physical hosts that are connected together by way of high-speed interconnects, share resources, and perform collaborative computing. Such all-in-one machines typically have additional network devices, storage devices, and associated management software and operating systems.
The edge multi-host stacking all-in-one machine is widely applicable in the field of edge computing. It can provide higher computing performance and processing power, support large-scale data processing and analysis, and meet demands for low latency, high reliability and flexibility. The technology has potential applications in various fields, such as smart cities, the industrial Internet of Things, autonomous driving and the like.
However, in prior-art edge multi-host stacking all-in-one machines, there is still room to improve the cooperation of multiple hosts.
Disclosure of Invention
In view of the defects in the prior art, one purpose of the present application is to provide an edge multi-host stacking all-in-one machine with separated management, so that the multiple hosts in the machine can work cooperatively and are convenient to manage and use.
In order to achieve the above technical object, the present invention provides an edge multi-host stacking all-in-one machine with separated management, comprising at least two virtual switches and at least three hosts, each host being connected to both virtual switches. Cluster management software is installed and configured on one host, which serves as the management host; applications and services are deployed and run on the other hosts, which serve as working hosts and are added as management objects of the cluster management software.
With this scheme, the host on which the cluster management software is configured serves as the management host, so as to conveniently manage and monitor the other hosts; the other hosts are working hosts, managed and monitored by the management host.
Furthermore, an automated deployment system is provided on the working hosts for installing and deploying the software, programs and the like required on each working host.
Under this scheme, load balancing means that the management host coordinates the load of each working host: when the load on one host is too high, the management host shares the work out to other idle or lightly loaded hosts.
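The load-sharing decision described above can be sketched in shell; the host names and load figures below are illustrative placeholders, not values from the patent.

```shell
# Pick the least-loaded working host from "name load" pairs on stdin.
# A management host could use such a selection to shift work to an idle
# or lightly loaded host (names and numbers here are made up).
least_loaded() { sort -k2 -n | head -n1 | cut -d' ' -f1; }

target=$(printf 'host1 80\nhost2 20\nhost3 55\n' | least_loaded)
echo "$target"   # prints "host2"
```

In a real deployment the load figures would come from the metadata that working hosts report to the management host.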
Further, the automated deployment program includes:
a standardization module, used to standardize the disks and/or networks of the edge multi-host cluster;
an online system, used to connect the multiple hosts in the edge multi-host cluster into a single computer that can be operated uniformly;
an installation execution module, used to complete the installation of the resources required by specific software, systems and the like.
By adopting the technical scheme, the automatic deployment of the working host can be realized.
Further, the online system is a Kubernetes system.
Furthermore, the automated deployment program also comprises a custom menu, which serves as a window for starting functions and through which a user triggers the standardization module, the online system, or the installation execution module.
Further, the specific standard of disk standardization is to divide the disk into two disks, one being the system disk and the other the data disk.
Further, the specific standard of network standardization is as follows: the networks of the multiple working hosts are initialized within one subnet and the IPs are standardized, with one master IP and the rest as working IPs; the IP ending in 255 is used as the master IP, and those ending in 245 to 254 are used as working IPs.
Further, the management host is configured with a cloud-native hybrid management platform for managing the virtual switches, comprising a container management module and a virtual switch management module, wherein the virtual switch management module manages the virtual switches in the cloud-native manner of managing Pods, with a KubeVirt enhancement plug-in installed on the cloud platform.
Further, at least two network cards are configured on each working host, together with a network card binding module for binding the multiple network cards into one logical card, a network card communication module for realizing communication between the network cards, and a path selection module for selecting a path for information transmission.
Further, the network card binding module is implemented through Linux NIC Bonding.
Further, the network card communication module uses the 802.3ad dynamic link aggregation mode.
Further, the path selection module uses a hash policy for path selection.
Further, path selection is implemented at the network layer.
Further, the hash policy has at least 4 selection factors, comprising "source", "target", "address of source" and "address of target", which are configured in the kernel.
Further, the selection factors of the hash policy also comprise encryption, the encryption factor being added by a kernel extension.
Further, the selection factors of the hash policy also comprise an APP ID, the APP ID factor being added by a kernel extension.
Further, the machine also comprises an operation host, which is responsible for the configuration of the whole cluster and is used for operating and controlling the use of the whole machine.
Another technical object of the present invention is to provide a cloud-native hybrid management method for the management-separated edge multi-host stacking all-in-one machine according to any one of claims 1 to 16, comprising the following steps: first, install a Kubernetes cluster; second, install KubeVirt; third, create the virtual switches, where creating a virtual switch in Kubernetes requires defining a virtual machine instance; fourth, manage the virtual switches using the kubectl command line tool or Kubernetes Dashboard; fifth, use the virtual switches: access a virtual switch by creating a virtual switch instance and connecting it to a Service, or expose it to an external network through a NodePort.
Further, the method for installing the Kubernetes cluster comprises the following steps: a Kubernetes cluster is run, created by kubeadm, kops, minikube or other tools.
Further, the method for installing KubeVirt comprises: install KubeVirt on the Kubernetes cluster using the YAML files provided in KubeVirt's GitHub repository, or using a Helm Chart.
Another technical object of the present invention is to provide a device management method for the management-separated edge multi-host stacking all-in-one machine according to any one of claims 1 to 16, the device management method comprising: establishing a decentralized network, constructing a closed-loop mesh structure, configuring a decentralized management protocol, and introducing smart contracts, the smart contracts being used to define and enforce the management rules of the edge all-in-one machine; upper nodes manage all of their logical lower nodes, and each lower node broadcasts its metadata to all upper nodes.
Further, the method for establishing the decentralized network is: A1: selecting a suitable decentralization technology; A2: designing the network topology; A3: configuring nodes and a communication mechanism; A4: implementing protocols and rules; A5: node management and maintenance; A6: security and privacy protection; A7: testing and optimization.
Another technical object of the present invention is to provide a management separation method for an edge multi-host stacking all-in-one machine, characterized by comprising the management-separated edge multi-host stacking all-in-one machine according to any one of claims 1 to 16, wherein two routers are planned, the multiple hosts communicate with the left and right routers respectively, and the routers connect to the external network. One host is selected as the management host, responsible for cluster management and coordination; the other hosts serve as working hosts, in which applications and services are deployed and run, and the ports of the management host face outward, realizing management of the whole machine.
Based on the technical scheme, the invention has at least the following technical effects:
1. The multiple hosts are divided into a management host and working hosts, with the management host managing the working hosts; this provides a distributed architecture for the multi-host stacking all-in-one machine, improving the overall stability and high availability of the system, saving cost, and relieving network bandwidth pressure.
2. A load balancing and fault tolerance mechanism is realized through dual routers and dual power supplies. Multiple network cards are bound into one logical card by network card binding, so that the logical card is unaffected when a single card fails; a network failure occurs only when all network cards fail simultaneously, so the failure rate is small. In addition, compared with switching between a main card and a standby card, the logical card eliminates the switchover step and thereby avoids the information delay, network blocking and similar phenomena that such switching causes.
3. Through the container management module and the virtual switch management module, containers and virtual switches can be managed simultaneously; meanwhile, the cloud-native Pod-management mode gives a faster development and deployment cycle, since containerized applications can be built, tested and deployed more quickly.
4. Through the decentralized management method, the robustness and performance of the whole system are improved, and the problems of single-point failure, bandwidth bottleneck and high delay in centralized management are solved.
Drawings
FIG. 1 is a hardware connection diagram of a management-separated edge multi-host stacking all-in-one machine in an embodiment;
FIG. 2 is a hardware connection diagram of a management-separated edge multi-host stacking all-in-one machine with the operation host omitted in an embodiment.
Detailed Description
The present application is described in further detail below with reference to the accompanying drawings.
Examples: a management-separated edge multi-host stacking all-in-one machine comprises a software part and a hardware part. The hardware part comprises a plurality of hosts; referring to fig. 1, this embodiment takes 5 hosts as an example, one serving as the operation host and another as the management host. All hosts are connected through two virtual switches, i.e., each host is connected to both virtual switches, and the two virtual switches connect to two external networks behind the routers.
The operation host is responsible for the configuration of the whole cluster; the cluster management software is configured on the management host; applications and services are deployed and run in the working hosts; and the remaining hosts are added to the management objects of the cluster management software.
In another embodiment, referring to fig. 2, the management-separated edge multi-host stacking all-in-one machine includes a management host and a plurality of working hosts but no operation host; during actual work, an ordinary computer host may be connected to the edge multi-host stacking all-in-one machine to perform the operation function.
A cloud-native hybrid management platform for managing the virtual switches is configured in the cluster management software. It specifically comprises a container management module and a virtual switch management module, so that the management host can manage both the virtual switches and the containers; in this embodiment the managed containers include the working hosts, and the virtual switch management module manages the virtual switches in the cloud-native Pod-management manner, with a KubeVirt enhancement plug-in installed on the cloud platform. The virtual switch management module is used for starting, stopping, deleting and otherwise managing the virtual switches.
The specific management method of the cloud-native hybrid management platform is a cloud-native hybrid management method. The first step is to install a Kubernetes cluster: run a Kubernetes cluster, created with kubeadm, kops, minikube or other tools.
The second step is to install KubeVirt on the Kubernetes cluster. Installation can be performed using the YAML files provided in KubeVirt's GitHub repository or using a Helm Chart.
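As a sketch of this second step, KubeVirt's published operator and custom-resource manifests can be applied with kubectl; the version pinned below is only an example and should be replaced with a current release. These commands assume kubectl access to a running cluster.

```shell
# Install KubeVirt on an existing Kubernetes cluster.
# KV_VERSION is an illustrative value, not from the patent.
export KV_VERSION=v1.2.0
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${KV_VERSION}/kubevirt-operator.yaml"
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${KV_VERSION}/kubevirt-cr.yaml"
# Wait until the KubeVirt custom resource reports itself Available.
kubectl -n kubevirt wait kv kubevirt --for condition=Available --timeout=10m
```

The Helm Chart route mentioned in the text accomplishes the same installation through a packaged chart instead of raw manifests.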
The third step is to create the virtual switches. Creating a virtual switch in Kubernetes requires defining a virtual machine instance, which can be accomplished by creating a YAML file containing the virtual switch definition.
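A minimal example of such a YAML definition, written out here by a shell heredoc; the instance name and container disk image are illustrative placeholders, not values from the patent.

```shell
# Write a minimal KubeVirt VirtualMachineInstance manifest that could
# back a virtual switch (name and image are illustrative placeholders).
cat > vswitch-vmi.yaml <<'EOF'
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: vswitch-1
spec:
  domain:
    devices:
      disks:
        - name: rootdisk
          disk:
            bus: virtio
    resources:
      requests:
        memory: 1Gi
  volumes:
    - name: rootdisk
      containerDisk:
        image: quay.io/kubevirt/cirros-container-disk-demo
EOF
# The manifest would then be applied with: kubectl apply -f vswitch-vmi.yaml
```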
In the fourth step, the virtual switches are managed using the kubectl command line tool or Kubernetes Dashboard, for example starting, stopping and deleting virtual switches.
In the fifth step, the virtual switches are used: a virtual switch is accessed by creating a virtual switch instance and connecting it to a Service, or by exposing it to an external network through a NodePort.
With this method, the virtual switches are scheduled in the cloud-native Pod-management manner and managed using the automatic scaling, rolling update and automatic recovery functions provided by Kubernetes, so that the virtual switches have higher availability and elasticity: Kubernetes can automatically detect and recover from failures and scale applications as needed, thereby improving availability and resilience.
In order to better manage the working hosts, the present application also provides a decentralized device management method, which comprises establishing a decentralized network, constructing a closed-loop mesh structure, configuring a decentralized management protocol, and introducing smart contracts, the smart contracts being used to prescribe and enforce the management rules of the edge all-in-one machine; upper nodes manage all of their logical lower nodes, and each lower node broadcasts its metadata to all upper nodes.
The specific method for establishing the decentralised network is as follows: a1: selecting a suitable decentralization technique: the decentralized network may be implemented using a variety of techniques, such as blockchains, distributed ledgers, peer-to-peer networks, and the like. According to specific requirements and technical requirements, a proper decentralization technology is selected.
A2: designing a network topology structure: and designing a topological structure of the network according to the layout and the connection mode of the edge all-in-one machine. The use of a peer-to-peer network architecture or a multi-center architecture may be considered to ensure the reliability and performance of the network connection.
A3: configuration node and communication mechanism: a node is configured for each edge all-in-one and a communication mechanism between the nodes is determined. Communication between nodes may be achieved through point-to-point connections between nodes or broadcast mechanisms.
A4: implementing protocols and rules: protocols and rules of the network are defined to ensure that interactions and communications between nodes are secure, reliable and consistent. Automated execution of protocols and rules may be accomplished using smart contracts and the like.
A5: node management and maintenance: the nodes are managed and maintained, including registration, authentication, state management, etc. of the nodes. Ensuring the correct operation of the nodes and the stability of the participating network.
A6: security and privacy protection: taking the security and privacy protection of the network into consideration, appropriate security measures such as encryption communication, identity verification, data privacy protection and the like are adopted to ensure the security of the network and the data.
A7: testing and optimizing: after the decentralization of the network is implemented, testing and optimization are performed to ensure that the performance and reliability of the network meet expectations, and adjustment and improvement are performed according to actual conditions.
The data and computing tasks are then distributed to multiple hosts of the edge all-in-one machine by decentralized storage and processing. Each host independently bears a part of data storage and processing tasks, thereby improving the robustness and performance of the whole system.
To ensure that the work between the individual hosts is coordinated, a de-centralized management protocol is used to manage and coordinate the multiple hosts of the edge all-in-one. This may reduce reliance on a centralized server while ensuring that the work between the various hosts is coordinated.
In order to improve the reliability and efficiency of the system, the present embodiment introduces a smart contract, which is used to specify and execute the management rules of the edge all-in-one machine. The intelligent contracts can automatically execute and manage interaction and task allocation among the edge integrated machines, so that the reliability and efficiency of the system are improved.
In order to ensure that the data and the calculation tasks of the edge all-in-one machine are safe and reliable, the embodiment establishes a trust mechanism based on the identity verification and data tracing technology of the blockchain, and introduces a decentralization trust mechanism.
In this embodiment, the upper node refers to a management host, and the lower node refers to a working host.
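The metadata broadcast from lower (working) nodes to upper (management) nodes can be sketched as follows; the node names and metrics are made up, and network delivery is mocked by appending to one queue file per upper node.

```shell
# Sketch: a lower (working) node broadcasts its metadata to every
# upper (management) node. Delivery is mocked with local queue files;
# node names and figures are illustrative placeholders.
UPPER_NODES="mgmt-a mgmt-b"

broadcast_metadata() {
  meta="host=$1 cpu=$2% mem=$3%"
  for node in $UPPER_NODES; do
    echo "$meta" >> "queue_${node}.txt"   # stands in for a network send
  done
}

broadcast_metadata worker-1 35 62
```

Each upper node thus receives the same metadata record, matching the "broadcast to all upper nodes" rule in the text.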
The working hosts need required applications, software or programs installed and deployed, so an automated deployment system is installed on each working host. The automated deployment system includes:
a standardization module: the standardization module standardizes the disks and/or networks of the edge multi-host cluster. The specific standard is that a disk is divided into a system disk and a data disk; the specific process is to package and image the disk using isolinux technology and divide the imaged disk into two standard disks, one being the system disk and the other the data disk.
Network standardization specifically initializes the networks of the multiple working hosts within one subnet while standardizing the IPs, with one master IP and the rest as working IPs: the IP ending in 255 is taken as the master IP, and those ending in 245 to 254 as working IPs;
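The addressing rule above can be sketched as follows; the subnet prefix is an assumed example, not from the patent. (Note that in a standard /24 subnet the .255 address is the broadcast address, so the rule implies a different subnet mask in practice.)

```shell
# Assign the standardized IPs described above within an assumed subnet:
# the .255 address is the master IP and .245-.254 are working IPs.
SUBNET=192.168.10            # assumed subnet prefix, not from the patent
MASTER_IP="${SUBNET}.255"
WORKING_IPS=""
for i in $(seq 245 254); do
  WORKING_IPS="$WORKING_IPS ${SUBNET}.$i"
done
echo "master: $MASTER_IP"
echo "workers:$WORKING_IPS"
```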
on-line system: the method comprises the steps of selecting a Kubernetes system, wherein the online system is used for connecting a plurality of hosts in an edge multi-host cluster to form independent computers capable of uniformly operating;
and (3) executing an installation module: the execution installation module is used for completing the installation of resources required by specific software, systems and the like.
The automated deployment program also comprises a custom menu, which serves as a window for starting functions and through which a user triggers the standardization module, the online system, or the installation execution module; the menu is implemented with Anaconda technology. When the network standardization button in the menu is used, the standardization module initializes the network within one subnet, taking the IP ending in 255 as the master IP and those ending in 245 to 254 as working IPs.
When the disk standardization button is clicked, the standardization module divides the disk into two standard disks, one being the system disk and the other the data disk.
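A hedged sketch of how such a two-way split might be performed with parted; the device name and the 40%/60% split point are placeholders, and since the commands are destructive this is illustration only, not the patent's actual procedure.

```shell
# Split one disk into a system partition and a data partition with parted.
# /dev/sdX and the 40% boundary are illustrative placeholders.
parted -s /dev/sdX mklabel gpt
parted -s /dev/sdX mkpart system ext4 1MiB 40%
parted -s /dev/sdX mkpart data ext4 40% 100%
parted -s /dev/sdX print
```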
The automated deployment program operates as follows: after the software, programs or resources to be installed are ready, clicking "install first host", "install second host" or "install third host" causes the installation execution module to install the prepared software, programs or resources onto the corresponding host.
The specific application method in the automatic deployment program is as follows:
First step: acquire offline resources, i.e., download or copy the content to be deployed by the automation program onto one host in the edge multi-host cluster; the offline resources may be software to be installed or programs to be used.
Second step: set up an image repository, download all required resource images into the image repository, and set up an offline repository so that the offline resources can be accessed in an offline environment; the offline resources may be software to be installed, programs to be used, and the like.
Third step: configure, in the deployment program, the paths of the software and system images that need to be installed. This enables the automation program to install via the configured image paths while running, making deployment more efficient.
Fourth step: the automation program is copied to the hosts that need to be deployed and the program is executed on each host.
Fifth step: opening the custom menu, and clicking the buttons of disk standardization, network standardization, installation of the first host and installation of the second host in sequence.
Sixth step: and after all the hosts are deployed, verifying the deployment result. The verification can be performed by log information output by an automation program or accessing a deployed system.
The multi-host stacking integrated machine is a novel computing architecture, integrates a plurality of hosts into a chassis, and can realize more efficient computation and storage through high-speed network interconnection. Network card communication in a multi-host stacking all-in-one machine is very important for the operation of the multi-host stacking all-in-one machine.
A dual network card is configured in each working host, together with a network card binding module for binding the multiple network cards, a network card communication module for realizing information transmission among the network cards, and a path selection module for selecting a suitable path during information transmission.
The network card binding module binds the multiple network cards into one logical network card, realized in this embodiment through Linux NIC bonding. When any network card fails, the other network cards continue to operate normally without affecting the operation or transmission speed of the network; at the same time, the multiple network cards provide capacity expansion, and further load balancing and redundancy backup, improving the reliability and efficiency of communication.
The network card communication module may be a static link aggregation or a dynamic link aggregation through a link aggregation manner, and in this embodiment, the dynamic link aggregation manner of 802.3ad is selected.
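The binding and aggregation described above can be sketched with standard Linux tooling. This is a configuration sketch under assumptions (root privileges, NetworkManager available, NICs named eth0/eth1), not this application's exact setup; note also that stock Linux offers transmit hash policies such as layer2+3 and layer3+4, while the custom layer3+5/layer3+6 policies of this application are not stock options and require a kernel extension.

```shell
# Configuration sketch: bind two NICs into one logical interface bond0
# using 802.3ad dynamic link aggregation. Device and connection names
# are illustrative assumptions; requires root and real hardware.
nmcli con add type bond ifname bond0 con-name bond0 \
    bond.options "mode=802.3ad,miimon=100,xmit_hash_policy=layer2+3"
nmcli con add type ethernet ifname eth0 con-name bond0-eth0 master bond0
nmcli con add type ethernet ifname eth1 con-name bond0-eth1 master bond0
nmcli con up bond0

# Inspect the bond's state (active slaves, MII status, LACP partner):
cat /proc/net/bonding/bond0
```

Because these commands reconfigure live network interfaces, they are shown for illustration only and are not executed as part of this description.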
The path selection module in this embodiment uses a hash policy for path selection, implemented at layer 3, the network layer.
To achieve better path selection, an extended "layer3+5" custom policy is applied: the hash policy at network layer 3 is configured with five factors, namely "source", "target", "address of source", "address of target", and "APP ID".
Adding the APP ID as a configuration factor enables application-level traffic management. Building on the layer3+5 custom policy, a "layer3+6" custom policy is further defined, configuring six factors at the network layer; the sixth factor is "encryption", which adds secure transmission on top of the above.
To implement these two custom policies, this embodiment extends the kernel, adding the APP ID factor and the encryption factor so that the system and the hash policy can identify and select them.
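The effect of hash-based path selection can be illustrated with a simplified flow hash. This is not the Linux kernel's actual formula; the extra "APP ID" factor of the layer3+5 policy is modelled here as one more integer input, which is an assumption of this sketch:

```shell
# Simplified illustration of hash-based path selection: a flow's
# identifying fields are XOR-folded and taken modulo the number of
# bonded NICs, so packets of one flow always leave on the same NIC.
# The app_id argument models the patent's extra "APP ID" factor;
# the formula itself is illustrative, not the kernel's.
pick_slave() {
    src_ip=$1; dst_ip=$2; src_port=$3; dst_port=$4; app_id=$5; n_slaves=$6
    hash=$(( (src_ip ^ dst_ip) ^ (src_port ^ dst_port) ^ app_id ))
    echo $(( hash % n_slaves ))
}

# The same flow always maps to the same NIC index; changing only the
# APP ID can steer an application's traffic onto a different path.
pick_slave 10 20 40000 443 7 2
pick_slave 10 20 40000 443 8 2
```

This is why adding factors such as APP ID gives application-level traffic management: two applications sharing the same endpoints can still be hashed to different physical paths.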
The working hosts therefore achieve internal network load balancing through dual routers, and using dual gigabit network cards doubles the network bandwidth, providing faster network connections and higher speed. Dual gigabit network cards can process more data streams simultaneously, improving network throughput, which is very useful for applications that handle large amounts of data, such as video streaming and large file transfers. They also provide redundancy: when one network card fails, the other takes over the network connection, improving network stability and reliability.
For highly loaded servers, dual gigabit network cards provide better performance because they can handle more data streams simultaneously, reducing latency and improving response speed.
In this method, all network cards are bound into one logical card through Linux NIC bonding, information is transmitted among the network cards via 802.3ad dynamic link aggregation, and transmission paths are selected by a hash policy. When one of the network cards fails, the logical card as a whole is unaffected and the failed card is no longer selected for transmission paths, achieving a disaster recovery effect and thereby a safer network and more reliable information transmission.
To guarantee sufficient power, each host is configured with a working power supply and a standby power supply. When the working power supply fails, the standby power supply continues operation; if either power supply fails, the other takes over immediately, avoiding system downtime.
The dual power supplies share the voltage load, and independent power supplies allow the multiple hosts to be started and stopped independently. Dual power supplies balance the system load, reduce the risk of overloading a single power supply, and improve system stability; they also provide more power capacity to support more hardware devices and higher system loads, improving system performance.
In a specific implementation, multiple hosts are first set up in hardware: one host serves as the management host and the rest as working hosts. Each working host is configured with dual network cards, which are bound into one logical card through Linux NIC bonding. Cluster management software is then configured on the management host to manage the multiple working hosts, a cloud-native hybrid management platform is configured within the cluster management software, the management mode between the management host and the working hosts is a decentralized management method, and an automated deployment system is configured on the working hosts so that the software, applications, and other resources required by the working hosts can be deployed automatically.
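The cluster-software side of this implementation can be sketched with the Kubernetes and KubeVirt steps of the cloud-native management method. The commands below are illustrative (the KubeVirt version tag is an assumption; pick a current release) and require a prepared host, so they are shown for reference rather than executed here:

```shell
# Setup sketch (requires root and a networked host; version tag is an
# illustrative assumption).
# 1. Create the Kubernetes cluster on the management host.
kubeadm init --pod-network-cidr=10.244.0.0/16

# 2. Install KubeVirt from its official release manifests on GitHub.
export VERSION=v1.0.0
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml"
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml"

# 3. Verify the virtualization components are running before defining
#    VirtualMachineInstance objects for the virtual switches.
kubectl get pods -n kubevirt
```

Once the kubevirt pods report Ready, virtual switches can be created as VirtualMachineInstance objects and managed with kubectl or Kubernetes Dashboard, as described in the management method.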
The embodiments described above are preferred embodiments of the present application and are not intended to limit its protection scope; accordingly, all equivalent changes made to the structure, shape, and principle of this application shall be covered by the protection scope of this application.

Claims (23)

1. An edge multi-host stacking all-in-one machine for separating pipes, characterized by comprising at least two virtual switches and at least three hosts, wherein each host is connected to both virtual switches; cluster management software is installed and configured on one host, which serves as the management host; and applications and services are deployed and run on the other hosts, which serve as working hosts and are added as management objects of the cluster management software.
2. The edge multi-host stacking all-in-one machine for separating pipes according to claim 1, wherein the working hosts are provided with an automated deployment system for installing and deploying the software and programs required on the working hosts.
3. The edge multi-host stacking all-in-one machine for separating pipes according to claim 2, wherein the automated deployment program comprises a standardization module: the standardization module is used for standardizing the disks and/or networks of the edge multi-host cluster;
an on-line system: the on-line system is used for connecting the multiple hosts in the edge multi-host cluster to form an independent computer capable of operating uniformly;
and an execution installation module: the execution installation module is used for completing the installation of the resources required by specific software and systems.
4. The edge multi-host stacking all-in-one machine for separating pipes according to claim 3, wherein the on-line system is a Kubernetes system.
5. The edge multi-host stacking all-in-one machine for separating pipes according to claim 2, wherein the automated deployment program further comprises a custom menu serving as a function-enabling window through which a user drives the standardization module, the on-line system, or the execution installation module.
6. The edge multi-host stacking all-in-one machine for separating pipes according to claim 3, wherein the specific criterion for disk standardization is to split the storage into two disks, one system disk and one data disk.
7. The edge multi-host stacking all-in-one machine for separating pipes according to claim 3, wherein the specific criteria for network standardization are: the networks of the multiple working hosts are initialized within one subnet and their IPs are standardized, with one master control IP and the rest as working IPs; the IP ending in 255 is taken as the master control IP and those ending in 245-254 as working IPs.
8. The edge multi-host stacking all-in-one machine for separating pipes according to claim 1, wherein the management host is provided with a cloud-native hybrid management platform for managing the virtual switches, the management platform comprises a container management module and a virtual switch management module, the virtual switch management module manages Pods in a cloud-native manner, and a KubeVirt enhancement plug-in is installed on the cloud platform.
9. The edge multi-host stacking all-in-one machine for separating pipes according to claim 1, wherein at least two network cards are configured on each working host, together with a network card binding module for binding the multiple network cards into one logical card, a network card communication module for communication between the network cards, and a path selection module for selecting the path for information transmission.
10. The edge multi-host stacking all-in-one machine for separating pipes according to claim 9, wherein the network card binding module is implemented through Linux NIC bonding.
11. The edge multi-host stacking all-in-one machine for separating pipes according to claim 9, wherein the network card communication module is configured through 802.3ad dynamic link aggregation.
12. The edge multi-host stacking all-in-one machine for separating pipes according to claim 9, wherein the path selection module is configured to execute a hash policy.
13. The edge multi-host stacking all-in-one machine for separating pipes according to claim 9, wherein the path selection is implemented at the network layer.
14. The edge multi-host stacking all-in-one machine for separating pipes according to claim 12, wherein the hash policy has at least four selection factors, comprising "source", "target", "address of source", and "address of target", which are configured in the kernel.
15. The edge multi-host stacking all-in-one machine for separating pipes according to claim 12, wherein the selection factors of the hash policy further comprise "encryption", the kernel being extended with the "encryption" factor.
16. The edge multi-host stacking all-in-one machine for separating pipes according to claim 12, wherein the selection factors of the hash policy further comprise "APP ID", the kernel being extended with the "APP ID" factor.
17. The edge multi-host stacking all-in-one machine for separating pipes according to claim 1, further comprising an operating host responsible for the configuration of the entire cluster and used for operating and controlling the use of the entire machine.
18. A cloud-native hybrid management method for the edge multi-host stacking all-in-one machine for separating pipes according to any one of claims 1-17, comprising the following steps: first, install a Kubernetes cluster; second, install KubeVirt; third, create a virtual switch, for which a virtual machine instance needs to be defined in Kubernetes; fourth, manage the virtual switch using the kubectl command line tool or Kubernetes Dashboard; fifth, use the virtual switch by creating a virtual switch instance and connecting it to a service, or exposing it to an external network through a NodePort, so as to access the virtual switch.
19. The cloud-native hybrid management method according to claim 18, wherein the method for installing the Kubernetes cluster is: run a Kubernetes cluster created by kubeadm, kops, minikube, or other tools.
20. The cloud-native hybrid management method according to claim 19, wherein the method for installing KubeVirt is: install KubeVirt on the Kubernetes cluster using the YAML files provided in KubeVirt's GitHub repository or using a Helm Chart.
21. A device management method for the edge multi-host stacking all-in-one machine for separating pipes according to any one of claims 1-17, comprising: establishing a decentralized network, constructing a closed-loop mesh structure, configuring a decentralized management protocol, and introducing smart contracts, wherein the smart contracts are used to define and execute the management rules of the edge all-in-one machine, upper-level nodes manage all of their logical lower-level nodes, and lower-level nodes broadcast their node metadata to all upper-level nodes.
22. The device management method according to claim 21, wherein the method for establishing the decentralized network is: A1: select a suitable decentralization technology; A2: design the network topology; A3: configure the nodes and the communication mechanism; A4: implement the protocols and rules; A5: node management and maintenance; A6: security and privacy protection; A7: testing and optimization.
23. A pipe separating method for the edge multi-host stacking all-in-one machine, characterized by comprising the edge multi-host stacking all-in-one machine for separating pipes according to any one of claims 1-16, wherein two routers are planned, the multiple hosts communicate with the left and right routers respectively, and the routers interface with the external network; one host is selected as the management host responsible for cluster management and coordination; the other hosts serve as working hosts on which applications and services are deployed and run; the ports of the management host face outward, and management of the entire machine is thereby achieved.
CN202311225962.2A 2023-09-20 2023-09-20 Edge multi-host stacking all-in-one machine for separating pipes and pipe separating method Pending CN117271426A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311225962.2A CN117271426A (en) 2023-09-20 2023-09-20 Edge multi-host stacking all-in-one machine for separating pipes and pipe separating method


Publications (1)

Publication Number Publication Date
CN117271426A true CN117271426A (en) 2023-12-22

Family

ID=89200295




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination