CN110515869B - Multi-Host CPU cascading method and system - Google Patents

Multi-Host CPU cascading method and system

Info

Publication number
CN110515869B
CN110515869B (application CN201810497465.0A)
Authority
CN
China
Prior art keywords
network card, pcie, host cpu, host, cpu
Prior art date
Legal status
Active
Application number
CN201810497465.0A
Other languages
Chinese (zh)
Other versions
CN110515869A (en)
Inventor
叶晓龙 (Ye Xiaolong)
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201810497465.0A priority Critical patent/CN110515869B/en
Publication of CN110515869A publication Critical patent/CN110515869A/en
Application granted granted Critical
Publication of CN110515869B publication Critical patent/CN110515869B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00: Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/14: Handling requests for interconnection or transfer
    • G06F 13/20: Handling requests for interconnection or transfer for access to input/output bus
    • G06F 13/28: Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal
    • G06F 2213/00: Indexing scheme relating to interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 2213/0026: PCI express

Abstract

The application provides a multi-Host CPU cascading method and system. The system comprises a control chip, CPUs, a PCIE switch chip, and a PCIE network card. The control chip issues a first configuration instruction to the PCIE network card; the PCIE network card virtualizes a physical port into a plurality of VF network cards according to that instruction. The control chip then issues a second configuration instruction to the PCIE switch chip, which allocates a VF network card to each Host Port accordingly. The control chip also controls the CPUs to power on; after power-on, each CPU performs a PCI scan and establishes its own virtual PCI bus domain. Each CPU then identifies the VF network card belonging to its own virtual PCI bus domain, loads the VF network card driver, and communicates internally and externally through that VF network card. The method satisfies the requirement that multiple CPUs act as Host CPUs.

Description

Multi-Host CPU cascading method and system
Technical Field
The present application relates to communications technologies, and in particular, to a method and a system for cascading multiple Host CPUs.
Background
In a conventional PCIE (Peripheral Component Interconnect Express) transparent-bridge cascading scheme, master and slave roles are strictly distinguished: the CPU (Central Processing Unit) serving as master is called the Host CPU, and a CPU or PCIE device serving as slave is called an EP (Endpoint). In this mode only one CPU can act as the Host CPU; all other CPUs or PCIE devices can only serve as slave devices.
However, some mainstream smart chips with GPUs (Graphics Processing Units) support only master mode due to chip-design limitations, and some CPUs support only master mode due to their application scenarios, that is, they can only act as a Host CPU. As a result, a multi-CPU/GPU cluster cannot be designed using the traditional PCIE transparent-bridge cascading scheme.
Disclosure of Invention
In view of the above, the present application provides a multi-Host CPU cascading method and system.
Specifically, the method is realized through the following technical scheme:
according to a first aspect of the embodiments of the present application, a multi-Host CPU cascade system is provided, including a control chip, a CPU, a PCIE switch chip and a PCIE network card, where the PCIE switch chip supports an MR-IOV function, the PCIE network card supports an SR-IOV function, a plurality of Host Port ports are provided on the switch chip, the CPU is connected to the switch chip through the Host Port, where:
the control chip is used for issuing a first configuration instruction to the PCIE network card;
the PCIE network card is used for virtualizing a physical port into a plurality of VF network cards according to the first configuration instruction;
the control chip is further configured to issue a second configuration instruction to the PCIE switch chip;
the PCIE switching chip is used for distributing a VF network card for each Host Port according to the second configuration instruction;
the control chip is further used for controlling the CPUs to power on and start;
each CPU is used for performing a PCI scan after power-on and establishing its own virtual PCI bus domain;
each CPU is further used for identifying the VF network card belonging to its own virtual PCI bus domain, loading the VF network card driver, and communicating internally and externally through the VF network card.
Optionally, the control chip is specifically configured to identify a physical function PF of the PCIE network card, load a PF driver, and issue a first configuration instruction to the PCIE network card.
Optionally, the PCIE network card includes a first physical port and a second physical port;
the PCIE network card is specifically configured to virtualize the first physical port into a plurality of first type VF network cards, and virtualize the second physical port into a plurality of second type VF network cards;
the PCIE switching chip is specifically used for respectively allocating a first type VF network card and a second type VF network card to each Host Port;
the CPU is specifically configured to implement internal communication through the first-type VF network card belonging to its own virtual PCI bus domain, and external communication through the second-type VF network card belonging to its own virtual PCI bus domain.
Optionally, the PCIE switch chip is further provided with an NTB module and/or a DMA module;
the CPU is further used for implementing internal communication in NTB mode and/or DMA mode.
Optionally, the CPU is integrated with a GPU.
According to a second aspect of the embodiments of the present application, a multi-Host CPU cascading method is provided, applied to a multi-Host CPU cascade system including a control chip, CPUs, a PCIE switch chip, and a PCIE network card, where the PCIE switch chip supports the MR-IOV function, the PCIE network card supports the SR-IOV function, a plurality of Host Ports are provided on the switch chip, and each CPU is connected to the switch chip through a Host Port. The method includes:
the control chip issues a first configuration instruction to the PCIE network card;
the PCIE network card virtualizes a physical port into a plurality of virtual function VF network cards according to the first configuration instruction;
the control chip issues a second configuration instruction to the PCIE switching chip;
the PCIE switch chip allocates a VF network card to each Host Port according to the second configuration instruction;
the control chip controls the CPUs to power on and start;
after power-on, each CPU performs a Peripheral Component Interconnect (PCI) scan and establishes its own virtual PCI bus domain;
each CPU identifies the VF network card belonging to its own virtual PCI bus domain, loads the VF network card driver, and communicates internally and externally through that VF network card.
Optionally, the issuing, by the control chip, a first configuration instruction to the PCIE network card includes:
the control chip identifies a physical function PF of the PCIE network card, loads a PF driver, and issues a first configuration instruction to the PCIE network card.
Optionally, the PCIE network card includes a first physical port and a second physical port;
the PCIE network card virtualizes a physical port into a plurality of VF network cards according to the first configuration instruction, including:
the PCIE network card virtualizes the first physical port into a plurality of first type VF network cards, and virtualizes the second physical port into a plurality of second type VF network cards;
the PCIE switch chip allocating a VF network card to each Host Port according to the second configuration instruction includes:
the PCIE switch chip allocating a first-type VF network card and a second-type VF network card to each Host Port;
the CPU communicating internally and externally through the VF network card includes:
the CPU implementing internal communication through the first-type VF network card belonging to its own virtual PCI bus domain, and external communication through the second-type VF network card belonging to its own virtual PCI bus domain.
Optionally, an NTB module and/or a DMA module is further provided in the PCIE switch chip;
the method further includes:
the CPU implementing internal communication in NTB mode and/or DMA mode.
Optionally, the CPU is integrated with a GPU.
In the multi-Host CPU cascade system described above, a PCIE-based multi-Host CPU cascade system is built from a PCIE switch chip supporting the MR-IOV function and a PCIE network card supporting the SR-IOV function. The control chip directs the PCIE network card to virtualize itself into a plurality of VF network cards, and directs the PCIE switch chip to allocate them to the Host Ports, so that each CPU attached through a Host Port on the PCIE switch chip can establish its own virtual PCI bus domain and use the VF network cards allocated to it for internal and external communication. This satisfies the requirement that multiple CPUs act as Host CPUs, and that multiple Host CPUs share one physical network card for internal and external data communication; it directly supports a standard network interface, offers strong upper-layer code compatibility, and is easy to extend.
Drawings
FIG. 1 is an architectural diagram of a multi-Host CPU cascade system, shown in an exemplary embodiment of the present application;
FIG. 2 is a schematic flow chart diagram illustrating a multi-Host CPU cascading method according to an exemplary embodiment of the present application;
fig. 3 is a schematic architecture diagram illustrating a specific application scenario according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
In order to make the technical solutions provided in the embodiments of the present application better understood and make the above objects, features and advantages of the embodiments of the present application more comprehensible, the technical solutions in the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, a schematic diagram of the architecture of a multi-Host CPU cascade system provided in an embodiment of the present application. As shown in fig. 1, the system may include a control chip 110, a CPU 120, a PCIE switch chip 130, and a PCIE network card 140, where the CPU 120 is connected to the PCIE switch chip 130 through a Host Port arranged on the PCIE switch chip 130. Wherein:
the control chip 110 is configured to issue a first configuration instruction to the PCIE network card 140;
the PCIE network card 140 is configured to virtualize a physical port into multiple VF (Virtual Function) network cards according to the first configuration instruction;
the control chip 110 is further configured to issue a second configuration instruction to the PCIE switch chip 130;
the PCIE switch chip 130 is configured to allocate a VF network card to each Host Port according to the second configuration instruction;
the control chip 110 is further configured to control the CPU 120 to power on and start;
the CPU 120 is configured to perform a Peripheral Component Interconnect (PCI) scan after power-on and establish its own virtual PCI bus domain;
the CPU 120 is further configured to identify the VF network card belonging to its own virtual PCI bus domain, load the VF network card driver, and communicate internally and externally through the VF network card.
It should be noted that, in the embodiments of the present application, unless otherwise specified, all CPUs mentioned refer to CPUs attached through a Host Port of the PCIE switch chip, that is, Host CPUs; this is not repeated below.
In this embodiment of the application, to implement multi-Host CPU cascading, the PCIE switch chip 130 needs to support the MR-IOV (Multi-Root I/O Virtualization) function, so that one physical PCI bus domain on the switch chip can be virtualized into multiple PCI bus domains; the PCIE network card 140 needs to support the SR-IOV (Single-Root I/O Virtualization) function, so that one PF network card can be virtualized into multiple VF network cards.
The PCIE switch chip 130 may be provided with a plurality of Host Ports, so that a plurality of CPUs 120 may be connected to the PCIE switch chip 130 through these Host Ports. All CPUs 120 connected to the PCIE switch chip 130 through Host Ports serve as Host CPUs, thereby implementing multi-Host CPU cascading.
In this embodiment, the control chip 110 may issue a configuration instruction (referred to as a first configuration instruction herein) to the PCIE network card 140 to instruct the PCIE network card 140 to perform VF network card virtualization operation.
For example, the control chip 110 may identify a PF (Physical Function) of the PCIE network card 140, load a PF driver, and issue a first configuration instruction to the PCIE network card 140.
When the PCIE network card 140 receives the first configuration instruction from the control chip 110, it may enable the SR-IOV function and virtualize the physical port into a plurality of VF network cards.
The number of VF network cards into which the physical port is virtualized may be indicated by the control chip 110 through the first configuration instruction, or pre-configured in the PCIE network card 140.
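The patent does not specify how the first configuration instruction reaches the card. On a Linux-based control chip, SR-IOV VF creation is typically requested through the PCI sysfs attribute `sriov_numvfs`; the Python sketch below is illustrative only, and the device address and the injectable sysfs root are hypothetical, added so the function can be exercised without real hardware:

```python
from pathlib import Path

def set_sriov_numvfs(pci_addr: str, num_vfs: int,
                     sysfs_root: str = "/sys/bus/pci/devices") -> int:
    """Ask an SR-IOV capable device to expose `num_vfs` virtual functions.

    Mirrors the standard Linux sysfs mechanism; in the patent's terms it
    plays the role of the control chip's first configuration instruction.
    """
    vf_file = Path(sysfs_root) / pci_addr / "sriov_numvfs"
    # The kernel rejects changing a non-zero VF count directly,
    # so reset to 0 first when needed.
    if vf_file.read_text().strip() != "0":
        vf_file.write_text("0")
    vf_file.write_text(str(num_vfs))
    return int(vf_file.read_text())
```

On real hardware the PF driver must already be bound, matching the patent's note that the control chip identifies the PF and loads the PF driver before issuing the instruction.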
In this embodiment of the application, after the PCIE network card 140 completes the VF network card virtualization operation, the control chip 110 may issue a configuration instruction (referred to as a second configuration instruction herein) to the PCIE switch chip 130 to instruct the PCIE switch chip 130 to allocate the VF network card for each Host Port.
When the PCIE switch chip 130 receives the second configuration instruction sent by the control chip 110, the PCIE switch chip 130 may allocate a VF network card to each Host Port.
For example, the PCIE switch chip 130 may allocate one VF network card to each Host Port; alternatively, it may allocate multiple VF network cards to some or all of the Host Ports.
In this embodiment of the application, after the PCIE switch chip 130 completes the VF network card allocation, the control chip 110 may control the CPU120 that accesses the PCIE switch chip through the Host Port to be powered on and started.
After power-on, the CPU 120 may perform a PCI scan to establish its own virtual PCI bus domain. It may then identify the VF network card belonging to that domain (i.e., the VF network card that the PCIE switch chip 130 allocated to the Host Port this CPU is attached to), load the VF network card driver, and use the VF network card to communicate internally (between Host CPUs) and externally (between a Host CPU and the external network).
It should be noted that after the CPU 120 establishes its own virtual PCI bus domain, identifies the VF network card belonging to it, and loads the VF network card driver, it still needs to configure an IP address on the VF network card before internal/external communication can take place; the implementation details are not described here.
Thus, in this embodiment, a PCIE-based multi-Host CPU cascade system built from a PCIE switch chip supporting the MR-IOV function and a PCIE network card supporting the SR-IOV function satisfies the requirement that multiple CPUs act as Host CPUs, and that multiple Host CPUs share one physical network card for internal/external data communication, while directly supporting a standard network interface, keeping upper-layer code compatible, and remaining easy to extend.
Further, in one embodiment of the present application, the PCIE network card 140 may include a first physical port and a second physical port;
correspondingly, the PCIE network card 140 may be specifically configured to virtualize the first physical port into a plurality of first type VF network cards, and virtualize the second physical port into a plurality of second type VF network cards;
the PCIE switch chip 130 may be specifically configured to allocate a first type VF network card and a second type VF network card to each Host Port respectively;
the CPU120 may be specifically configured to implement intra-pair communication through a first type VF network card belonging to its own virtual PCI bus domain, and implement external communication through a second type VF network card belonging to its own virtual PCI bus domain.
In this embodiment, in order to improve controllability and processing efficiency of data communication, when the PCIE network card 140 has a plurality of physical ports, a part of the physical ports (referred to as first physical ports herein) may be used for data communication between the Host CPUs, and a part of the physical ports (referred to as second physical ports herein) may be used for data communication between the Host CPUs and the external network.
Accordingly, in this embodiment, when the control chip 110 performs VF network card virtualization configuration on the PCIE network card 140, the PCIE network card 140 may be instructed to virtualize the first physical port as a VF network card for inward communication (referred to as a first type VF network card herein), and virtualize the second physical port as a VF network card for outward communication (referred to as a second type VF network card herein).
Similarly, when the control chip 110 performs VF network card allocation configuration on the PCIE switch chip 130, the PCIE switch chip 130 may be instructed to allocate a first type VF network card and a second type VF network card for each Host Port.
For example, the PCIE switch chip 130 may respectively allocate one first type VF network card and one second type VF network card to each Host Port, or the PCIE switch chip 130 may respectively allocate a plurality of first type VF network cards and a plurality of second type VF network cards to each Host Port.
The number of the first type VF network cards and the number of the second type VF network cards allocated by the PCIE switch chip 130 to the same Host Port may be the same or different; the number of the first type VF network cards and the number of the second type VF network cards allocated by the PCIE switch chip 130 to different Host ports may be the same or different.
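The allocation step just described can be pictured as drawing from two VF pools, one per physical port, with per-port counts that need not match. A minimal model (function name, pool names, and counts are illustrative, not the switch chip's actual configuration interface):

```python
def allocate_vfs(num_ports, internal_pool, external_pool,
                 internal_per_port=1, external_per_port=1):
    """Assign first-type (internal) and second-type (external) VF network
    cards to each Host Port in order. Raises IndexError if a pool runs dry.
    """
    allocation = {}
    # Work on copies so the caller's pools are left untouched.
    a, b = list(internal_pool), list(external_pool)
    for port in range(1, num_ports + 1):
        allocation[port] = {
            "internal": [a.pop(0) for _ in range(internal_per_port)],
            "external": [b.pop(0) for _ in range(external_per_port)],
        }
    return allocation
```

Passing different `internal_per_port`/`external_per_port` values reflects the patent's note that the two counts, and the counts across ports, need not be equal.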
In this embodiment, the CPU120 performs PCI scanning in the above manner, establishes its own virtual PCI bus domain, identifies the VF network card belonging to its own virtual PCI bus domain, and loads the VF network card drive, and then may implement internal communication by using the first type VF network card belonging to its own virtual PCI bus domain, and implement external communication by using the second type VF network card belonging to its own virtual PCI bus domain.
Further, in one embodiment of the present application, the PCIE switch chip 130 may further include an NTB (Non-Transparent Bridge) module and/or a DMA (Direct Memory Access) module;
accordingly, the CPU 120 may also be configured to implement internal communication in NTB mode and/or DMA mode.
In this embodiment, when the PCIE switch chip 130 contains an NTB module and/or a DMA module, the CPU 120 may implement internal communication through the NTB and/or DMA paths, in addition to the VF-network-card path described above.
For the specific implementation of internal communication through NTB and/or DMA, reference may be made to the related art; details are not described here.
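As a rough intuition for the NTB path (not the chip's actual programming model): a non-transparent bridge exposes a memory window on each side and forwards accesses through it into the peer's address space. A simplified, hypothetical translation helper:

```python
def ntb_translate(local_offset: int, window_size: int, peer_base: int) -> int:
    """Map an offset within the local NTB memory window to the peer-side
    address it reaches. Real NTB use also involves BAR programming,
    translation tables, and doorbell registers, all omitted here."""
    if not 0 <= local_offset < window_size:
        raise ValueError("offset falls outside the NTB window")
    return peer_base + local_offset
```

For example, a write at offset 0x100 into a 4 KiB window backed by a peer buffer at 0x80000000 lands at peer address 0x80000100.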
Further, in one embodiment of the present application, the CPU may be integrated with a GPU.
In this embodiment, for a CPU integrated with a GPU, multi-Host GPU cascade may also be implemented in the manner described above, and specific implementation thereof may refer to relevant descriptions in the foregoing method embodiments, and details of this embodiment are not described herein again.
Referring to fig. 2, a schematic flow diagram of the multi-Host CPU cascading method provided in the embodiment of the present application. The method may be applied to a multi-Host CPU cascade system including a control chip, CPUs, a PCIE switch chip, and a PCIE network card, for example, the multi-Host CPU cascade system shown in fig. 1. As shown in fig. 2, the method may include the following steps:
step S200, the control chip issues a first configuration instruction to the PCIE network card.
In this embodiment of the application, the control chip may issue a first configuration instruction to the PCIE network card to instruct the PCIE network card 140 to perform VF network card virtualization operation.
For example, the control chip 110 may identify a PF of the PCIE network card, load a PF driver, and issue a first configuration instruction to the PCIE network card.
Step S210, the PCIE network card virtualizes the physical port into a plurality of VF network cards according to the first configuration instruction.
In the embodiment of the application, when the PCIE network card receives the first configuration instruction sent by the control chip, the SR-IOV function may be enabled, and the physical port is virtualized into a plurality of VF network cards.
The number of VF network cards into which the physical port is virtualized may be indicated by the control chip through the first configuration instruction, or pre-configured in the PCIE network card.
Step S220, the control chip issues a second configuration instruction to the PCIE switch chip.
In this embodiment of the application, after the PCIE network card completes the VF network card virtualization operation, the control chip may issue a second configuration instruction to the PCIE switch chip to instruct it to allocate a VF network card to each Host Port.
Step S230, the PCIE switch chip allocates a VF network card to each Host Port according to the second configuration instruction.
In this embodiment of the application, when the PCIE switch chip receives the second configuration instruction from the control chip, it may allocate a VF network card to each Host Port.
For example, the PCIE switch chip may allocate one VF network card to each Host Port; alternatively, it may allocate multiple VF network cards to some or all of the Host Ports.
Step S240, the control chip controls the CPU to power on and start.
Step S250, after power-on, the CPU performs a PCI scan and establishes its own virtual PCI bus domain.
Step S260, the CPU identifies the VF network card belonging to its own virtual PCI bus domain, loads the VF network card driver, and communicates internally and externally through the VF network card.
In this embodiment of the application, after the PCIE switch chip completes the VF network card allocation, the control chip may control the CPUs attached to the switch chip through the Host Ports to power on and start.
After power-on, the CPU performs a PCI scan and establishes its own virtual PCI bus domain; it then identifies the VF network card belonging to that domain, loads the VF network card driver, and communicates internally/externally through the VF network card.
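Steps S200 to S260 can be summarized as a toy end-to-end bring-up model, assuming one VF per port and hypothetical VF names: the control chip triggers virtualization, the switch chip binds VFs to Host Ports, and each CPU's PCI scan then sees only the VF in its own virtual PCI bus domain.

```python
def cascade_bringup(num_ports: int) -> dict:
    """Toy model of steps S200-S260 with one VF per Host Port.

    Returns {port: vf_name}: what each Host CPU finds when it scans
    its own virtual PCI bus domain after power-on.
    """
    # S200/S210: the control chip instructs the NIC, which creates the VFs.
    vfs = [f"VF{i}" for i in range(1, num_ports + 1)]
    # S220/S230: the switch chip binds one VF to each Host Port,
    # partitioning the fabric into per-CPU virtual PCI bus domains.
    # S240-S260: each CPU powers on, scans, and sees only its own VF.
    return {port: vfs[port - 1] for port in range(1, num_ports + 1)}
```

The key property the model captures is isolation: no Host CPU's scan ever returns a VF allocated to another port.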
In one embodiment of the present application, the PCIE network card may include a first physical port and a second physical port;
correspondingly, virtualizing the physical port into a plurality of VF network cards according to the first configuration instruction by the PCIE network card may include:
the PCIE network card virtualizes the first physical port into a plurality of first type VF network cards, and virtualizes the second physical port into a plurality of second type VF network cards;
the PCIE switch chip allocating a VF network card to each Host Port according to the second configuration instruction may include:
the PCIE switch chip allocating a first-type VF network card and a second-type VF network card to each Host Port;
the CPU communicating internally and externally through the VF network card may include:
the CPU implementing internal communication through the first-type VF network card belonging to its own virtual PCI bus domain, and external communication through the second-type VF network card belonging to its own virtual PCI bus domain.
In this embodiment, to improve the controllability and processing efficiency of data communication, when the PCIE network card has multiple physical ports, the first physical port may be used for data communication between Host CPUs, and the second physical port for data communication between the Host CPUs and the external network.
Correspondingly, when the control chip configures VF network card virtualization on the PCIE network card, it may instruct the card to virtualize the first physical port into first-type VF network cards for internal communication and the second physical port into second-type VF network cards for external communication.
Similarly, when the control chip configures VF network card allocation on the PCIE switch chip, it may instruct the switch chip to allocate a first-type VF network card and a second-type VF network card to each Host Port.
In this embodiment, the CPU may implement internal communication through the first-type VF network card belonging to its own virtual PCI bus domain, and external communication through the second-type VF network card belonging to its own virtual PCI bus domain.
In one embodiment of the present application, an NTB module or/and a DMA module may also be disposed in the PCIE switch chip;
correspondingly, the CPU may also implement internal communication in NTB mode and/or DMA mode.
In one embodiment of the present application, the CPU may be integrated with a GPU.
In this embodiment, for a CPU integrated with a GPU, multi-Host GPU cascade may also be implemented in the manner described above, and specific implementation thereof may refer to relevant descriptions in the foregoing method embodiments, and details of this embodiment are not described herein again.
It should be noted that, in this embodiment of the present application, when the internal/external communication bandwidth of each Host CPU in the multi-Host CPU cascade system needs to be expanded, a new PCIE network card supporting the SR-IOV function may be added to the system, and the VF network card virtualization and allocation operations performed as described above; the details are not repeated here.
In order to enable those skilled in the art to better understand the technical solutions provided in the embodiments of the present application, the following describes the technical solutions provided in the embodiments of the present application with reference to specific application scenarios.
Please refer to fig. 3, which is a schematic diagram of a specific application scenario provided in an embodiment of the present application. As shown in fig. 3, in this embodiment, the control chip is an MCPU (Management CPU); the PCIE switch chip is provided with 16 Host Ports (assumed to be Host Port 1 to Host Port 16) as well as an NTB module and a DMA module; and the PCIE network card includes two physical ports (assumed to be Port1 and Port2). The PCIE switch chip may be connected to the MCPU through a Management Port, and to the PCIE network card through a Downstream Port.
It should be noted that, in an actual application scenario, the number of Host ports may be expanded in a manner of cascading multiple PCIE switch chips.
In this embodiment, after the MCPU identifies the PF of the PCIE network card, it may load the PF driver and issue a first configuration instruction to the PCIE network card.
When the PCIE network card receives the first configuration instruction, it may virtualize Port1 into 16 first type VF network cards (assumed to be VF a1 to VF a16), and Port2 into 16 second type VF network cards (assumed to be VF b1 to VF b16).
After the PCIE network card completes the VF network card virtualization operation, the MCPU may issue a second configuration instruction to the PCIE switch chip.
When the PCIE switch chip receives the second configuration instruction, it may allocate a first type VF network card and a second type VF network card to each Host Port respectively.
Assume that the PCIE switch chip allocates VF a1 and VF b1 to Host Port 1, allocates VF a2 and VF b2 to Host Port 2, …, and allocates VF a16 and VF b16 to Host Port 16.
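The two-step configuration described above reduces to a simple mapping: Port1 yields the internal VFs, Port2 the external VFs, and Host Port i receives VF ai plus VF bi. The sketch below reproduces that stated assignment; it is purely illustrative bookkeeping, not device code.

```python
# Model of the fig. 3 scenario: Port1 is virtualized into VF a1..a16
# (first type, internal), Port2 into VF b1..b16 (second type, external),
# and Host Port i receives the pair (VF ai, VF bi).

first_type = [f"VF a{i}" for i in range(1, 17)]   # from physical Port1
second_type = [f"VF b{i}" for i in range(1, 17)]  # from physical Port2

host_ports = {
    i: {"internal": first_type[i - 1], "external": second_type[i - 1]}
    for i in range(1, 17)
}

print(host_ports[1])   # -> {'internal': 'VF a1', 'external': 'VF b1'}
print(host_ports[16])  # -> {'internal': 'VF a16', 'external': 'VF b16'}
```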
After the PCIE switch chip completes the VF network card allocation, the MCPU may control the CPU connected to the PCIE switch chip through the Host Port to be powered on and started.
The following description takes a CPU accessing the PCIE switch chip through Host Port 1 (hereinafter referred to as Host CPU1) as an example.
After the Host CPU1 is powered on and started, it may perform PCI scanning to establish its own virtual PCI bus domain.
The PCI bus domain scanned by the MCPU is the physical PCI bus domain, while the PCI bus domain to which each Host CPU belongs is a subordinate virtual PCI bus domain established based on the MR-IOV technology.
The Host CPU1 identifies the VF network cards belonging to its own virtual PCI bus domain (i.e., VF a1 and VF b1), loads the VF network card driver, and configures IP addresses for VF a1 and VF b1 respectively. The Host CPU1 may then perform data communication with other Host CPUs through VF a1, and with the external network through VF b1.
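Per-host VF discovery can be thought of as each Host CPU seeing only the VF pair that the switch chip allocated to its Host Port, then addressing the two cards on separate subnets. The sketch below models that view; the 10.x addressing scheme is an assumption for illustration, not specified by the patent.

```python
# Sketch of a Host CPU's view after PCI scanning: only the VFs allocated
# to its own Host Port appear in its virtual PCI bus domain, and each is
# given an IP address. The 10.0/10.1 subnets are illustrative assumptions.

def host_view(host_index):
    """Return the VF pair visible in Host CPU <host_index>'s bus domain."""
    return {
        f"VF a{host_index}": f"10.0.0.{host_index}",    # internal fabric
        f"VF b{host_index}": f"10.1.0.{host_index}",    # external network
    }

cpu1 = host_view(1)
print(cpu1)  # -> {'VF a1': '10.0.0.1', 'VF b1': '10.1.0.1'}
```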
In addition, the Host CPU1 can also perform data communication with other Host CPUs by the NTB method or the DMA method.
The Host CPU1 may be connected to the NTB module, the DMA module, and the VF network card through a PCI-PCI bridge (P-P for short).
In the embodiments of the present application, a PCIE-based multi-Host CPU cascade system is constructed from a PCIE switch chip supporting the MR-IOV function and a PCIE network card supporting the SR-IOV function. The control chip controls the PCIE network card to be virtualized into a plurality of VF network cards, and the PCIE switch chip allocates them to each Host Port, so that each CPU accessed through a Host Port of the PCIE switch chip can establish its own virtual PCI bus domain and use the allocated VF network cards to implement internal and external communication. This not only satisfies the requirement that a plurality of CPUs each act as a Host CPU, but also allows the plurality of Host CPUs to share one physical network card for internal and external data communication. Since the standard network interface is directly supported, the upper-layer code compatibility is strong and expansion is convenient.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not preclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (8)

1. A multi-Host central processing unit (Host CPU) cascade system, characterized by comprising a control chip, a Host CPU, a Peripheral Component Interconnect Express (PCIE) switching chip and a PCIE network card, wherein the PCIE switching chip supports a multi-root input/output virtualization (MR-IOV) function, the PCIE network card supports a single-root input/output virtualization (SR-IOV) function, the switching chip is provided with a plurality of Host Ports, and the Host CPU is connected with the switching chip through a Host Port, wherein:
the control chip is used for issuing a first configuration instruction to the PCIE network card;
the PCIE network card comprises a first physical port used for data communication between Host CPUs and a second physical port used for data communication between the Host CPU and an external network, and is used for virtualizing the first physical port into a plurality of first type VF network cards and the second physical port into a plurality of second type VF network cards according to the first configuration instruction;
the control chip is further configured to issue a second configuration instruction to the PCIE switch chip;
the PCIE switching chip is used for allocating a first type VF network card and a second type VF network card to each Host Port respectively according to the second configuration instruction;
the control chip is also used for controlling the Host CPU to be powered on and started;
the Host CPU is used for performing Peripheral Component Interconnect (PCI) scanning after being powered on and started, and establishing its own virtual PCI bus domain;
the Host CPU is further used for identifying the VF network cards subordinate to its own virtual PCI bus domain, loading the VF network card driver, implementing internal communication through the identified first type VF network card subordinate to its own virtual PCI bus domain, and implementing external communication through the identified second type VF network card subordinate to its own virtual PCI bus domain.
2. The multi-Host CPU cascade system of claim 1,
the control chip is specifically configured to identify a physical function PF of the PCIE network card, load a PF driver, and issue a first configuration instruction to the PCIE network card.
3. The multi-Host CPU cascade system of claim 1, wherein the PCIE switching chip is further provided with a non-transparent bridge NTB module or/and a direct memory access DMA module;
the Host CPU is further used for implementing internal communication in an NTB mode or/and a DMA mode.
4. The multi-Host CPU cascade system of claim 1, wherein the Host CPU is integrated with a Graphics Processing Unit (GPU).
5. A multi-Host central processing unit (Host CPU) cascading method, applied to a multi-Host CPU cascade system comprising a control chip, a Host CPU, a Peripheral Component Interconnect Express (PCIE) switching chip and a PCIE network card, wherein the PCIE switching chip supports a multi-root input/output virtualization (MR-IOV) function, the PCIE network card comprises a first physical port and a second physical port and supports a single-root input/output virtualization (SR-IOV) function, the switching chip is provided with a plurality of Host Ports, and the Host CPU is connected with the switching chip through a Host Port, the method comprising:
the control chip issues a first configuration instruction to the PCIE network card;
the PCIE network card virtualizes the first physical port into a plurality of first type VF network cards and the second physical port into a plurality of second type VF network cards according to the first configuration instruction;
the control chip issues a second configuration instruction to the PCIE switching chip;
the PCIE switching chip allocates a first type VF network card and a second type VF network card to each Host Port respectively according to the second configuration instruction;
the control chip controls the Host CPU to be powered on and started;
after being powered on and started, the Host CPU performs Peripheral Component Interconnect (PCI) scanning and establishes its own virtual PCI bus domain;
the Host CPU identifies the VF network cards subordinate to its own virtual PCI bus domain, loads the VF network card driver, implements internal communication through the identified first type VF network card subordinate to its own virtual PCI bus domain, and implements external communication through the identified second type VF network card subordinate to its own virtual PCI bus domain.
6. The method according to claim 5, wherein the issuing, by the control chip, a first configuration instruction to the PCIE network card includes:
the control chip identifies a physical function PF of the PCIE network card, loads a PF driver, and issues a first configuration instruction to the PCIE network card.
7. The method according to claim 5, wherein the PCIE switching chip is further provided with a non-transparent bridge NTB module or/and a direct memory access DMA module;
the method further comprises the following steps:
the Host CPU implements internal communication in an NTB mode or/and a DMA mode.
8. The method of claim 5, wherein the Host CPU is integrated with a Graphics Processing Unit (GPU).
CN201810497465.0A 2018-05-22 2018-05-22 Multi-Host CPU cascading method and system Active CN110515869B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810497465.0A CN110515869B (en) 2018-05-22 2018-05-22 Multi-Host CPU cascading method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810497465.0A CN110515869B (en) 2018-05-22 2018-05-22 Multi-Host CPU cascading method and system

Publications (2)

Publication Number Publication Date
CN110515869A CN110515869A (en) 2019-11-29
CN110515869B true CN110515869B (en) 2021-09-21

Family

ID=68622093

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810497465.0A Active CN110515869B (en) 2018-05-22 2018-05-22 Multi-Host CPU cascading method and system

Country Status (1)

Country Link
CN (1) CN110515869B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113630184A (en) * 2021-08-19 2021-11-09 北京华智信科技发展有限公司 Optical fiber communication network system
CN115208843B (en) * 2022-07-13 2023-06-30 天津津航计算技术研究所 Cascade realization system and method for board-level domestic switch

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102707991A (en) * 2012-05-17 2012-10-03 中国科学院计算技术研究所 Multi-root I/O (Input/Output) virtualization sharing method and system
CN102946366A (en) * 2012-11-12 2013-02-27 杭州华为数字技术有限公司 In-band management method and system
US9135101B2 (en) * 2013-03-01 2015-09-15 Avago Technologies General Ip (Singapore) Pte Ltd Virtual function timeout for single root input/output virtualization controllers

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104601684A (en) * 2014-12-31 2015-05-06 曙光云计算技术有限公司 Cloud server system

Also Published As

Publication number Publication date
CN110515869A (en) 2019-11-29

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant