CN114968897A - Communication method, network equipment and storage medium - Google Patents

Communication method, network equipment and storage medium

Info

Publication number
CN114968897A
Authority
CN
China
Prior art keywords
switch matrix
cpu
fpga
service fpga
service
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210458364.9A
Other languages
Chinese (zh)
Inventor
刘阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
New H3C Technologies Co Ltd Hefei Branch
Original Assignee
New H3C Technologies Co Ltd Hefei Branch
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by New H3C Technologies Co Ltd Hefei Branch filed Critical New H3C Technologies Co Ltd Hefei Branch
Priority to CN202210458364.9A priority Critical patent/CN114968897A/en
Publication of CN114968897A publication Critical patent/CN114968897A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00 Digital computers in general; Data processing equipment in general
    • G06F15/16 Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F15/163 Interprocessor communication
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38 Information transfer, e.g. on bus
    • G06F13/42 Bus transfer protocol, e.g. handshake; Synchronisation
    • G06F13/4282 Bus transfer protocol on a serial bus, e.g. I2C bus, SPI bus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2213/00 Indexing scheme relating to interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F2213/0026 PCI express

Abstract

The present specification provides a communication method, a network device, and a storage medium. The method comprises: connecting a CPU with each service FPGA through a switch matrix, and having the CPU select, through the switch matrix, a target service FPGA from the service FPGAs for communication. The method effectively reduces the bus requirements in a multi-FPGA scenario, allows the related functions to be completed normally even when no PCIE bus is available, reduces the number of chips, and lowers the chip cost and software development cost of a single board.

Description

Communication method, network equipment and storage medium
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to a communication method, a network device, and a storage medium.
Background
As data traffic continues to grow explosively and communication infrastructure continues to improve, routers and switches, the backbone facilities of communication networks, act as their nerves and blood vessels. The core routers in an operator's backbone network are the main communication arteries that aggregate enormous volumes of traffic. In a high-density, high-complexity core router board, the FPGA (Field Programmable Gate Array), as a programmable chip with high performance and high flexibility, plays an indispensable role.
Programmed with a hardware description language, the FPGA can implement a wide variety of functions such as algorithms and communications. To make development easier, manufacturers often embed various IP cores such as Ethernet, PCIE, and Interlaken in the FPGA, and flexibly calling these IP cores greatly reduces the burden on developers.
The FPGA starts up in a special way: the chip contains a RAM-like storage area, and after power-up the logic code must be loaded into this storage area from an external source before the FPGA can operate normally. After the single board is powered off, the logic code in the FPGA is lost, so the FPGA has to be reloaded every time the single board is powered on. In addition, the single board CPU usually collects statistics on and configures the internal information of the FPGA through a PCIE bus or a custom localbus. Since several FPGA chips often coexist on a complex single board, managing these FPGA chips is a fairly complex task.
Disclosure of Invention
The embodiments of the present disclosure provide a communication method, a network device, and a storage medium, by which the bus requirements in a multi-FPGA scenario can be effectively reduced, the related functions can still be completed normally when no PCIE bus is available, the number of chips can be effectively reduced, and the chip cost and software development cost of a single board are lowered.
The embodiment of the disclosure provides a communication method, which includes:
the CPU is connected with each service FPGA through a switch matrix;
the CPU selects a target service FPGA from each service FPGA through a switch matrix for communication;
the switch matrix is composed of a plurality of analog switches or digital switches, and a target service FPGA is selected according to a selection signal sent by the CPU.
Wherein, the connection of the CPU with each service FPGA through the switch matrix includes:
the CPU is connected with one end of the switch matrix through a bus, and the other end of the switch matrix is connected with each service FPGA.
Wherein, the CPU selecting a target service FPGA from the service FPGAs through the switch matrix for communication includes:
the CPU controls the switch matrix to select the target service FPGA through the control signal, and after the switch matrix selects the target service FPGA, the CPU is communicated with the target service FPGA.
In another embodiment, the CPU is further connected with a CPLD through a bus, and the CPLD is connected with each service FPGA;
and the CPLD loads data into each service FPGA after receiving an instruction from the CPU.
Wherein, the switch matrix being composed of a plurality of analog switches or digital switches includes:
a selection circuit is constructed by utilizing a plurality of analog switches or digital switches, and a switch matrix is formed by utilizing the selection circuit.
An embodiment of the present disclosure further provides a network device. The network device comprises a CPU, a switch matrix, and a plurality of service FPGAs, wherein the CPU is connected with each service FPGA through the switch matrix, and the network device comprises:
the receiving module is used for receiving instruction information;
the CPU is used for receiving the instruction information and selecting a target service FPGA from each service FPGA for communication through the switch matrix according to the instruction information;
the switch matrix is composed of a plurality of analog switches or digital switches, and a target service FPGA is selected according to the instruction information sent by the CPU.
Wherein, the connection of the CPU with each service FPGA through the switch matrix includes:
the CPU is connected with one end of the switch matrix through a bus, and the other end of the switch matrix is connected with each service FPGA.
Wherein, selecting, according to the instruction information, a target service FPGA from the service FPGAs for communication through the switch matrix includes:
and the CPU controls the switch matrix to select the target service FPGA through a control signal in the instruction information, and after the switch matrix selects the target service FPGA, the CPU is communicated with the target service FPGA.
An embodiment of the present disclosure further provides a network device, where the network device includes: memory, a processor and a program stored on the memory and executable on the processor, which when executed by the processor implements the method steps of any of the above embodiments.
The embodiments of the present disclosure also provide a computer-readable storage medium, on which a program is stored, which when executed by a processor implements the method steps of any of the above embodiments.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present specification and together with the description, serve to explain the principles of the specification.
Fig. 1 is a schematic circuit diagram according to an embodiment of the disclosure.
Fig. 2 is a flowchart illustrating a communication method according to an embodiment of the disclosure.
Fig. 3 is a schematic circuit diagram according to an embodiment of the disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with this description. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the specification, as detailed in the claims that follow.
The terminology used in the description herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the description. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, the first information may also be referred to as second information, and similarly, the second information may also be referred to as first information, without departing from the scope of the present specification. Depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination".
As shown in fig. 1, there are 6 FPGA chips in the application scenario. The CPU needs to provide 6 PCIE buses simultaneously to support information (telemetry) statistics, software configuration, and similar requirements of each FPGA. At the same time, a dedicated small FPGA chip has to be added to convert another PCIE bus from the CPU into 6 loading buses and load the initialization code into each FPGA; loading the program of this small FPGA itself requires an additional SPI FLASH. In addition, a CPLD is also required to convert the LPC bus from the CPU into a localbus and to perform register read-write management on each FPGA and other peripherals.
Because of the variety of bus requirements of the FPGAs, the design is quite complex, requiring multiple different buses to be provided on the board. Moreover, since the PCIE and LPC buses from the CPU cannot load the FPGAs directly, a dedicated small FPGA, an SPI FLASH, and other devices have to be added just to perform the initial loading, which increases the complexity of the board, raises the difficulty of software and hardware design, and adds the cost of redundant devices.
Moreover, some specific FPGA models do not support the PCIE IP core for cost reasons, so if functions such as telemetry statistics and software configuration of the FPGA need to be implemented over a PCIE bus, the developers have to write the PCIE IP core code themselves. This is clearly a huge and impractical effort.
To solve the above technical problem, an embodiment of the present disclosure provides a communication method. As shown in fig. 2, the method includes:
s201, connecting a CPU with each service FPGA through a switch matrix;
s202, the CPU selects a target service FPGA from each service FPGA through a switch matrix for communication;
the switch matrix is composed of a plurality of analog switches or digital switches, and the target service FPGA is selected according to the selection signal sent by the CPU.
In this embodiment, the CPU may be connected to one end of the switch matrix through a bus, for example an LPC bus, while the other end of the switch matrix is connected to each service FPGA. By sending a control signal to the switch matrix, the CPU can make the switch matrix select, from the service FPGAs, the target FPGA it wants to access; the control signal may be a GPIO signal.
In one example, the GPIO signals may be routed to the switch matrix over a dedicated GPIO bus to control the switch matrix.
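As a rough illustration of this flow, the C sketch below first drives a GPIO select code toward the switch matrix and then performs an ordinary bus read toward whichever service FPGA is now connected. The helpers gpio_write_select(), lpc_read8() and lpc_write8(), as well as the register address used in the example, are hypothetical placeholders for the platform's real GPIO and LPC drivers and are not defined by this disclosure.

```c
/*
 * Minimal sketch of the selection flow described above (hypothetical API).
 * gpio_write_select() and lpc_read8()/lpc_write8() stand in for the
 * platform's real GPIO and LPC bus drivers, which are not specified here.
 */
#include <stdint.h>

#define NUM_SERVICE_FPGA 6

/* Hypothetical low-level helpers assumed to be provided by the BSP. */
extern void    gpio_write_select(uint8_t select_code);   /* drives the GPIO select lines */
extern uint8_t lpc_read8(uint16_t addr);                  /* one LPC read cycle           */
extern void    lpc_write8(uint16_t addr, uint8_t value);  /* one LPC write cycle          */

/* Point the switch matrix at one service FPGA (0 .. NUM_SERVICE_FPGA-1). */
static int switch_matrix_select(uint8_t fpga_index)
{
    if (fpga_index >= NUM_SERVICE_FPGA)
        return -1;
    /* The GPIO select code is decoded by the switch matrix into the
     * enable signals of the individual switch chips (see fig. 3). */
    gpio_write_select(fpga_index);
    return 0;
}

/* Read one 8-bit register of the currently selected target FPGA. */
static uint8_t target_fpga_read(uint16_t reg_addr)
{
    return lpc_read8(reg_addr);
}

/* Example: read a (hypothetical) register 0x0010 of service FPGA #3. */
uint8_t example_read(void)
{
    switch_matrix_select(3);
    return target_fpga_read(0x0010);
}
```

The point of the design is that the same bus read/write routine is reused for every service FPGA; only the preceding GPIO select code changes.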
In this embodiment, the switch matrix may be a selection circuit constructed by using several analog switches or digital switches, as shown in fig. 3.
In the example of fig. 3, the circuit is divided into two stages: the upper stage consists of two 245 chips and the lower stage of three 245 chips. The lower-stage 245 chips are connected to the service FPGAs, while the upper-stage 245 chips are connected to the CPU and receive its control signals; the lower-stage 245 chips act on the instructions passed down by the upper stage and select the target FPGA from the service FPGAs so that the target FPGA communicates with the CPU. It should be noted that analog switches are used in this embodiment; in other embodiments, digital switches may be used instead.
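The sketch below illustrates, in the same hypothetical C style, how a target FPGA index could be decoded into the enable signals of such a selection circuit. The assumption of one active-low enable per switch path (six in total) is made only for the example and does not reproduce the exact two-stage wiring of fig. 3.

```c
/*
 * Illustrative decode: drive one active-low output-enable per switch path so
 * that only the path to the target FPGA is turned on. The number of enable
 * lines and their polarity are assumptions for this sketch, not the exact
 * wiring of fig. 3.
 */
#include <stdint.h>

#define NUM_SERVICE_FPGA 6

/* Returns a bitmask of active-low enables: bit i == 0 enables path i. */
uint8_t matrix_enable_mask(uint8_t fpga_index)
{
    uint8_t mask = 0x3F;                       /* all six paths disabled (bits 0..5 high) */
    if (fpga_index < NUM_SERVICE_FPGA)
        mask &= (uint8_t)~(1u << fpga_index);  /* enable only the selected path */
    return mask;
}
```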
In this embodiment, the CPU is further connected to a CPLD through a bus (e.g., an LPC bus), and the CPLD is connected to each service FPGA and is used to load the start-up data into each FPGA.
It can be seen from the above embodiments that the CPU no longer needs a separate PCIE connection to each service FPGA; using a single bus together with the switch matrix, it can select and access whichever target FPGA it needs. This scheme greatly reduces the bus requirements in a multi-FPGA scenario, allows the related functions to be completed normally when no PCIE bus is available, effectively reduces the number of chips, and lowers the chip cost and software development cost of a single board.
Meanwhile, the inventor has estimated the bus efficiency. According to the LPC protocol format, one read-write operation consists of a Start field of 1 clock cycle, a CT/DIR direction field of 1 clock cycle, an ADDR address field of 4 or 8 clock cycles, a TAR turnaround field of 2 clock cycles, a Sync synchronization field of 1 to n clock cycles, a Data field of 2 to 2m clock cycles, a TAR turnaround field of 2 clock cycles, and a Start field of 1 clock cycle. Since LPC transfers data on LAD[3:0], reading 8-bit data once (with a 16-bit address) requires 2 data clock cycles and 4 address clock cycles; assuming the synchronization field takes 24 clock cycles in the worst case, one read of 8-bit data (16-bit address) requires 1 + 1 + 4 + 2 + 24 + 2 + 2 + 1 = 37 clocks.
Reading 64-bit data therefore takes 8 such reads, and each FPGA chip requires 10K reads of 64-bit data per second; assuming 8 clocks of idle time between consecutive reads, reading one FPGA takes 10K × 8 × (37 + 8) = 3600K clocks per second.
Reading 6 FPGAs requires 6 × 3600K clocks = 21600K clocks = 21.6M clocks per second.
The LPC bus clock is 33 MHz, which is sufficient to meet this data-reading requirement.
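The estimate above can be restated as a small, self-contained calculation; every constant in the sketch below (37 clocks per 8-bit read, 8 reads per 64-bit value, 10K 64-bit reads per second per FPGA, 8 idle clocks between reads, 6 FPGAs, 33 MHz LPC clock) is taken directly from the text.

```c
/* Reproduces the LPC bandwidth estimate above as a small calculation. */
#include <stdio.h>

int main(void)
{
    const long clocks_per_read  = 1 + 1 + 4 + 2 + 24 + 2 + 2 + 1; /* = 37 */
    const long reads_per_64bit  = 8;        /* 8 x 8-bit reads per 64-bit value */
    const long reads_64_per_sec = 10000;    /* 10K 64-bit reads per second per FPGA */
    const long idle_clocks      = 8;        /* gap between consecutive reads */
    const long num_fpga         = 6;
    const long lpc_clock_hz     = 33000000; /* 33 MHz LPC clock */

    long clocks_per_fpga = reads_64_per_sec * reads_per_64bit
                           * (clocks_per_read + idle_clocks);      /* 3,600,000  */
    long clocks_total    = clocks_per_fpga * num_fpga;              /* 21,600,000 */

    printf("per-FPGA: %ld clocks/s, total: %ld clocks/s, budget: %ld clocks/s\n",
           clocks_per_fpga, clocks_total, lpc_clock_hz);
    printf("headroom: %s\n", clocks_total < lpc_clock_hz ? "OK" : "exceeded");
    return 0;
}
```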
The embodiment of the present disclosure further provides a network device, where the network device includes a CPU, a switch matrix, and a plurality of service FPGAs, the CPU is connected to each service FPGA through the switch matrix, and the network device includes:
the receiving module is used for receiving instruction information;
the CPU is used for receiving the instruction information and selecting a target service FPGA from each service FPGA for communication through the switch matrix according to the instruction information;
the switch matrix is composed of a plurality of analog switches or digital switches, and a target service FPGA is selected according to the instruction information sent by the CPU.
Wherein, the connection of the CPU with each service FPGA through the switch matrix includes:
the CPU is connected with one end of the switch matrix through a bus, and the other end of the switch matrix is connected with each service FPGA.
Wherein, selecting, according to the instruction information, a target service FPGA from the service FPGAs for communication through the switch matrix includes:
and the CPU controls the switch matrix to select the target service FPGA through a control signal in the instruction information, and after the switch matrix selects the target service FPGA, the CPU is communicated with the target service FPGA.
An embodiment of the present disclosure further provides a network device, where the network device includes: a memory, a processor and a program stored on the memory and executable on the processor, the program, when executed by the processor, implementing the method steps of any of the embodiments described above.
An embodiment of the present disclosure further provides a computer-readable storage medium having a program stored thereon which, when executed by a processor, implements the method steps in any of the embodiments described above.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Other embodiments of the present disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This specification is intended to cover any variations, uses, or adaptations of the specification following, in general, the principles of the specification and including such departures from the present disclosure as come within known or customary practice within the art to which the specification pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the specification being indicated by the following claims.
It will be understood that the present description is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present description is limited only by the appended claims.
The above description is only a preferred embodiment of the present disclosure, and should not be taken as limiting the present disclosure, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (10)

1. A method of communication, the method comprising:
the CPU is connected with each service FPGA through a switch matrix;
the CPU selects a target service FPGA from each service FPGA for communication through a switch matrix;
the switch matrix is composed of a plurality of analog switches or digital switches, and a target service FPGA is selected according to a selection signal sent by the CPU.
2. The method of claim 1, wherein the CPU is connected to each service FPGA through a switch matrix, comprising:
the CPU is connected with one end of the switch matrix through a bus, and the other end of the switch matrix is connected with each service FPGA.
3. The method of claim 1, wherein the CPU selects a target service FPGA to communicate from the service FPGAs through a switch matrix, comprising:
the CPU controls the switch matrix to select the target service FPGA through the control signal, and after the switch matrix selects the target service FPGA, the CPU is communicated with the target service FPGA.
4. The method of claim 1, further comprising:
the CPU is also connected with a CPLD through a bus, and the CPLD is connected with each service FPGA;
and the CPLD loads data for each service FPGA after receiving the CPU instruction.
5. The method of claim 1, wherein the switch matrix is composed of a number of analog switches or digital switches, comprising:
a selection circuit is constructed by utilizing a plurality of analog switches or digital switches, and a switch matrix is formed by utilizing the selection circuit.
6. A network device, characterized by comprising a CPU, a switch matrix, and a plurality of service FPGAs (field programmable gate arrays), wherein the CPU is connected with each service FPGA through the switch matrix, and the network device comprises:
the receiving module is used for receiving instruction information;
the CPU is used for receiving the instruction information and selecting a target service FPGA from each service FPGA for communication through the switch matrix according to the instruction information;
the switch matrix is composed of a plurality of analog switches or digital switches, and a target service FPGA is selected according to the instruction information sent by the CPU.
7. The network device of claim 6, wherein the CPU is connected to each service FPGA through a switch matrix, comprising:
the CPU is connected with one end of the switch matrix through a bus, and the other end of the switch matrix is connected with each service FPGA.
8. The network device of claim 6, wherein selecting a target service FPGA from the service FPGAs for communication through the switch matrix according to the instruction information comprises:
and the CPU controls the switch matrix to select the target service FPGA through a control signal in the instruction information, and after the switch matrix selects the target service FPGA, the CPU is communicated with the target service FPGA.
9. A network device, characterized in that the network device comprises: memory, processor and program stored on the memory and executable on the processor, which when executed by the processor implements the method steps of any of claims 1 to 5.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a program which, when being executed by a processor, carries out the method steps of any one of claims 1 to 5.
CN202210458364.9A 2022-04-28 2022-04-28 Communication method, network equipment and storage medium Pending CN114968897A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210458364.9A CN114968897A (en) 2022-04-28 2022-04-28 Communication method, network equipment and storage medium


Publications (1)

Publication Number Publication Date
CN114968897A true CN114968897A (en) 2022-08-30

Family

ID=82978872

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210458364.9A Pending CN114968897A (en) 2022-04-28 2022-04-28 Communication method, network equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114968897A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination