CN215642686U - Acceleration board card and computing system - Google Patents

Acceleration board card and computing system

Info

Publication number
CN215642686U
Authority
CN
China
Prior art keywords
chip
power supply
board card
interface
electrically connected
Prior art date
Legal status
Active
Application number
CN202122258024.5U
Other languages
Chinese (zh)
Inventor
樊小波
谢玥
李德冲
黄晨
奚立达
Current Assignee
Guangzhou Ximu Semiconductor Technology Co ltd
Original Assignee
Beijing Simm Computing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Simm Computing Technology Co ltd
Priority to CN202122258024.5U
Application granted
Publication of CN215642686U
Legal status: Active

Landscapes

  • Power Sources (AREA)

Abstract

The present disclosure provides an acceleration board card and a computing system. The acceleration board card includes: a board card main body, which is rectangular; an interface arranged on a long edge of the board card main body and close to one of the short edges; a controller and a power supply connector arranged on the board card main body; and an AI chip, a power supply unit, and a storage unit arranged on the board card main body and located above the interface. The interface is electrically connected with the AI chip and the controller respectively, the AI chip is electrically connected with the controller, the storage unit, and the power supply unit, and the power supply unit is electrically connected with the power supply connector, the AI chip, and the storage unit respectively. The acceleration board card of the present disclosure departs from the traditional system layout: by optimizing the overall layout of the board card, the AI chip, the storage unit, and the power supply unit are all arranged on the side close to the interface, so that the center of gravity of the structure is shifted toward the interface side and the interface bears part of the weight of the board card. This improves the reliability of the acceleration board card and allows the radiator to dissipate heat in a concentrated manner.

Description

Acceleration board card and computing system
Technical Field
The disclosure belongs to the technical field of computer equipment, and particularly relates to an acceleration board card and a computing system.
Background
Currently, AI (Artificial Intelligence) acceleration is implemented mainly with hardware such as GPUs (Graphics Processing Units), FPGAs (Field Programmable Gate Arrays), or ASICs (Application Specific Integrated Circuits), and products in the AI chip field are continuously updated and iterated.
In fact, AI chips are developing in two different directions. The first is to add a dedicated accelerator, an "AI acceleration chip", to an existing computing architecture in order to deterministically accelerate a specific algorithm or task, so as to meet the requirements of the target application field for speed, power consumption, memory usage, and deployment cost. The second is to start over completely and create a brand-new architecture that simulates a human brain neural network, i.e., an intelligent chip. Such a chip can learn and infer with different AI algorithms, handle a series of tasks including perception, understanding, analysis, decision, and action, and adapt to changing scenarios.
The first approach is the one mainly used at present. The development of AI acceleration chips in turn follows two main paths: one is to optimize software and hardware on existing GPUs, many-core processors, DSPs (Digital Signal Processors), and FPGA chips; the other is to design a dedicated chip, i.e., an ASIC. A board card integrating an AI acceleration chip can perform the complete set of computing functions.
As the computational demands on AI acceleration chips grow, the power consumption and design complexity of the acceleration boards that carry them are high, and existing designs cannot meet the computing and performance requirements; a new acceleration board card therefore needs to be provided.
SUMMARY OF THE UTILITY MODEL
The present disclosure is directed to at least one of the technical problems of the prior art, and provides an acceleration board and a computing system.
One aspect of the present disclosure provides an acceleration board card, including: a board card main body which is rectangular; an interface arranged on a long side of the board card main body and close to one of the short sides; a controller and a power supply connector arranged on the board card main body; and an AI chip, a power supply unit, and a storage unit arranged on the board card main body and located above the interface; wherein,
the interface is electrically connected with the AI chip and the controller respectively, and the AI chip is electrically connected with the controller, the storage unit and the power supply unit; the power supply unit is electrically connected with the power connector, the AI chip and the storage unit respectively.
In some embodiments, the storage unit includes a plurality of first memories, which are uniformly distributed around the periphery of the AI chip.
In some embodiments, the power supply unit includes a first power supply module and a plurality of second power supply modules; wherein,
the first power supply module and the second power supply module are respectively and electrically connected with the AI chip;
each second power supply module is electrically connected with the corresponding first memory.
In some embodiments, the plurality of second power supply modules are evenly distributed around the periphery of the AI chip.
In some embodiments, each of the second power supply modules corresponds to two of the first memories.
In some embodiments, the acceleration board card further includes a second memory disposed on the long side of the board card body and near one of the short sides, and the second memory is electrically connected to the AI chip.
In some embodiments, the acceleration board card further comprises a clock module disposed on the long side of the card body and near one of the short sides, and the clock module is electrically connected to the AI chip.
In some embodiments, an air inlet and an air outlet are respectively arranged at the two ends of the board card main body along its length direction, and the air inlet communicates with the air outlet; wherein,
the power connector is positioned at one side close to the air inlet, and the AI chip is positioned at one side close to the air outlet.
In another aspect of the present disclosure, a computing system is provided, which includes the acceleration board described above.
In some embodiments, the computing system further comprises a host provided with a power supply and a processor, the power supply is electrically connected with the power supply unit through the power supply connector, and the processor is electrically connected with the controller and the AI chip through the interface; wherein,
the processor is used for controlling the power-on and power-off states of the AI chip and adjusting the working voltage of the AI chip through the controller; and,
the processor is further configured to control a working mode of the AI chip through the interface.
The acceleration board card provided by the utility model departs from the conventional system architecture. By optimizing the overall center-of-gravity layout of the board card, the AI chip, the storage unit, and the power supply unit are all arranged on the board card surface above the interface, so that the center of gravity of the structure is shifted toward the interface side and the interface bears part of the weight of the board card main body. This improves the reliability and operating stability of the acceleration board card, facilitates concentrated heat dissipation by the radiator, and allows the overall performance of the acceleration board card to be maximized.
Drawings
Fig. 1 is a schematic structural diagram of an acceleration board card according to an embodiment of the present disclosure;
fig. 2 is a schematic circuit connection diagram of an acceleration board card according to another embodiment of the disclosure.
Detailed Description
For a better understanding of the technical aspects of the present disclosure, reference is made to the following detailed description taken in conjunction with the accompanying drawings. It is to be understood that the described embodiments are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the disclosure without any inventive step, are within the scope of the disclosure.
Unless otherwise specifically stated, technical or scientific terms used in the present disclosure shall have the ordinary meaning as understood by those of ordinary skill in the art to which the present disclosure belongs. The use of "including" or "comprising" and the like in this disclosure does not limit the referenced shapes, numbers, steps, actions, operations, members, elements and/or groups thereof, nor does it preclude the presence or addition of one or more other different shapes, numbers, steps, actions, operations, members, elements and/or groups thereof. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number and order of the indicated features.
AI accelerators are a specialized class of hardware accelerators or computer systems intended to accelerate artificial intelligence applications, in particular artificial neural networks, machine vision, and machine learning. Typical application scenarios include robotics, the Internet of Things, and other data-intensive or sensor-driven computing tasks. These scenarios place requirements on multi-core design, low-precision arithmetic, novel dataflow architectures, and in-memory computing capability.
Therefore, as the computational demands on AI chips continue to grow, the overall design requirements for acceleration boards are relatively high, and a new acceleration board card needs to be provided to meet the computing and performance requirements of the AI chip.
As shown in fig. 1 and fig. 2, in one aspect of the present disclosure, an acceleration board card is provided, which includes: a card body 110 which is rectangular; an interface 120 disposed on a long side of the card body 110 and close to one of the short sides; a controller 130 and a power connector 140 disposed on the card body 110; and an AI chip 150, a storage unit, and a power supply unit disposed on the card body 110 and located above the interface 120. The interface 120 is electrically connected to the AI chip 150 and the controller 130, and the AI chip is electrically connected to the controller, the storage unit, and the power supply unit. The storage unit may be used to store operation data or configuration parameters of the AI chip, and the like.
Further, as shown in fig. 1 and fig. 2, an external device (e.g., a host) may be connected through the interface 120, so as to implement signal transmission between the AI chip and the external device, and signal transmission between the controller 130 and the external device, where the controller 130 is used to perform operations such as management, voltage adjustment, or reboot on the AI chip.
Further, as shown in fig. 1 and 2, the power connector 140, the AI chip 150, and the storage unit are electrically connected to the power supply unit, wherein the power connector 140 is used for connecting to a power source of an external device (e.g., a host) to maintain a power-on state of the power supply unit, and the power supply unit can respectively supply power to the AI chip 150 and the storage unit.
It should be noted that the board card is designed as a standard PCIe card with a single-width mechanical form factor, so the overall layout of the board card directly affects the reliability and heat dissipation efficiency of the structure. The acceleration board card provided by the utility model departs from previous component layouts: by optimizing the overall center-of-gravity layout of the board card, the AI chip, the storage unit, and the power supply unit are all arranged on the surface of the board card main body above the interface, so that the center of gravity of the structure is shifted toward the interface side and the interface bears part of the weight of the board card main body. This improves the reliability and operating stability of the acceleration board card, facilitates concentrated heat dissipation by the radiator, and allows the overall performance of the acceleration board card to be maximized. In addition, according to the design and structure of the AI chip, the positional relationship between the central axis of the AI chip and the central axis of the interface can be adjusted to achieve the best center-of-gravity layout.
Optionally, the AI chip in the acceleration board card provided by the present disclosure integrates an AI-optimized RISC-V (Reduced Instruction Set Computing) core. While keeping the chip flexible and compact, it can optimize basic mathematical operations common in the AI field, such as matrix operations, provides generalized support for machine learning and deep learning, and retains the RISC-V characteristics of simplicity and free extensibility.
It should be further noted that, as shown in fig. 1 and fig. 2, the interface 120 of the present example is a PCIe 4.0 x16 gold-finger standard interface, with 16 PCIe lanes between the interface and the AI chip. After the board card is connected to the external device through the interface, the external device can provide PCIe signals and a 100 MHz clock to the AI chip over the PCIe lanes. The controller 130 is an MCU controller. After the board card is connected to the external device through the interface, the external device may also read related information (e.g., standard specification information such as temperature and configuration parameters) from the MCU through the interface 120 and the SMBus (System Management Bus). Further, the external device may also transmit a system restart (reset) signal and provide a 3.3 V supply to the MCU.
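For illustration of the SMBus telemetry path described above, the following is a minimal host-side sketch in C; it is not part of the disclosed embodiment. It assumes a Linux host that exposes the SMBus as an i2c-dev adapter; the bus number, the MCU slave address (0x48), the register index (0x00), and the temperature encoding are hypothetical placeholders rather than values defined in this disclosure.

```c
/*
 * Hedged sketch: reading board telemetry (e.g., temperature) from the
 * on-board MCU over SMBus. Bus, slave address, register and data format
 * are assumed for illustration only.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);           /* host SMBus adapter (assumed) */
    if (fd < 0) { perror("open"); return 1; }

    if (ioctl(fd, I2C_SLAVE, 0x48) < 0) {          /* hypothetical MCU address */
        perror("ioctl");
        close(fd);
        return 1;
    }

    uint8_t reg = 0x00;                            /* hypothetical temperature register */
    uint8_t buf[2] = {0};
    if (write(fd, &reg, 1) != 1 || read(fd, buf, 2) != 2) {
        perror("smbus transfer");
        close(fd);
        return 1;
    }

    /* Assume the MCU reports temperature in 0.1 degC units, little-endian. */
    int temp_dc = buf[0] | (buf[1] << 8);
    printf("board temperature: %d.%d C\n", temp_dc / 10, temp_dc % 10);

    close(fd);
    return 0;
}
```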
It should be noted that the board card used in this example is a standard PCIe card. The PCIe mechanical design specification is 266.7 mm (length) x 111.15 mm (width), with a single-slot PCIe form factor, a board design supporting up to 150 W of power consumption, and a passive heat dissipation design supporting an ambient temperature range of 0 °C to 45 °C.
In some embodiments, the storage unit includes a plurality of first memories that are uniformly distributed around the periphery of the AI chip; for example, the first memories may be symmetrically distributed on both sides of the AI chip along the length direction of the board main body, and they are used for storing data related to the AI chip.
Illustratively, as shown in fig. 1 and 2, the storage unit includes eight first memories 161, and the eight first memories 161 are symmetrically distributed on both sides of the AI chip 150 along the length direction of the board main body 110; that is, four first memories 161 are arranged as a first group on the left side of the board main body 110, and the other four first memories 161 are arranged as a second group on the right side of the board main body 110.
It should be noted that the first memory of this embodiment is a DDR (Double Data Rate) memory die, specifically LPDDR4X, and the eight first memories are electrically connected to the AI chip through the LPDDR4X bus, so that fast data transmission can be realized between the AI chip and the first memories. The second power supply modules 172 are placed close to the DDR devices so as to supply power to the DDR devices and the AI chip.
In some embodiments, as shown in fig. 1 and 2, the power supply unit includes a first power supply module 171 and a plurality of second power supply modules 172. Each second power supply module 172 is electrically connected to the corresponding first memories 161 to supply power to them, and the first power supply module 171 and the second power supply modules 172 are respectively electrically connected to the AI chip 150 to supply power to the AI chip 150. Further, the AI chip 150 includes an internal power supply part for receiving the power supplied to the AI chip 150, and the first power supply module 171 is disposed on the side close to the power supply part of the AI chip 150; for example, if the power supply part is on the right side inside the AI chip 150, the first power supply module 171 is disposed on the right side outside the AI chip 150, so that the copper pour between the first power supply module and the AI chip is wide enough and the voltage drop and interference during power supply are reduced.
Further, as shown in fig. 1 and 2, the plurality of second power supply modules 172 are uniformly distributed around the periphery of the AI chip 150; for example, the second power supply modules 172 are symmetrically distributed on both sides of the AI chip 150 along the length direction of the board main body 110. Each second power supply module 172 corresponds to two first memories 161. That is, the present example includes four second power supply modules 172: two second power supply modules 172 are disposed on the left side of the first group of first memories 161 to supply power to that group, and the other two second power supply modules 172 are disposed on the right side of the second group of first memories 161 to supply power to that group.
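The grouping just described (four second power supply modules, each powering two of the eight first memories) can be summarized, purely for illustration, by the following small C sketch; the table, the index numbering, and the identifiers are assumptions, not values from the disclosure.

```c
/*
 * Hedged sketch of the power-domain grouping: four second power supply
 * modules, each supplying two of the eight first memories (LPDDR4X).
 * The numbering convention (0-3 left group, 4-7 right group) is assumed.
 */
#include <stdio.h>

#define NUM_PWR_MODULES 4
#define MEMS_PER_MODULE 2

static const int mem_of_module[NUM_PWR_MODULES][MEMS_PER_MODULE] = {
    {0, 1}, /* module 0: left side  */
    {2, 3}, /* module 1: left side  */
    {4, 5}, /* module 2: right side */
    {6, 7}, /* module 3: right side */
};

int main(void)
{
    for (int m = 0; m < NUM_PWR_MODULES; m++)
        printf("second power module %d supplies first memories %d and %d\n",
               m, mem_of_module[m][0], mem_of_module[m][1]);
    return 0;
}
```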
In addition, the first memory is configured with three supply voltages (VDD1/VDD2/VDDQ); correspondingly, as shown in fig. 1 and fig. 2, the second power supply module 172 outputs these three voltages to the first memory 161, i.e., the second power supply module 172 is a VDD1/VDD2/VDDQ power module. The voltage VDD1 supplies core 1 of the first memory, the voltage VDD2 supplies core 2 and the input buffer of the first memory 161, and the voltage VDDQ supplies the I/O buffer of the first memory 161. In this embodiment, the value range of VDD1 is 1.70 V to 1.95 V, the value range of VDD2 is 1.06 V to 1.17 V, and the value range of VDDQ is 0.57 V to 0.65 V.
The voltages configured for the AI chip include VDD/VDDL/1V8/VDD2/VDDQ. Correspondingly, as shown in fig. 1 and 2, the first power supply module 171 provides VDD2/VDDQ to the AI chip 150, the second power supply module 172 provides VDD/VDDL to the AI chip 150, and a 1V8 voltage module 141 converts the 12 V received by the power connector 140 into 1.8 V to supply the AI chip 150. That is, the second power supply module is electrically connected to the AI chip in addition to the first memory. Specifically, the voltage VDD supplies the SoC of the AI chip, the voltage VDDL supplies the MAC of the AI chip, the 1V8 voltage supplies the GPIO of the AI chip, and the voltages VDD2 and VDDQ supply the DDR PHY of the AI chip. In this embodiment, the value range of VDD is 0.72 V to 0.88 V, the value range of VDDL is 0.50 V to 0.60 V, the value range of VDD2 is 1.06 V to 1.17 V, and the value range of VDDQ is 0.57 V to 0.65 V. The specific values of VDD/VDDL required by the AI chip are related to the AI chip design and packaging. Because the AI chip in the embodiment of the present disclosure is close to the first power supply module, the routing between them is short and the power supply balls of the AI chip package are concentrated, so the VDD/VDDL voltages required by the AI chip are easier to adjust and more stable; the VDD/VDDL value range can therefore be narrower, which is convenient to implement.
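As a worked summary of the rail limits listed above, the following hedged C sketch collects them in a table and checks a measured value against its rail. The struct and function names are illustrative; the millivolt limits are taken from the ranges stated in the description, and the 1V8 rail is treated here as a fixed 1.8 V output of the 1V8 module.

```c
/*
 * Hedged sketch: supply-rail limit table and range check. Identifiers are
 * assumptions; the numeric limits follow the ranges given in the text.
 */
#include <stdbool.h>
#include <stdio.h>

struct rail_limit {
    const char *name;
    int min_mv;
    int max_mv;
};

static const struct rail_limit rails[] = {
    /* LPDDR4X (first memory) rails */
    { "VDD1", 1700, 1950 },
    { "VDD2", 1060, 1170 },
    { "VDDQ",  570,  650 },
    /* AI chip rails */
    { "VDD",   720,  880 },
    { "VDDL",  500,  600 },
    { "1V8",  1800, 1800 },  /* fixed 1.8 V rail from the 1V8 module */
};

static bool rail_in_range(const struct rail_limit *r, int measured_mv)
{
    return measured_mv >= r->min_mv && measured_mv <= r->max_mv;
}

int main(void)
{
    /* Example: check a hypothetical VDD reading of 850 mV. */
    printf("VDD 850 mV in range: %s\n",
           rail_in_range(&rails[3], 850) ? "yes" : "no");
    return 0;
}
```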
The power connector of the disclosed embodiment is a 2 × 4 power connector. The power connector is used for connecting an external power supply with the acceleration board card so as to enable the external power supply to provide 12V voltage for the first power supply module and the second power supply module.
In addition, as shown in fig. 2, the 5V voltage module 142 is configured to convert the 12V voltage received by the power connector 140 into a 5V voltage, and the 5V voltage provides a driving voltage for the first power module 171 and the second power module 172.
It should be further noted that the board-level power control is implemented by the MCU controller, that is, the controller controls each power supply module to supply power to the first memories and the AI chip. In addition, the MCU controller is connected to an external device through the PCIe interface, so that the external device supplies 3.3 V to the MCU controller. This board-level power control scheme makes the system power supply more stable and makes it convenient to implement intelligent management and control of each power supply module so as to reduce power consumption.
It should be noted that the MCU controller of this example also controls the power-up and power-down timing and the restart of the board main body, and also dynamically adjusts the threshold of the VDD/VDDL voltage according to the actual situation.
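The power-up and power-down sequencing mentioned here can be pictured with the following minimal firmware-style C sketch. The rail ordering, the enable/disable helpers, and the omission of settle delays are assumptions for illustration; the disclosure does not specify the actual sequence.

```c
/*
 * Hedged sketch of MCU board-level power sequencing: enable rails in a
 * defined order at power-up and reverse it at power-down. Rail names and
 * ordering are assumed; a real MCU would also insert settle delays.
 */
#include <stddef.h>
#include <stdio.h>

static const char *const powerup_order[] = {
    "5V_DRIVE",   /* driving voltage for the power modules */
    "VDD1",       /* LPDDR4X rails                          */
    "VDD2",
    "VDDQ",
    "VDD",        /* AI chip core rails                     */
    "VDDL",
    "1V8",        /* GPIO rail from the 1V8 module          */
};

static void enable_rail(const char *rail)  { printf("enable  %s\n", rail); }
static void disable_rail(const char *rail) { printf("disable %s\n", rail); }

static void board_power_up(void)
{
    for (size_t i = 0; i < sizeof(powerup_order) / sizeof(powerup_order[0]); i++)
        enable_rail(powerup_order[i]);
}

static void board_power_down(void)
{
    for (size_t i = sizeof(powerup_order) / sizeof(powerup_order[0]); i-- > 0; )
        disable_rail(powerup_order[i]);
}

int main(void)
{
    board_power_up();
    board_power_down();
    return 0;
}
```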
It should be understood that, in some embodiments, the board main body is further provided with monitoring modules, for example a power monitor module and an FRU (Field Replaceable Unit): the power monitor module is used to monitor the voltage data of the AI chip, the FRU is used to record the board main body information and status, and the MCU controller reads the monitoring data from the power monitor or the FRU in order to control the AI chip.
In addition, in some embodiments, as shown in fig. 2, a UART interface (not shown) is further disposed on the board main body 110; the MCU controller 130 communicates with the AI chip 150 through this UART interface to implement an in-band management function, and the AI chip 150 also reserves a debug UART interface 190 to output the working state of the AI chip as needed.
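As an illustration of the in-band management path over the UART, the following hedged C sketch opens a serial device and sends a management query; the device node, the baud rate, and the "STATUS?" command string are hypothetical, since the disclosure does not define a command protocol.

```c
/*
 * Hedged sketch: opening a UART and exchanging a management command with
 * the AI chip. Device path, baud rate and command format are assumptions.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyS1", O_RDWR | O_NOCTTY);  /* assumed UART device */
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    if (tcgetattr(fd, &tio) != 0) { perror("tcgetattr"); close(fd); return 1; }
    cfmakeraw(&tio);                 /* raw 8N1 framing */
    cfsetispeed(&tio, B115200);      /* assumed baud rate */
    cfsetospeed(&tio, B115200);
    if (tcsetattr(fd, TCSANOW, &tio) != 0) { perror("tcsetattr"); close(fd); return 1; }

    const char cmd[] = "STATUS?\n";  /* hypothetical management command */
    if (write(fd, cmd, strlen(cmd)) < 0) perror("write");

    char reply[64] = {0};
    ssize_t n = read(fd, reply, sizeof(reply) - 1);  /* blocking read for the reply */
    if (n > 0) printf("AI chip reply: %s", reply);

    close(fd);
    return 0;
}
```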
In some embodiments, as shown in fig. 1 and 2, the acceleration board card of this example further includes a second memory 162 disposed on the long side of the board main body 110 and near one of the short sides, and the second memory 162 is electrically connected to the AI chip 150.
It should be noted that the second memory of this embodiment is a Flash memory, and is used for storing the configuration information and the configuration parameters of the AI chip.
In some embodiments, as shown in fig. 1 and 2, the acceleration board card of this example further includes a clock module 180 disposed on the long side of the board main body 110 and near one of the short sides, and the clock module 180 is electrically connected to the AI chip 150 for generating a clock internally.
It should be noted that the clock module in this embodiment is not particularly limited; for example, it may be a 25 MHz clock module.
Because the acceleration board card of this embodiment departs from the traditional AI chip architecture, a neuroscale NPC architecture is adopted, and its processor instructions are built on the RISC-V instruction set, including RISC-V scalar instructions, RISC-V vector instructions, RISC-V-based custom instructions, and the like. This chip instruction system retains the flexibility of general-purpose software while offering the specificity of custom instructions, thereby meeting the multi-level requirements of AI software; it can be used to construct a wide variety of neural networks, greatly improves performance and process, and helps break free of monopolies and bottlenecks in the AI field.
In some embodiments, the acceleration board card of the present example supports multiple low-power modes, such as a stop mode, a power-saving mode, a standby mode, and a DVFS mode (dynamic voltage and frequency adjustment mode).
Specifically, the host transmits information to the AI chip through the PCIe bus, and the AI chip in turn manages the MCU controller on the board card main body through the UART interface, so that the AI chip can be placed in the different low-power modes. For example, the AI chip can be controlled into a powered-off state, i.e., it enters the stop mode.
further, the host transmits information to the AI chip through the PCIe bus, and meanwhile, the AI chip manages the MCU controller on the board card main body through the UART interface, so that the working voltage of the AI chip can be reduced under the low-voltage application scene, thereby reducing the power consumption of the acceleration board card, or the host can adjust the working dominant frequency of the AI chip through the PCIe interface to reduce the power consumption of the board card, wherein the currently supported dominant frequency is 624MHz/800MHz/900MHz/1 GHz.
Specifically, the AI chip of this example contains a plurality of independent logic banks; for example, the chip is composed of four independent logic banks whose working modes are relatively independent, and each logic bank includes a plurality of computation cores. A host connected externally through the PCIe interface controls whether each logic bank works: when only one to three logic banks work, the board card is in the power-saving mode; when none of the four logic banks works, the board card is in the standby mode and only static power is consumed.
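The mode rule for the four logic banks can be expressed, for illustration only, by the following C sketch. The enum values and function names are assumed; the mapping from active-bank count to mode follows the description above, with all four banks active treated here as normal operation.

```c
/*
 * Hedged sketch of the bank-count to power-mode rule: zero active banks
 * means standby, one to three means power saving, all four (assumed) means
 * normal operation.
 */
#include <stdio.h>

#define NUM_BANKS 4

enum board_mode { MODE_NORMAL, MODE_POWER_SAVING, MODE_STANDBY };

/* bank_mask: bit i set means logic bank i is enabled by the host. */
static enum board_mode mode_from_banks(unsigned bank_mask)
{
    int active = 0;
    for (int i = 0; i < NUM_BANKS; i++)
        if (bank_mask & (1u << i))
            active++;

    if (active == 0)
        return MODE_STANDBY;        /* only static power consumption */
    if (active < NUM_BANKS)
        return MODE_POWER_SAVING;   /* 1-3 banks working */
    return MODE_NORMAL;
}

int main(void)
{
    printf("mask 0x0 -> mode %d\n", mode_from_banks(0x0));  /* standby      */
    printf("mask 0x3 -> mode %d\n", mode_from_banks(0x3));  /* power saving */
    printf("mask 0xF -> mode %d\n", mode_from_banks(0xF));  /* normal       */
    return 0;
}
```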
Further, in order to improve the heat dissipation performance of the acceleration board card, in some embodiments, an air inlet and an air outlet are respectively arranged at the two ends of the board main body along its length direction, and the air inlet communicates with the air outlet; the power connector is located on the side close to the air inlet, and the AI chip is located on the side close to the air outlet. That is, both the left and right ends of the board card main body of this example have vents, and the heat dissipation supports a bidirectional airflow, i.e., air can flow from left to right or from right to left, which makes the adaptation and heat dissipation schemes of the acceleration board card more flexible.
Optionally, based on the overall layout shown in fig. 1, the airflow is unidirectional from right to left, i.e., the cooling air first passes over the power controller on the right side and then reaches the AI chip on the left side. Because the temperature of the power controller is lower, the air is not heated much after passing over it, so its influence on the temperature of the AI chip is small and the heat dissipation effect is better.
In another aspect of the present disclosure, a computing system is provided, which includes the aforementioned acceleration board, and the detailed structure of the acceleration board is referred to the aforementioned description and is not described herein again.
In some embodiments, the computing system further includes a host provided with a power supply electrically connected to the power supply unit through a power supply connector and a processor electrically connected to the controller and the AI chip through an interface. The processor is used for controlling the power-on and power-off state of the AI chip and adjusting the working voltage of the AI chip through the controller; and the processor is also used for controlling the working mode of the AI chip through the interface.
It should be understood that, based on the specific structure of the acceleration board card, the AI chip in this example includes four independent logic banks; that is, the processor in the host controls the working states of the four independent logic banks through the interface so as to control the working mode of the AI chip. Specifically, when the processor in the host controls one to three logic banks to work through the interface, the board card is in the power-saving mode; when the processor in the host controls none of the four logic banks to work through the interface, the board card is in the standby mode.
It should be noted that the host of this example further includes an SMBus management module, a clock source, a motherboard control module, and the like. The SMBus management module is configured to manage SMBus signals through the interface; the SMBus signals are transmitted to the MCU of the acceleration board card through the interface of the acceleration board card and the SMBus, so that the MCU controls the power-off state of the AI chip, and the like. The clock source is used to control the clock module through the interface, and the motherboard control module is used to connect to the controller through the interface to control the reset of the AI chip.
The present disclosure provides an acceleration board and a computing system, which have the following beneficial effects compared to the prior art:
First, the present disclosure optimizes the overall layout of the board card main body by arranging the AI chip, the storage unit, and the power supply unit on the board card surface above the interface, so that the center of gravity of the structure is shifted toward the interface side and the interface bears part of the weight of the board card main body. This improves the reliability and operating stability of the acceleration board card, facilitates concentrated heat dissipation by the radiator, and allows the overall performance of the acceleration board card to be maximized.
Second, the self-developed AI chip is designed on the basis of the RISC-V instruction set and a brand-new chip architecture. The chip instruction system retains the flexibility of general-purpose software while offering the specificity of custom instructions, thereby meeting the multi-level requirements of AI software; it can be used to construct a wide variety of neural networks, supports multiple low-power modes, and surpasses comparable chips in performance and process.
Third, the present disclosure is based on a single-width, full-height, 3/4-length standard PCIe card mechanism that can be adapted more flexibly to standard machines.
It is to be understood that the above embodiments are merely exemplary embodiments that are employed to illustrate the principles of the present disclosure, and that the present disclosure is not limited thereto. It will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the disclosure, and these are to be considered as the scope of the disclosure.

Claims (10)

1. An acceleration board card, comprising: the board card main body is rectangular; the interface is arranged on the long side of the board card body and close to the short side of one side, the controller and the power supply connector are arranged on the board card body, and the AI chip, the power supply unit and the storage unit are arranged on the board card body and are positioned above the interface; wherein,
the interface is electrically connected with the AI chip and the controller respectively, and the AI chip is electrically connected with the controller, the storage unit and the power supply unit; the power supply unit is electrically connected with the power connector, the AI chip and the storage unit respectively.
2. The acceleration board of claim 1, wherein the storage unit includes a plurality of first memories, and the plurality of first memories are evenly distributed on the AI chip peripheral side.
3. The acceleration board card of claim 2, wherein the power supply unit comprises a first power supply module and a plurality of second power supply modules; wherein,
the first power supply module and the second power supply module are respectively and electrically connected with the AI chip;
each second power supply module is electrically connected with the corresponding first memory.
4. The acceleration board card of claim 3, wherein the plurality of second power supply modules are evenly distributed around the AI chip.
5. The acceleration board card of claim 4, wherein each of the second power supply modules corresponds to two of the first memories.
6. The acceleration board card of any one of claims 1 to 5, further comprising a second memory disposed on the long side of the card body and near the short side of one side, the second memory being electrically connected to the AI chip.
7. The acceleration board card of any one of claims 1 to 5, further comprising a clock module disposed on the long side of the card body and near the short side of one side, the clock module being electrically connected to the AI chip.
8. The acceleration board card of any one of claims 1 to 5, wherein an air inlet and an air outlet are respectively provided at both ends of the board card body along a length direction thereof, the air inlet being communicated with the air outlet; wherein,
the power connector is positioned at one side close to the air inlet, and the AI chip is positioned at one side close to the air outlet.
9. A computing system comprising the acceleration board of any of claims 1 to 8.
10. The computing system according to claim 9, further comprising a host provided with a power supply and a processor, the power supply being electrically connected to the power supply unit through the power supply connector, the processor being electrically connected to the controller and the AI chip through the interface; wherein,
the processor is used for controlling the power-on and power-off state of the AI chip and adjusting the working voltage of the AI chip through the controller; and,
the processor is further configured to control a working mode of the AI chip through the interface.
CN202122258024.5U 2021-09-17 2021-09-17 Acceleration board card and computing system Active CN215642686U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202122258024.5U CN215642686U (en) 2021-09-17 2021-09-17 Acceleration board card and computing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202122258024.5U CN215642686U (en) 2021-09-17 2021-09-17 Acceleration board card and computing system

Publications (1)

Publication Number Publication Date
CN215642686U 2022-01-25

Family

ID=79917250

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202122258024.5U Active CN215642686U (en) 2021-09-17 2021-09-17 Acceleration board card and computing system

Country Status (1)

Country Link
CN (1) CN215642686U (en)


Legal Events

GR01 Patent grant
CP03 Change of name, title or address
Address after: Room 201, No. 6 Fengtong Heng Street, Huangpu District, Guangzhou City, Guangdong Province, 510799
Patentee after: Guangzhou Ximu Semiconductor Technology Co.,Ltd.
Country or region after: China
Address before: 100020 801, 802, 803, 808, 8th floor, building 5, courtyard 5, Laiguangying West Road, Chaoyang District, Beijing
Patentee before: Beijing SIMM Computing Technology Co.,Ltd.
Country or region before: China