CN114179824B - Unmanned computing system - Google Patents

Unmanned computing system

Info

Publication number
CN114179824B
Authority
CN
China
Prior art keywords
vehicle
module
interface
gpu acceleration
deserializer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111442544.XA
Other languages
Chinese (zh)
Other versions
CN114179824A (en)
Inventor
张晶威
刘铁军
董培强
韩大峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Inspur Smart Computing Technology Co Ltd
Original Assignee
Guangdong Inspur Smart Computing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Inspur Smart Computing Technology Co Ltd
Priority to CN202111442544.XA
Publication of CN114179824A
Application granted
Publication of CN114179824B
Legal status: Active
Anticipated expiration


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00 Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001 Planning or execution of driving tasks
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W2420/00 Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W2420/40 Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B60W2420/403 Image sensing, e.g. optical camera

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses an unmanned computing system comprising a sensing and AI accelerator, an AI acceleration and computation co-processing unit, a computing unit, a data exchange unit, and a functional safety module. The sensing and AI accelerator comprises a plurality of first vehicle-mounted GPU acceleration modules, and the AI acceleration and computation co-processing unit comprises a plurality of second vehicle-mounted GPU acceleration modules. The data exchange unit comprises a PCIe cross switch and an Ethernet switch; the PCIe cross switch is connected with the first vehicle-mounted GPU acceleration modules, the second vehicle-mounted GPU acceleration modules and the computing unit, and the Ethernet switch is connected with the first vehicle-mounted GPU acceleration modules, the second vehicle-mounted GPU acceleration modules and the computing unit. The functional safety module is connected with the first vehicle-mounted GPU acceleration modules, the second vehicle-mounted GPU acceleration modules and the computing unit. The application meets the iteration requirements of unmanned-driving software algorithms and the computing-power requirements of unmanned driving.

Description

Unmanned computing system
Technical Field
The application relates to the technical field of unmanned driving, in particular to an unmanned computing system.
Background
Unmanned driving is an emerging field that is still in the stage of theoretical research, real-road testing, experimentation and technology accumulation, and in the automotive industry, which places high demands on safety, reliability and the environmental adaptability of equipment, there is currently no unified architecture or standard for the computing platform.
Compared with the iterative evolution of algorithms such as unmanned perception and decision-making, the development, debugging and testing cycle of a hardware computing platform is long, making it difficult for the hardware to keep pace with continuous updates to software and algorithms. Taking the two main sensor devices of the perception platform, the vehicle-mounted camera and the lidar, as examples: manufacturers have almost no unified standard for camera configuration and application, and the parameter configuration and number of cameras vary widely. Moreover, camera technology iterates quickly; the 2 Mega (megapixel) resolution that is mainstream in the unmanned-driving field today is moving toward 5 Mega and 8 Mega, which places corresponding demands on the image data link hardware and on the computing platform that must process the resulting mass of data. The situation is similar for lidar configuration and application; for example, the resolution of the lidar and the number of radars deployed are also changing.
Because of the lack of a system platform and standards for sensing peripherals, unmanned-driving equipment evolves by being "patched": when peripherals must be added to a traditional industrial PC platform, a distributed system or a high-speed I/O bus expansion interface box is adopted and the sensors are connected externally. In other words, the system platform is fixed at the start of planning, without considering how the algorithms and the sensing peripherals will evolve.
In view of the above problems, there is a need for a computing platform that can keep up with current software algorithm updates and iterations and meet the computing-power requirements, while also satisfying the requirements of vehicle-mounted environmental adaptability and power consumption.
Disclosure of Invention
The application aims to provide an unmanned computing system that meets the iteration requirements of unmanned-driving software algorithms, meets the computing-power requirements of unmanned driving, and satisfies the requirements of vehicle-mounted environmental adaptability and power consumption.
In order to achieve the above object, the present application provides an unmanned computing system, which includes a sensing and AI accelerator, an AI acceleration and computation co-processing unit, a computing unit, a data exchange unit, and a functional safety module;
the sensing and AI accelerator comprises a plurality of first vehicle-mounted GPU acceleration modules, wherein each first vehicle-mounted GPU acceleration module is connected with 4 image signal chain deserializers, and each image signal chain deserializer is connected with 2 vehicle-mounted cameras;
The AI acceleration and computation co-processing unit comprises a plurality of second vehicle-mounted GPU acceleration modules, and each second vehicle-mounted GPU acceleration module is connected with 4 image signal chain deserializers;
The data exchange unit comprises a PCIe cross switch and an Ethernet switch, the PCIe cross switch is connected with the first vehicle-mounted GPU acceleration module, the second vehicle-mounted GPU acceleration module and the computing unit, and the Ethernet switch is connected with the first vehicle-mounted GPU acceleration module, the second vehicle-mounted GPU acceleration module and the computing unit;
The functional safety module is connected with the first vehicle-mounted GPU acceleration module, the second vehicle-mounted GPU acceleration module and the computing unit.
The image sensor of the vehicle-mounted camera is connected with the serializer through an MIPI interface, the image signal chain deserializer is connected with 2 serializers, and the image signal chain deserializer is connected with the sensing and AI accelerator and the AI acceleration and calculation co-processing unit through the MIPI interface.
The image signal chain deserializer comprises a main I2C interface and a transparent I2C interface, the first vehicle-mounted GPU acceleration module accesses and configures a register of the image signal chain deserializer through the main I2C interface, and the first vehicle-mounted GPU acceleration module controls a CMOS sensor in a vehicle-mounted camera connected with the image signal chain deserializer through the transparent I2C interface.
The image signal chain deserializer comprises a GPIO interface, and the first vehicle-mounted GPU acceleration module triggers the image signal chain deserializer to send GPIO signals through the GPIO interface so as to control a CMOS sensor in a vehicle-mounted camera connected with the image signal chain deserializer.
Wherein the sensing and AI accelerator further comprises any one or a combination of several of an HDMI interface, a UART interface, a JTAG debug interface and a USB interface.
The Ethernet switch is used for accessing laser radar data and transmitting the laser radar data to the sensing and AI accelerator, the AI acceleration and calculation co-processing unit and the calculation unit in parallel, and the sensing and AI accelerator receives the laser radar data through a CAN-FS interface.
The Ethernet switch is used for interacting management data of the first vehicle-mounted GPU acceleration module, the second vehicle-mounted GPU acceleration module and the computing unit.
The plurality of first vehicle-mounted GPU acceleration modules perform DMA operations among themselves through the data exchange unit, and the plurality of second vehicle-mounted GPU acceleration modules perform DMA operations among themselves through the data exchange unit.
The functional safety module is used for making a safety strategy according to the module state of the first vehicle-mounted GPU acceleration module, the second vehicle-mounted GPU acceleration module or the computing unit and the key data, and sending the safety strategy to a domain controller of the vehicle through a CAN bus interface.
The functional safety module acquires module states and key data of the first vehicle-mounted GPU acceleration module and the second vehicle-mounted GPU acceleration module through an SPI interface;
The module state and key data of the computing unit are sent to the CPLD through an LPC interface, and the functional safety module accesses the CPLD through an SPI interface to acquire the module state and key data of the computing unit.
According to the above scheme, the unmanned computing system provided by the application comprises: a sensing and AI (Artificial Intelligence) accelerator, an AI acceleration and computation co-processing unit, a computing unit, a data exchange unit and a functional safety module; the sensing and AI accelerator comprises a plurality of first vehicle-mounted GPU (Graphics Processing Unit) acceleration modules, wherein each first vehicle-mounted GPU acceleration module is connected with 4 image signal chain deserializers, and each image signal chain deserializer is connected with 2 vehicle-mounted cameras; the AI acceleration and computation co-processing unit comprises a plurality of second vehicle-mounted GPU acceleration modules, and each second vehicle-mounted GPU acceleration module is connected with 4 image signal chain deserializers; the data exchange unit comprises a PCIe (Peripheral Component Interconnect Express, a high-speed serial computer expansion bus standard) cross switch and an Ethernet switch, wherein the PCIe cross switch is connected with the first vehicle-mounted GPU acceleration module, the second vehicle-mounted GPU acceleration module and the computing unit, and the Ethernet switch is connected with the first vehicle-mounted GPU acceleration module, the second vehicle-mounted GPU acceleration module and the computing unit; the functional safety module is connected with the first vehicle-mounted GPU acceleration module, the second vehicle-mounted GPU acceleration module and the computing unit.
The unmanned computing system provided by the application offers an integrated computing platform for unmanned driving, which is conducive to miniaturization and low power consumption of the system platform, can be conveniently installed in the vehicle, and reduces the cost of the physical wiring harness. At the same time, the requirements of software algorithm updates and iterations can be met by increasing the number of vehicle-mounted GPU acceleration modules and by iterative upgrades of the platform. Therefore, the unmanned computing system provided by the application meets the iteration requirements of unmanned-driving software algorithms, meets the computing-power requirements of unmanned driving, and satisfies the requirements of vehicle-mounted environmental adaptability and power consumption.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of this specification, illustrate the disclosure and together with the description serve to explain, but do not limit the disclosure. In the drawings:
FIG. 1 is a block diagram of an unmanned computing system shown according to an exemplary embodiment;
FIG. 2 is a framework diagram of a sensing and AI accelerator shown in accordance with an exemplary embodiment;
FIG. 3 is a block diagram illustrating an image data link transmission according to an exemplary embodiment;
FIG. 4 is a block diagram illustrating a control strategy according to an exemplary embodiment;
FIG. 5 is a flow chart illustrating a parameter configuration for a deserializer according to an exemplary embodiment;
FIG. 6 is a schematic diagram illustrating a PCIe communication bus relationship between a PCIe crossbar and a computing unit in accordance with an exemplary embodiment;
FIG. 7 is a diagram illustrating a functional security module and verification data interface with a computing unit according to an exemplary embodiment;
FIG. 8 is a functional implementation block diagram of a PPS (Pulse Per Second, output by the satellite navigation system) HUB (integrator) implemented by a CPLD (Complex Programmable Logic Device), according to an exemplary embodiment.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application. In addition, in the embodiments of the present application, "first", "second", etc. are used to distinguish similar objects and are not necessarily used to describe a particular order or precedence.
The embodiment of the application discloses an unmanned computing system, which meets the iteration requirements of unmanned-driving software algorithms, meets the computing-power requirements of unmanned driving, and satisfies the requirements of vehicle-mounted environmental adaptability and power consumption.
Referring to FIG. 1, a block diagram of an unmanned computing system is shown according to an exemplary embodiment; as shown in FIG. 1, the system includes a sensing and AI accelerator, an AI acceleration and computation co-processing unit, a computing unit, a data exchange unit, and a functional safety module;
the sensing and AI accelerator comprises a plurality of first vehicle-mounted GPU acceleration modules, wherein each first vehicle-mounted GPU acceleration module is connected with 4 image signal chain deserializers, and each image signal chain deserializer is connected with 2 vehicle-mounted cameras;
The AI acceleration and computation co-processing unit comprises a plurality of second vehicle-mounted GPU acceleration modules, and each second vehicle-mounted GPU acceleration module is connected with 4 image signal chain deserializers;
The data exchange unit comprises a PCIe cross switch and an Ethernet switch, the PCIe cross switch is connected with the first vehicle-mounted GPU acceleration module, the second vehicle-mounted GPU acceleration module and the computing unit, and the Ethernet switch is connected with the first vehicle-mounted GPU acceleration module, the second vehicle-mounted GPU acceleration module and the computing unit;
The functional safety module is connected with the first vehicle-mounted GPU acceleration module, the second vehicle-mounted GPU acceleration module and the computing unit.
In a specific implementation, the sensing and AI accelerator includes a plurality of first vehicle-mounted GPU acceleration modules, which act as PCIe peripheral devices on the high-speed communication link and may be Nvidia autonomous-driving series modules. As shown in FIG. 1, the accelerator may include 2 first vehicle-mounted GPU acceleration modules, specifically Xavier modules (modules for autonomous driving) or, of course, the upgraded Orin modules. The sensing and AI accelerator performs vehicle-mounted camera data acquisition, millimeter-wave data acquisition, and fusion of lidar data with image data. It supports virtualization of computing resources and peripherals and is capable of dynamically performing DMA (Direct Memory Access) transfers of data to the computation co-processing unit or receiving data from other modules; it additionally contains low-speed bus and debug interface resources.
The framework diagram of the sensing and AI accelerator is shown in FIG. 2; it comprises two main functional parts, one being the image data transmission chain, and the other being the control by the GPU module of each functional unit in the image chain.
In the image data transmission chain, a MAX9296 may be used as the deserializer of the image signal, supporting a high-speed image link and a front-end image module based on GMSL (Gigabit Multimedia Serial Link) or GMSL2. The block diagram of the image data link transmission is shown in FIG. 3: the image sensor of the vehicle-mounted camera is connected to a serializer through an MIPI (Mobile Industry Processor Interface) interface, the image signal chain deserializer is connected to 2 serializers through Fakra connectors, and the image signal chain deserializer is connected to the sensing and AI accelerator and the AI acceleration and computation co-processing unit through MIPI interfaces. In a specific implementation, the front-end interface of the deserializer supports a 2-camera input, and the back end includes two sets of x4-lane MIPI data interfaces, i.e., the CSI-2 (Camera Serial Interface 2) 4-lane MIPI in FIGS. 1 and 2; the data stream can be concentrated on one set of x4-lane MIPI interfaces and uploaded to the GPU module. Because the Xavier GPU module employed in this embodiment supports at most 4 sets of x4-lane MIPI signals, one Xavier module supports 8 cameras, and the sensing and AI accelerator comprising 2 Xavier modules supports access for 16 camera images, which meets the current mainstream vehicle-mounted image configuration requirements. Meanwhile, the data of the 16 cameras are backed up to a module of the AI acceleration and computation co-processing unit through the other group of MIPI interfaces of the deserializers; that is, the AI acceleration and computation co-processing unit can be understood as a redundant backup of the sensing and AI accelerator. The data are copied in parallel to the sensing and AI accelerator and the AI acceleration and computation co-processing unit, which makes it convenient for the two acceleration units to process different data tasks respectively; for example, one acceleration unit can be used for perception data fusion while the other is used for high-precision map comparison. The parallel copying of the data provides redundancy for the vehicle-mounted data and, in accordance with the functional safety strategy, can effectively improve the safety of the system.
As a possible implementation manner, the image signal chain deserializer includes a main I2C (Inter-Integrated Circuit) interface and a transparent I2C interface; the first vehicle-mounted GPU acceleration module accesses and configures the registers of the image signal chain deserializer through the main I2C interface, and controls the CMOS (Complementary Metal-Oxide-Semiconductor) sensor in the vehicle-mounted camera connected with the image signal chain deserializer through the transparent I2C interface.
During the initialization of the image data link, each module on the link (including the CMOS sensor, the serializer and the deserializer) needs to be configured by the back-end controller (i.e., the first vehicle-mounted GPU acceleration module) through the control bus. In addition, in a practical application scenario, i.e., while images are being transmitted in real time, the front-end sensor may need to be controlled in real time, for example to trigger exposure. The control strategy adopted in this embodiment is shown in FIG. 4: the deserializer provides 3 I2C interfaces, comprising one main I2C interface (Main I2C Interface) and two transparent interfaces (Pass-through I2C Interface). The main I2C interface is a fully functional I2C interface, i.e., the registers of the deserializer may be accessed and configured through it, such as the I2C A and I2C B interfaces in FIG. 4. Initialization parameter configuration of the deserializer is carried out through the main I2C interface. Because the I2C controllers of the first vehicle-mounted GPU acceleration module are limited (for example, the Xavier has 5 I2C controllers), the application uses an I2C MUX (multiplexer) to manage the initialization configuration and control (I2C A-D) of the 4 deserializers, and during initialization the GPU acceleration module sequentially gates the route to each of the 4 deserializers that need to be configured and performs the parameter configuration.
In addition, the hardware configuration pins of the deserializer support 4 I2C slave address configurations, so the 4 deserializers behind the MUX can be strapped to 4 different addresses. Because the parameter configuration of each deserializer corresponds to its front-end camera application, dual addressing is adopted, distinguishing both the routing address and the target device address, which improves the correctness and safety of configuring the deserializers in software. As shown in FIG. 5, to configure the parameters of a deserializer, the GPU module gates the MUX routing address of the target deserializer and closes the MUX switch (i.e., the route is on), then addresses the I2C address of the target deserializer and configures its parameters.
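To make the dual-addressing flow of FIG. 5 concrete, the following is a minimal host-side sketch, assuming a Linux i2c-dev character device, a hypothetical one-hot I2C MUX at address 0x70, deserializer slave addresses 0x48-0x4B and a 16-bit register map; none of these addresses, nor the register written, are taken from the patent.

```c
/* Minimal sketch (not from the patent): gate one I2C MUX channel and write a
 * deserializer register over Linux i2c-dev. Bus path, MUX address, deserializer
 * addresses and register layout are illustrative assumptions. */
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

#define MUX_ADDR 0x70  /* hypothetical one-hot I2C MUX */

static int select_mux_channel(int fd, uint8_t channel)
{
    uint8_t ctrl = (uint8_t)(1u << channel);      /* one-hot: route A..D */
    if (ioctl(fd, I2C_SLAVE, MUX_ADDR) < 0) return -1;
    return (write(fd, &ctrl, 1) == 1) ? 0 : -1;   /* close the MUX switch */
}

static int deser_write_reg(int fd, uint8_t dev_addr, uint16_t reg, uint8_t val)
{
    uint8_t buf[3] = { reg >> 8, reg & 0xFF, val }; /* assumes 16-bit registers */
    if (ioctl(fd, I2C_SLAVE, dev_addr) < 0) return -1;
    return (write(fd, buf, 3) == 3) ? 0 : -1;
}

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);          /* bus number is illustrative */
    if (fd < 0) return 1;

    /* Dual addressing: first gate the route, then address the target device. */
    for (uint8_t ch = 0; ch < 4; ch++) {
        if (select_mux_channel(fd, ch) == 0)
            deser_write_reg(fd, 0x48 + ch, 0x0010, 0x01); /* illustrative write */
    }
    close(fd);
    return 0;
}
```

Each channel is gated in turn, mirroring the sequential configuration described above: only one route is closed at a time, so the configuration traffic is serialized.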
The other two I2C interfaces are transparent interfaces (Pass-through I2C Interface). They cannot access the internal resources of the deserializer; instead, when the serializer and deserializer are initialized, these two I2C interfaces of the deserializer are mapped onto the serializer's I2C interface toward the sensor. The vehicle-mounted GPU module can therefore operate I2C1 and I2C2, and its control messages are converted, through the serializer's mapping of I2C1 and I2C2, into control of the CMOS sensor in the front-end camera. In this way the GPU module indirectly controls the exposure (triggering) of the camera; this is a combination of hardware peripheral mapping and a software-message control mode.
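A minimal sketch of the pass-through path follows, under the assumption that the serializer/deserializer pair has already been initialized to tunnel I2C, so that the remote CMOS sensor simply appears as a slave on the GPU module's back-end bus; the sensor address and "trigger" register below are illustrative, not from the patent.

```c
/* Minimal sketch (not from the patent): once the serializer/deserializer pair
 * tunnels I2C, the camera's sensor looks like an ordinary slave on the back-end
 * bus. The sensor address and register are illustrative assumptions. */
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

#define SENSOR_ADDR 0x36   /* hypothetical CMOS sensor address seen through the tunnel */

static int sensor_write(int fd, uint16_t reg, uint8_t val)
{
    uint8_t buf[3] = { reg >> 8, reg & 0xFF, val };
    if (ioctl(fd, I2C_SLAVE, SENSOR_ADDR) < 0) return -1;
    return (write(fd, buf, 3) == 3) ? 0 : -1;
}

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);
    if (fd < 0) return 1;
    /* The write is issued on the GPU module's I2C1/I2C2; the deserializer and
     * serializer forward it transparently to the sensor in the camera. */
    sensor_write(fd, 0x3000, 0x01);   /* e.g. a software exposure trigger (assumed) */
    close(fd);
    return 0;
}
```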
Control through software messages to registers is supported by all the deserializers in this embodiment and can control the front-end camera hardware, with the advantage of saving I2C and GPIO (general-purpose input/output) interface resources. However, owing to the physical characteristics of the I2C MUX, only one communication route can be gated at a time, so software-message control has limited real-time performance, i.e., poor parallelism (for simultaneously triggered exposure) and speed. In the camera configuration strategy, this mode can be used for cameras that identify traffic lane markings, traffic lights and the like.
Further, the image signal chain deserializer includes a GPIO interface, and the first vehicle-mounted GPU acceleration module triggers the image signal chain deserializer to send a GPIO signal through the GPIO interface so as to control the CMOS sensor in the vehicle-mounted camera connected with the image signal chain deserializer. In a specific implementation, for a surround-view camera set (composed of 4 cameras), this embodiment further includes a configuration mode triggered by GPIO mapping, as in the configuration of GPIO 1-4 in the block diagram of FIG. 4: the first vehicle-mounted GPU acceleration module controls the CMOS sensor via the serializer GPIO in the camera by triggering the GPIO signals of the deserializers. Because the GPIO signal resources of the first vehicle-mounted GPU acceleration module are limited, the GPIO control policy is implemented according to the application scenario of the cameras. For control of the surround-view cameras, one GPIO signal of the first vehicle-mounted GPU acceleration module is connected to one driver, and the driver's outputs are connected to GPIO 1-4 of each deserializer, so that simultaneous exposure triggering by a hardware signal is achieved. Compared with triggering by I2C messages, this has better timeliness, and the parallelism of simultaneous triggering is guaranteed by hardware.
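A minimal sketch of the hardware-trigger side, assuming the GPU module's trigger GPIO is exposed to user space and driven with the libgpiod v1 API; the chip name, line offset and pulse width are illustrative assumptions.

```c
/* Minimal sketch (not from the patent): pulse one GPU-module GPIO that feeds a
 * driver fanned out to GPIO1-4 of the deserializers, so all cameras receive the
 * exposure trigger from the same hardware edge. Uses the libgpiod v1 API. */
#include <gpiod.h>
#include <unistd.h>

int main(void)
{
    struct gpiod_chip *chip = gpiod_chip_open_by_name("gpiochip0"); /* assumed */
    if (!chip) return 1;

    struct gpiod_line *trig = gpiod_chip_get_line(chip, 12);        /* assumed offset */
    if (!trig || gpiod_line_request_output(trig, "cam-sync", 0) < 0) {
        gpiod_chip_close(chip);
        return 1;
    }

    /* One rising edge reaches every deserializer at (nearly) the same instant;
     * parallelism is guaranteed by the hardware fan-out, not by software. */
    gpiod_line_set_value(trig, 1);
    usleep(100);                       /* illustrative pulse width */
    gpiod_line_set_value(trig, 0);

    gpiod_line_release(trig);
    gpiod_chip_close(chip);
    return 0;
}
```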
Preferably, the sensing and AI accelerator further comprises any one or a combination of any several of an HDMI (High Definition Multimedia Interface) interface, a UART (Universal Asynchronous Receiver/Transmitter) interface, a JTAG (Joint Test Action Group, an international standard test protocol) debug interface and a USB (Universal Serial Bus) interface.
The data exchange unit comprises a PCIe cross switch and an Ethernet switch. This embodiment uses a PCIe switch to connect the GPU peripheral modules, i.e., the first vehicle-mounted GPU acceleration modules and the second vehicle-mounted GPU acceleration modules, and supports DMA operation of data between the modules: DMA operations between the first vehicle-mounted GPU acceleration modules are performed through the data exchange unit, and DMA operations between the second vehicle-mounted GPU acceleration modules are likewise performed through the data exchange unit. Further, the PCIe communication bus relationship is shown in FIG. 6; the bus is uplinked to the X86 computing platform unit through a PCIe transparent bridge.
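As an illustration of what the computing unit sees through the transparent bridge, the sketch below simply enumerates the PCIe endpoints visible in Linux sysfs; it is read-only host-side code and does not show the DMA programming itself, which would be handled by the GPU-module drivers.

```c
/* Minimal sketch (not from the patent): enumerate the PCIe devices that the
 * x86 computing unit sees behind the transparent bridge / PCIe cross switch by
 * walking sysfs. Read-only; inter-module DMA setup is not shown here. */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

static void read_id(const char *dev, const char *file, char *out, size_t n)
{
    char path[256];
    snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/%s", dev, file);
    FILE *f = fopen(path, "r");
    if (f) {
        if (fgets(out, (int)n, f))
            out[strcspn(out, "\n")] = '\0';
        fclose(f);
    }
}

int main(void)
{
    DIR *d = opendir("/sys/bus/pci/devices");
    if (!d) return 1;
    for (struct dirent *e; (e = readdir(d)) != NULL; ) {
        if (e->d_name[0] == '.') continue;
        char vendor[16] = "?", device[16] = "?";
        read_id(e->d_name, "vendor", vendor, sizeof(vendor));
        read_id(e->d_name, "device", device, sizeof(device));
        printf("%s  vendor=%s device=%s\n", e->d_name, vendor, device);
    }
    closedir(d);
    return 0;
}
```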
The Ethernet switch is used to access the lidar data and transmit it in parallel to the sensing and AI accelerator, the AI acceleration and computation co-processing unit and the computing unit, and the sensing and AI accelerator receives the lidar data through a CAN-FS interface. In this embodiment, the Ethernet switch serves as the data access point of the lidar and can send the data in parallel to the sensing and AI accelerator, the AI acceleration and computation co-processing unit and the computing unit, so that the lidar point-cloud data is distributed to each computing module in parallel, which helps improve data-processing efficiency and balance the computing load of each module. The sensing and AI accelerator uses the CAN buses of the first vehicle-mounted GPU acceleration modules as the millimeter-wave radar interfaces, where each first vehicle-mounted GPU acceleration module supports 2 CAN-FS interfaces. In this embodiment, the Ethernet switch and the first vehicle-mounted GPU acceleration module are connected by a 1000Base-T Ethernet interface; that is, as shown in FIG. 2, the first vehicle-mounted GPU acceleration module is connected to a PHY (physical-layer transceiver) through an RGMII (Reduced Gigabit Media Independent Interface) interface, and the 1000Base-T Ethernet interface is then connected to the Ethernet switch.
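For the lidar path, a minimal receive-side sketch is given below, assuming the lidar streams point-cloud packets over UDP through the Ethernet switch; the port number and packet handling are illustrative assumptions rather than the patent's protocol.

```c
/* Minimal sketch (not from the patent): receive lidar point-cloud packets that
 * the Ethernet switch forwards to this module. UDP streaming and the port
 * number are common practice but are assumptions here. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) return 1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(2368);            /* illustrative lidar data port */
    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) { close(s); return 1; }

    unsigned char pkt[1500];
    for (int i = 0; i < 10; i++) {          /* read a few packets as a demo */
        ssize_t n = recv(s, pkt, sizeof(pkt), 0);
        if (n > 0)
            printf("lidar packet: %zd bytes\n", n);
    }
    close(s);
    return 0;
}
```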
Further, the Ethernet switch is configured to exchange the management data of the first vehicle-mounted GPU acceleration modules, the second vehicle-mounted GPU acceleration modules and the computing unit. This embodiment realizes management data interaction among multiple modules through the Ethernet switch. The X86 computing platform includes a wireless communication (4G/5G/Wi-Fi) access interface and can communicate with each module of the system through the Ethernet switch; over this communication path the system can be managed (e.g. upgraded) and accessed in real time. The Ethernet switch and the PHY of each module support the IEEE 1588 protocol, so as to realize time synchronization of each computing unit. This time synchronization network runs in parallel with the PPS-signal HUB implemented by the CPLD in the embodiment of the present application.
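As a small illustration of the IEEE 1588 side, the sketch below reads a PTP hardware clock through the standard Linux /dev/ptpN interface (the FD_TO_CLOCKID mapping follows the kernel's testptp example); the device node is an assumption, and the servo that actually disciplines the clock is not shown.

```c
/* Minimal sketch (not from the patent): read the IEEE 1588 PTP hardware clock
 * (PHC) of a 1588-capable MAC/PHY through the Linux /dev/ptpN interface. The
 * device node name is an assumption; synchronization itself (a ptp4l-style
 * servo) is outside this sketch. */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define CLOCKFD 3
#define FD_TO_CLOCKID(fd) ((~(clockid_t)(fd) << 3) | CLOCKFD)

int main(void)
{
    int fd = open("/dev/ptp0", O_RDWR);     /* PHC of the 1588-capable PHY (assumed) */
    if (fd < 0) return 1;

    struct timespec ts;
    if (clock_gettime(FD_TO_CLOCKID(fd), &ts) == 0)
        printf("PHC time: %lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);

    close(fd);
    return 0;
}
```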
The computing unit may specifically be an X86 computing unit adopting COM Express (Computer-on-Module Express) Type 7; in addition to the above wireless-communication access interface, it includes SATA (Serial Advanced Technology Attachment), USB 2.0 & 3.0, UART and GPRMC (recommended positioning information) interfaces, and connects to the BMC (Baseboard Management Controller) through PCIe, NCSI (Network Connectivity Status Indicator) and GPIO. At present, both the X86 computer and the vehicle-mounted GPU acceleration modules of the computing platform use hardware interfaces that allow smooth upgrades, i.e., standardized module form factors, so the upgrade requirements of the algorithms can be met by increasing the number of modules and iteratively upgrading the platform. For example, COM Express computer modules are basically consistent in interface standard and form factor, which makes it convenient to upgrade smoothly according to the requirements of the platform and the application. The heterogeneous-computing GPU modules likewise share the same interface form factor and can be upgraded following the technical evolution of upstream manufacturers.
Furthermore, the functional safety module is configured to formulate a safety strategy according to the module status and key data of the first vehicle-mounted GPU acceleration module, the second vehicle-mounted GPU acceleration module or the computing unit, and to send the safety strategy to a domain controller of the vehicle through a CAN bus interface. The functional safety module acquires the module status and key data of the first vehicle-mounted GPU acceleration modules and the second vehicle-mounted GPU acceleration modules through an SPI (Serial Peripheral Interface) interface; the module status and key data of the computing unit are sent to the CPLD through an LPC (Low Pin Count) interface, and the functional safety module accesses the CPLD through an SPI interface to acquire the module status and key data of the computing unit.
The verification data interface and block diagram of the functional safety module and the computing unit are shown in FIG. 7; the functional safety module manages and accesses the system through the Ethernet and the switch. In a specific implementation, each computing module needs to upload its module status and the verification result calculated over its key data to the functional safety module for checking. The functional safety module formulates a safety strategy according to the state of each computing module of the system and sends it through its CAN bus interface to domain controllers such as the vehicle chassis control domain and the connectivity domain. In this embodiment, the module status and key-data verification results of the computing modules are connected to the functional safety module through SPI interfaces, where the GPU modules support SPI slave mode, i.e., the functional safety module, as the master device, initiates access to each module. The X86 computing platform does not have an SPI slave interface, so its module status is connected to the CPLD logic through an LPC interface and the functional safety state is stored in the CPLD; in addition, the CPLD implements an SPI slave interface, and the functional safety module indirectly accesses the CPLD to realize functional safety management of the X86 computing platform.
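A minimal sketch of the polling direction described above, with the functional safety module acting as SPI master over a Linux spidev node; the device path, clock rate and the 4-byte query/response frame are illustrative assumptions, not the patent's protocol.

```c
/* Minimal sketch (not from the patent): the functional safety module, acting
 * as SPI master, polls one compute module (SPI slave) for its status word and
 * verification result. The spidev node, clock rate and frame layout are
 * illustrative assumptions. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/spi/spidev.h>

int main(void)
{
    int fd = open("/dev/spidev0.0", O_RDWR);   /* illustrative slave select */
    if (fd < 0) return 1;

    uint8_t mode = SPI_MODE_0;
    uint32_t speed = 1000000;                  /* 1 MHz, illustrative */
    ioctl(fd, SPI_IOC_WR_MODE, &mode);
    ioctl(fd, SPI_IOC_WR_MAX_SPEED_HZ, &speed);

    uint8_t tx[4] = { 0xA5, 0x01, 0x00, 0x00 };  /* "report status" query (assumed) */
    uint8_t rx[4] = { 0 };
    struct spi_ioc_transfer tr;
    memset(&tr, 0, sizeof(tr));
    tr.tx_buf = (unsigned long)tx;
    tr.rx_buf = (unsigned long)rx;
    tr.len = sizeof(tx);
    tr.speed_hz = speed;
    tr.bits_per_word = 8;

    if (ioctl(fd, SPI_IOC_MESSAGE(1), &tr) >= 1)
        printf("module state=0x%02X checksum=0x%02X\n", rx[1], rx[2]);

    close(fd);
    return 0;
}
```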
It should be noted that the various sensors connected to the platform's sensing system have application limitations. For example, image data is 2-dimensional and, limited by the camera lens, exhibits a certain proportion of distortion; lidar data is subject to significant environmental interference and, in addition, carries no color information. The data of the multiple sensors therefore need to be fused, and one precondition of data fusion is that the collected valid data carries time information. From the design point of view of the platform, this means that the time information of the integrated navigation system needs to be transmitted to the computation module of each unit.
As a possible implementation, the PPS HUB of the platform is implemented by a CPLD, and its functional implementation block diagram is shown in FIG. 8. The synchronous clock requirement of the system mainly comes from the sensors of the sensing domain; the lidar, for instance, provides a PPS signal interface. The CPLD receives the PPS signal from the integrated navigation system, outputs it to the connector through a driving circuit, and delivers it to the lidar through a coaxial cable. For each computation module on the platform, the CPLD output signal is connected to a GPIO of the module, so that time information is attached to the acquired and processed data. The delay of this link meets the system requirements: an automobile travelling at 120 km/h (about 33.3 m/s) moves only about 3.3 cm in 1 ms, so millisecond-level clock precision satisfies the requirement. As another possible implementation, clock synchronization may also be performed based on the IEEE 1588 protocol, the two synchronization methods being mutually redundant.
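A minimal sketch of how a computation module might timestamp the PPS edge fanned out by the CPLD HUB, assuming the PPS line is exposed as a GPIO and read with the libgpiod v1 event API; chip name and line offset are assumptions.

```c
/* Minimal sketch (not from the patent): timestamp the PPS edge that the CPLD
 * HUB fans out to a GPIO of a compute module, so acquired sensor data can be
 * tagged with time information. Chip name and line offset are assumptions. */
#include <gpiod.h>
#include <stdio.h>

int main(void)
{
    struct gpiod_chip *chip = gpiod_chip_open_by_name("gpiochip0"); /* assumed */
    if (!chip) return 1;

    struct gpiod_line *pps = gpiod_chip_get_line(chip, 20);         /* assumed offset */
    if (!pps || gpiod_line_request_rising_edge_events(pps, "pps-in") < 0) {
        gpiod_chip_close(chip);
        return 1;
    }

    struct timespec timeout = { 2, 0 };   /* wait up to 2 s; PPS arrives once per second */
    if (gpiod_line_event_wait(pps, &timeout) == 1) {
        struct gpiod_line_event ev;
        if (gpiod_line_event_read(pps, &ev) == 0)
            printf("PPS edge at %lld.%09ld\n",
                   (long long)ev.ts.tv_sec, ev.ts.tv_nsec);
    }

    gpiod_line_release(pps);
    gpiod_chip_close(chip);
    return 0;
}
```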
The unmanned computing system provided by the embodiment of the application offers an integrated computing platform for unmanned driving, which is conducive to miniaturization and low power consumption of the system platform, can be conveniently installed in the vehicle, and reduces the cost of the physical wiring harness. At the same time, the requirements of software algorithm updates and iterations can be met by increasing the number of vehicle-mounted GPU acceleration modules and by iterative upgrades of the platform. Therefore, the unmanned computing system provided by the application meets the iteration requirements of unmanned-driving software algorithms, meets the computing-power requirements of unmanned driving, and satisfies the requirements of vehicle-mounted environmental adaptability and power consumption.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (9)

1. An unmanned computing system is characterized by comprising a sensing and AI accelerator, an AI acceleration and computation co-processing unit, a computing unit, a data exchange unit and a functional safety module; the sensing and AI accelerator and the AI acceleration and computation co-processing unit are redundant backups of each other, and data are copied in parallel to the sensing and AI accelerator and the AI acceleration and computation co-processing unit so that the sensing and AI accelerator and the AI acceleration and computation co-processing unit respectively process different data tasks;
the sensing and AI accelerator comprises a plurality of first vehicle-mounted GPU acceleration modules, wherein each first vehicle-mounted GPU acceleration module is connected with 4 image signal chain deserializers, and each image signal chain deserializer is connected with 2 vehicle-mounted cameras;
The AI acceleration and computation co-processing unit comprises a plurality of second vehicle-mounted GPU acceleration modules, and each second vehicle-mounted GPU acceleration module is connected with 4 image signal chain deserializers;
the image sensor of the vehicle-mounted camera is connected with a serializer through an MIPI interface, the image signal chain deserializer is connected with 2 serializers through a Fakra connector, and the image signal chain deserializer is connected with the sensing and AI accelerator and the AI acceleration and calculation co-processing unit through the MIPI interface;
The data exchange unit comprises a PCIe cross switch and an Ethernet switch, the PCIe cross switch is connected with the first vehicle-mounted GPU acceleration module, the second vehicle-mounted GPU acceleration module and the computing unit, and the Ethernet switch is connected with the first vehicle-mounted GPU acceleration module, the second vehicle-mounted GPU acceleration module and the computing unit;
The functional safety module is connected with the first vehicle-mounted GPU acceleration module, the second vehicle-mounted GPU acceleration module and the computing unit.
2. The unmanned computing system of claim 1, wherein the image signal chain deserializer comprises a main I2C interface and a pass-through I2C interface, the first onboard GPU acceleration module accesses and configures registers of the image signal chain deserializer through the main I2C interface, and the first onboard GPU acceleration module controls CMOS sensors in an onboard camera connected to the image signal chain deserializer through the pass-through I2C interface.
3. The unmanned computing system of claim 1, wherein the image signal chain deserializer comprises a GPIO interface, the first onboard GPU acceleration module triggering the image signal chain deserializer to send a GPIO signal through the GPIO interface to control a CMOS sensor in an onboard camera to which the image signal chain deserializer is connected.
4. The unmanned computing system of claim 1, wherein the sensing and AI accelerator further comprises any one or a combination of any of an HDMI interface, a UART interface, a JTAG debug interface, and a USB interface.
5. The unmanned computing system of claim 1, wherein the Ethernet switch is configured to access lidar data and send the lidar data in parallel to the sensing and AI accelerator, the AI acceleration and computation co-processing unit, and the computing unit, the sensing and AI accelerator receiving the lidar data over a CAN-FS interface.
6. The unmanned computing system of claim 1, wherein the ethernet switch is configured to interact management data of the first onboard GPU acceleration module, the second onboard GPU acceleration module, and the computing unit.
7. The unmanned computing system of claim 1, wherein DMA operations are performed between the plurality of first onboard GPU acceleration modules via the data exchange unit, and DMA operations are performed between the plurality of second onboard GPU acceleration modules via the data exchange unit.
8. The unmanned computing system of claim 1, wherein the functional safety module is configured to formulate a safety policy based on the module status and the critical data of the first or second vehicle-mounted GPU acceleration module or the computing unit, and send the safety policy to a domain controller of the vehicle via a CAN bus interface.
9. The unmanned computing system of claim 8, wherein the functional safety module obtains the module status and key data of the first and second onboard GPU acceleration modules via an SPI interface;
The module state and key data of the computing unit are sent to the CPLD through an LPC interface, and the functional safety module accesses the CPLD through an SPI interface to acquire the module state and key data of the computing unit.
CN202111442544.XA 2021-11-30 2021-11-30 Unmanned computing system Active CN114179824B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111442544.XA CN114179824B (en) 2021-11-30 2021-11-30 Unmanned computing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111442544.XA CN114179824B (en) 2021-11-30 2021-11-30 Unmanned computing system

Publications (2)

Publication Number Publication Date
CN114179824A (en) 2022-03-15
CN114179824B (en) 2024-05-07

Family

ID=80603043

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111442544.XA Active CN114179824B (en) 2021-11-30 2021-11-30 Unmanned computing system

Country Status (1)

Country Link
CN (1) CN114179824B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114793196B (en) * 2022-06-21 2022-09-13 国汽智控(北京)科技有限公司 Firmware upgrading method, device, equipment and storage medium
CN115098416B (en) * 2022-06-30 2023-07-14 苏州浪潮智能科技有限公司 COMe module, PCIe mode switching method and computer

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108170126A (en) * 2017-12-27 2018-06-15 湖北汽车工业学院 Control system and automobile
CN109557997A (en) * 2018-12-07 2019-04-02 济南浪潮高新科技投资发展有限公司 A kind of automatic Pilot high reliability vehicle computing devices, systems, and methods based on artificial intelligence
CN111587407A (en) * 2017-11-10 2020-08-25 辉达公司 System and method for safe and reliable autonomous vehicle
US10803324B1 (en) * 2017-01-03 2020-10-13 Waylens, Inc. Adaptive, self-evolving learning and testing platform for self-driving and real-time map construction
CN112429012A (en) * 2020-10-30 2021-03-02 北京新能源汽车技术创新中心有限公司 Automobile electric control system, automatic driving control method and automobile

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11656670B2 (en) * 2018-12-12 2023-05-23 Insitu Inc. Common unmanned system architecture
US20190207868A1 (en) * 2019-02-15 2019-07-04 Intel Corporation Processor related communications


Also Published As

Publication number Publication date
CN114179824A (en) 2022-03-15

Similar Documents

Publication Publication Date Title
US11874662B2 (en) Sharing sensor data between multiple controllers to support vehicle operations
CN114179824B (en) Unmanned computing system
US9684583B2 (en) Trace data export to remote memory using memory mapped write transactions
CN106789496B (en) 1553B communication interface circuit of optical fiber inertial measurement unit for carrier rocket
CN109542817B (en) Universal electronic countermeasure equipment control framework
CN113347273B (en) Vehicle-mounted Ethernet data conversion method, device, equipment and medium
CN110865958A (en) LRM-based integrated switching management module design method
CN114179817A (en) Vehicle controller, vehicle and vehicle control method
CN113163108B (en) Image acquisition system
KR20210075878A (en) I3c hub promoting backward compatibility with i2c
CN217048605U (en) Vehicle controller and vehicle
CN219554988U (en) Vehicle-mounted domain control system for Ethernet
CN110989416B (en) Whole vehicle control system based on real-time Ethernet bus
CN113442938A (en) Vehicle-mounted computing system, electronic equipment and vehicle
CN114143415A (en) Multi-channel video signal processing board and processing method
CN112181874A (en) Data acquisition platform and unmanned system
CN112232004B (en) System-on-chip design scheme test method and system-on-chip
CN116279208B (en) Data processing subsystem, domain controller and vehicle
CN217880048U (en) Data analysis system for vehicle-mounted camera
CN218276683U (en) Sensor multiport transceiver based on FPGA
CN219544757U (en) Domain controller and automatic driving automobile
CN217279314U (en) Vehicle-mounted data processing system
CN218181562U (en) Single module system for vehicle-mounted DVR
KR102438788B1 (en) Autonomous driving data logging system
CN113961502B (en) Switch interface management system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant