CN116901875A - Perception fusion system, vehicle and control method - Google Patents

Perception fusion system, vehicle and control method

Info

Publication number
CN116901875A
Authority
CN
China
Prior art keywords
vehicle
service
data
layer
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310877510.6A
Other languages
Chinese (zh)
Inventor
张定萍
廖治强
李岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202310877510.6A priority Critical patent/CN116901875A/en
Publication of CN116901875A publication Critical patent/CN116901875A/en
Pending legal-status Critical Current

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R16/00 Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02 Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
    • B60R16/023 Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for transmission of signals between vehicle parts or subsystems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/42 Loop networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The application provides a perception fusion system, a vehicle, and a control method, and relates to the technical field of vehicles. The perception fusion system comprises a signal transmission layer, where the signal transmission layer comprises an intelligent driving controller, an intelligent cabin controller, and at least one whole vehicle controller connected in a ring network, and one or more of the intelligent cabin controller, the intelligent driving controller, and the at least one whole vehicle controller are used to receive data sets collected and transmitted by sensors on a vehicle and to perform fusion processing on the data sets. In the embodiments of the application, collecting and fusing data over a ring communication network helps simplify the deployment of the system's communication lines and reduces the system's development and maintenance costs.

Description

Perception fusion system, vehicle and control method
Technical Field
The application relates to the technical field of vehicles, and in particular to a perception fusion system, a vehicle, and a control method.
Background
With the rapid development and wide adoption of automotive electronics, vehicles are becoming increasingly intelligent, networked, and shared, and users' expectations of vehicle functions keep rising. Beyond realizing high-level automated driving and driver-assistance functions, vehicles increasingly aim to improve the user experience by offering personalized, humanized, and differentiated functions and services. As the degree of driving automation increases, so do the requirements on environment perception, and a single traditional sensor can no longer accurately identify complex traffic conditions. Fusing the environment perception information of multiple sensors makes it possible to accurately reconstruct the surrounding environment in the digital world.
At present, most vehicle electronic and electrical architectures on the market are distributed architectures or functional-domain architectures. These traditional architectures are signal-oriented, and in intelligent connected vehicles a large number of functions are realized through coordinated processing among the various controllers. The controllers inside the vehicle are connected through conventional buses to communicate and interact. As vehicles carry more and more electronic devices, the deployment of the system's physical communication lines becomes increasingly complex, driving up development and maintenance costs.
Disclosure of Invention
Accordingly, an object of the embodiments of the present application is to provide a perception fusion system, a vehicle, and a control method that can simplify the deployment of the system's communication lines and reduce the system's development and maintenance costs.
To achieve this technical purpose, the application adopts the following technical solutions:
in a first aspect, an embodiment of the present application provides a perception fusion system, where the system includes a signal transmission layer created according to a service-oriented architecture (SOA), the signal transmission layer includes an intelligent driving controller, an intelligent cabin controller, and at least one whole vehicle controller, and the intelligent driving controller, the intelligent cabin controller, and the at least one whole vehicle controller are connected in a ring communication network;
one or more of the intelligent driving controller, the intelligent cabin controller, and the at least one whole vehicle controller are configured to receive data sets collected and transmitted by sensors on a vehicle and to perform fusion processing on the data sets.
With reference to the first aspect, in some optional embodiments, the perception fusion system further includes a device abstraction layer, an atomic service layer, and a functional application layer created according to the SOA;
the signal transmission layer is communicatively connected to the device abstraction layer;
the device abstraction layer is configured to encapsulate the hardware differences of the vehicle, expose the characteristics of the vehicle hardware, via the vehicle's Ethernet gateway, as a standardized first service API provided in the form of services, and map the first service API into the SOA of the vehicle for the atomic service layer to call;
the atomic service layer is configured to create a standardized second service API for the software function modules of the vehicle, receive calls from the functional application layer through the Ethernet gateway, and accept access from the device abstraction layer;
the functional application layer is configured to call the second service API provided by the atomic service layer so that the software function modules execute corresponding operations, the operations including at least one of active safety control, parking control, and driving control of the vehicle.
With reference to the first aspect, in some optional embodiments, the signal transmission layer further includes at least one of a sensor assembly for acquiring vehicle information, a DMS sensor for acquiring driver status information, a laser radar for acquiring point cloud data, a millimeter wave radar for acquiring motion data, a camera for acquiring video or images, and a positioning module for acquiring positioning data, where the vehicle information includes at least one of vehicle body data, chassis data, and power data, and the motion data includes at least one of the distance between the vehicle and a target obstacle, the vehicle speed, and the relative orientation between the vehicle and the target obstacle.
With reference to the first aspect, in some optional embodiments, the number of whole vehicle controllers is two, namely a first whole vehicle controller and a second whole vehicle controller; the sensor assembly is electrically connected to the first whole vehicle controller and/or the second whole vehicle controller, the DMS sensor is electrically connected to the intelligent cabin controller, and the laser radar, the millimeter wave radar, the camera, and the positioning module are all electrically connected to the intelligent driving controller.
With reference to the first aspect, in some optional embodiments, the device abstraction layer is further configured to:
encapsulate the characteristics of the sensor assembly to obtain a first abstract API corresponding to the sensor assembly;
encapsulate the characteristics of the DMS sensor to obtain a second abstract API corresponding to the DMS sensor;
encapsulate the characteristics of the laser radar to obtain a third abstract API corresponding to the laser radar;
encapsulate the characteristics of the millimeter wave radar to obtain a fourth abstract API corresponding to the millimeter wave radar;
encapsulate the characteristics of the camera to obtain a fifth abstract API corresponding to the camera;
and encapsulate the characteristics of the positioning module to obtain a sixth abstract API corresponding to the positioning module, where the first service API includes the first abstract API, the second abstract API, the third abstract API, the fourth abstract API, the fifth abstract API, and the sixth abstract API.
With reference to the first aspect, in some optional embodiments, the device abstraction layer is further configured to convert the electrical signals output by the signal transmission layer into digital signals, and to analyze and isolate the hardware differences, where the hardware differences include differences in the electrical parameters of the hardware modules in the vehicle.
With reference to the first aspect, in some optional embodiments, the atomic service layer includes a data service subsystem;
the data service subsystem is used for receiving a data set output by the signal transmission layer, wherein the data set comprises at least one of vehicle body data, power data, chassis data, positioning data, millimeter wave radar data, laser radar data, driver state data and camera data.
With reference to the first aspect, in some optional embodiments, the atomic service layer further includes a perception service subsystem for detecting a target, the target including at least one of a lane line, a zebra crossing, a sign, a traffic light, an obstacle, and a position.
With reference to the first aspect, in some optional embodiments, the atomic service layer further includes a fusion service subsystem, where the fusion service subsystem is configured to perform fusion operations on the target and on the raw data and feature data output by the signal transmission layer, the fusion operations including at least one of target fusion, positioning information fusion, travelable area fusion, road information fusion, and vehicle information fusion.
With reference to the first aspect, in some optional embodiments, the fusion service subsystem is further configured to determine state information of the target based on target attribute features sent by the camera and the radar module in the signal transmission layer and vehicle positioning data sent by the positioning module.
With reference to the first aspect, in some optional embodiments, the atomic service layer further includes a prediction service subsystem, where the prediction service subsystem is configured to make predictions based on the state information of the target and the vehicle information output by the signal transmission layer, the predictions including behavior prediction and trajectory prediction.
With reference to the first aspect, in some optional embodiments, the atomic service layer further includes a decision service subsystem for performing at least one of path planning and behavior decision.
In a second aspect, an embodiment of the present application further provides a vehicle, where the vehicle includes a vehicle body and the above-mentioned perception fusion system, and the perception fusion system is disposed on the vehicle body.
In a third aspect, an embodiment of the present application further provides a control method applied to the above-mentioned perception fusion system, where the method includes:
one or more of the intelligent cabin controller, the intelligent driving controller, and the at least one whole vehicle controller in the perception fusion system receiving data sets collected and transmitted by sensors on a vehicle, and performing fusion processing on the data sets.
With reference to the third aspect, in some optional embodiments, the method further comprises:
the device abstraction layer in the perception fusion system encapsulating the hardware differences of the vehicle, exposing the characteristics of the vehicle hardware, via the vehicle's Ethernet gateway, as a standardized first service API provided in the form of services, and mapping the first service API into the SOA of the vehicle for the atomic service layer to call;
the atomic service layer in the perception fusion system creating a standardized second service API for the software function modules of the vehicle, and receiving calls from the functional application layer and access from the device abstraction layer through the Ethernet gateway;
and the functional application layer in the perception fusion system calling the second service API provided by the atomic service layer, so that the software function modules execute corresponding operations, the operations including at least one of active safety control, parking control, and driving control of the vehicle.
The technical solutions adopted by the application provide the following advantages:
in the technical solution provided by the application, the perception fusion system comprises a signal transmission layer, where the signal transmission layer comprises an intelligent driving controller, an intelligent cabin controller, and at least one whole vehicle controller connected in a ring network. One or more of the intelligent cabin controller, the intelligent driving controller, and the at least one whole vehicle controller are used to receive data sets collected and transmitted by sensors on the vehicle and to perform fusion processing on the data sets. Using a ring communication network for data collection and fusion processing simplifies the deployment of the system's communication lines and reduces the system's development and maintenance costs.
Drawings
The application is further illustrated by the non-limiting examples given in the accompanying drawings. It is to be understood that the following drawings illustrate only certain embodiments of the application and are therefore not to be considered limiting of its scope; a person of ordinary skill in the art may derive other relevant drawings from them without inventive effort.
Fig. 1 is a schematic diagram of a framework structure of a perception fusion system according to an embodiment of the present application.
Fig. 2 is a first schematic diagram of a communication architecture of a perception fusion system according to an embodiment of the present application.
Fig. 3 is a second schematic diagram of a communication architecture of a perception fusion system according to an embodiment of the present application.
Fig. 4 is a schematic diagram of a specific structure of a perception fusion system according to an embodiment of the present application.
Fig. 5 is a schematic diagram of a camera abstract API reporting information according to an embodiment of the present application.
Fig. 6 is a schematic diagram of reporting information by a lidar abstract API according to an embodiment of the present application.
Fig. 7 is a schematic block diagram of data processing in a perception fusion system according to an embodiment of the present application.
Fig. 8 is a service schematic diagram of a perception service subsystem and a fusion service subsystem according to an embodiment of the present application.
Detailed Description
The present application is described in detail below with reference to the drawings and specific embodiments, where identical or similar parts are designated by the same reference numerals throughout the drawings and the description, and implementations not shown or described in the drawings take forms well known to those of ordinary skill in the art. In the description of the present application, the terms "first," "second," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
First embodiment
Referring to fig. 1, fig. 2, and fig. 3 in combination, an embodiment of the application provides a perception fusion system. The perception fusion system may include a signal transmission layer created according to an SOA (Service-Oriented Architecture). The perception fusion system may also include a device abstraction layer, an atomic service layer, and a functional application layer created based on the SOA. The signal transmission layer can be understood as the physical communication architecture of the perception fusion system, while the device abstraction layer, the atomic service layer, and the functional application layer together can be understood as its software architecture.
In this embodiment, the signal transmission layer is configured to collect data sets through the sensors on the vehicle and is communicatively coupled to the device abstraction layer. The collected data sets may include, but are not limited to, vehicle body data, power data, chassis data, positioning data, millimeter wave radar data, laser radar data, driver status data, camera data, and the like.
Understandably, the body data may include, but is not limited to, body-related data such as the vehicle's gear position, door open/close state, and in-vehicle temperature. The power data may include, but is not limited to, data related to the vehicle's powertrain, such as vehicle speed, torque, and the remaining charge of the power battery. Chassis data may include, but is not limited to, chassis-related data such as the vehicle's steering angle and wheelbase. The positioning data may include GNSS (Global Navigation Satellite System) data, IMU (Inertial Measurement Unit) data, and the like. Driver status data refers to the driver's mental state while driving; for example, a dozing driver indicates a poor driver state. Millimeter wave radar data is acquired through the millimeter wave radar, and laser radar data through the laser radar. Camera data can be images, video, and other data captured by the camera.
In this embodiment, the signal transmission layer may include an intelligent driving controller, an intelligent cockpit controller, and at least one VIU (Vehicle Controller Unit, i.e., whole vehicle controller). The intelligent cabin controller, the intelligent driving controller, and the at least one VIU are connected in a ring communication network, and one or more controllers in the ring (the intelligent cabin controller, the intelligent driving controller, a VIU, and so on) receive the data sets collected and transmitted by the sensors on the vehicle and perform fusion processing on them, which simplifies the deployment of the system's communication lines and shortens the lines themselves.
It will be appreciated that the number of VIUs in the ring network is typically kept small in order to reduce the hardware cost of the system. For example, the signal transmission layer may include 1, 2, 3, or 4 VIUs.
Referring to fig. 3, to ensure that computing performance meets normal driving requirements and standby (redundant) computing requirements while keeping hardware cost as low as possible, the number of VIUs in the signal transmission layer may be two, namely a first VIU and a second VIU. This reduces the development and maintenance cost of the system and the vehicle's hardware cost while preserving normal operating capability.
Referring again to fig. 3, in this embodiment, adjacent controllers forming the ring network (each VIU, the intelligent driving controller, and the intelligent cockpit controller) may be communicatively connected through 100 Mbit/s automotive Ethernet (100BASE-T1). The first VIU may serve as the right-side VIU of the vehicle and the second VIU as the left-side VIU; alternatively, the first VIU may be the left-side VIU and the second VIU the right-side VIU. At least one controller among the first VIU, the second VIU, the intelligent cabin controller, and the intelligent driving controller receives the raw data collected and transmitted by the sensors and performs fusion processing on the raw data, as the sketch below illustrates.
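As a concrete illustration of this topology, the following minimal Python sketch models four ring-connected controllers and shows that any controller can still reach any other when a single link fails; the controller names and the breadth-first routing are illustrative assumptions of ours, not details disclosed by the patent.

    from collections import deque

    # Hypothetical ring order following fig. 3: each node links to its two neighbors.
    RING = ["intelligent_driving", "first_viu", "intelligent_cabin", "second_viu"]

    def neighbors(node, failed_links=frozenset()):
        """Yield the controllers on either side of 'node' whose link is intact."""
        i = RING.index(node)
        for j in (i - 1, i + 1):
            other = RING[j % len(RING)]
            if frozenset((node, other)) not in failed_links:
                yield other

    def route(src, dst, failed_links=frozenset()):
        """Breadth-first search for the shortest hop path around the ring."""
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in neighbors(path[-1], failed_links):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None  # unreachable

    print(route("first_viu", "intelligent_cabin"))
    # With the direct link down, traffic still arrives the other way around the ring:
    down = frozenset({frozenset(("first_viu", "intelligent_cabin"))})
    print(route("first_viu", "intelligent_cabin", failed_links=down))

Compared with a star or point-to-point harness, every controller in the ring needs only two ports, which reflects the wiring simplification the embodiment emphasizes.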
The operation of the fusion process may be an operation performed by a controller controlling a fusion service subsystem in an atomic service layer described below. For example, the fusion process may include target fusion, positioning information fusion, travelable region fusion, road information fusion, vehicle information fusion, and the like.
In this embodiment, the signal transmission layer may further include at least one of: a sensor assembly for acquiring vehicle information, where the vehicle information includes at least one of body data, chassis data, and power data; a DMS (Driver Monitoring System) sensor for acquiring driver status information; a laser radar for acquiring point cloud data; a millimeter wave radar for acquiring motion data, where the motion data includes at least one of the distance between the vehicle and a target obstacle, the vehicle speed, and the relative orientation between the vehicle and the target obstacle; a camera for acquiring video or images; and a positioning module for acquiring positioning data.
It will be appreciated that the sensor assembly typically consists of conventional sensors that collect body data, power data, and chassis data. In fig. 2, the chassis sensors/actuators, power sensors/actuators, and body sensors/actuators can be understood as the sensors of the sensor assembly that acquire the corresponding data.
As an example, the sensor assembly may include a temperature sensor for acquiring the cabin temperature, a speedometer for acquiring the vehicle speed, a direction detection sensor for acquiring the steering angle of the vehicle, and so on, and may be configured flexibly according to the actual situation. In addition, the numbers of laser radars, millimeter wave radars, and cameras can be set flexibly according to the actual situation and are not specifically limited.
The sensor assembly (i.e., the conventional sensors) may send the collected data to either of the first VIU and the second VIU via CAN FD communication. The DMS sensor and the display screen may send their data to the intelligent cabin controller via LVDS communication. The laser radar, the millimeter wave radar, and other radars may send their data to the intelligent driving controller via ETH (Ethernet) communication, while the camera and the positioning module may send theirs to the intelligent driving controller via LVDS communication. CAN FD, ETH, and LVDS are all conventional communication methods and are not described further here.
In this embodiment, the laser radar emits laser beams outward and calculates information such as the relative distance, position, and motion state of a target obstacle from the return time and intensity of the laser reflected by the obstacle; this information forms point cloud data, from which a 3D environment map is drawn. The millimeter wave radar calculates the distance, speed, and angle of a target obstacle through signal processing, data analysis, and target identification and tracking, and then, after the corresponding spatial transformation, outputs the target's position, speed, size, type, and other information. The camera captures road conditions (such as lane lines, zebra crossings, signs, and traffic lights); the controller then performs target detection, classification, and identification on the captured video and outputs the resulting identification information to the driving system, providing a basis for its safe driving control.
The positioning module may be a GNSS sensor, which can provide high-precision, reliable positioning for intelligent driving. The distance and speed calculations above follow standard time-of-flight and Doppler relations, illustrated below.
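As background only, a short sketch of those textbook relations; the patent does not specify its signal-processing chain, and the 77 GHz carrier below is merely a value typical of automotive millimeter wave radar, assumed for illustration.

    C = 299_792_458.0  # speed of light in m/s

    def lidar_range(round_trip_s: float) -> float:
        """Time of flight: the laser travels to the obstacle and back."""
        return C * round_trip_s / 2.0

    def radar_radial_speed(doppler_shift_hz: float, carrier_hz: float = 77e9) -> float:
        """Doppler relation for an assumed 77 GHz millimeter wave carrier."""
        return doppler_shift_hz * C / (2.0 * carrier_hz)

    print(f"{lidar_range(667e-9):.1f} m")            # a 667 ns round trip: ~100 m
    print(f"{radar_radial_speed(5130.0):.1f} m/s")   # ~10 m/s closing speed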
The device abstraction layer is configured to encapsulate the hardware differences of the vehicle, expose the characteristics of the vehicle hardware, via the vehicle's Ethernet gateway, as a standardized first service API (Application Programming Interface) provided in the form of services, and map the first service API into the SOA of the vehicle for the atomic service layer to call.
In addition, the device abstraction layer may also be used to convert the electrical signals output by the signal transmission layer into digital signals, and to analyze and isolate hardware differences, where the hardware differences include differences in the electrical parameters of the hardware modules in the vehicle.
Hardware differences are understood to mean differences in the electrical parameters of hardware modules caused by manufacturer differences or functional design. The device abstraction layer may be used for signal processing and resource abstraction. For example, after receiving the electrical signals output by the signal transmission layer, the device abstraction layer may perform analog-to-digital conversion on them to obtain digital signals, analyze and isolate the variability of each hardware resource in the vehicle, and encapsulate each hardware module behind a corresponding standardized service API. The layer abstracts the raw signals of the sensors and actuators, such as pixel values, voltages, PWM signals, digital signals, and frequencies, and provides service interfaces (such as point cloud, pressure, mass, and temperature) to upper-layer software; that is, the device abstraction completes the conversion from voltage values, digital signals, point clouds, and the like into physical values. The converted physical values form the components of the basic service layer and have the highest reusability and composability.
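A minimal sketch of what such an abstraction could look like in code, assuming two hypothetical temperature-sensor parts: the raw signals differ by supplier, but the standardized service interface always reports degrees Celsius, so upper-layer software never sees the hardware difference.

    import math
    from abc import ABC, abstractmethod

    class TemperatureService(ABC):
        """Standardized service API: upper layers only ever see degrees Celsius."""
        @abstractmethod
        def temperature_c(self) -> float: ...

    class SupplierAThermistor(TemperatureService):
        """Hypothetical part A: 10 kOhm NTC thermistor in a 5 V divider."""
        def __init__(self, adc_volts: float):
            self.adc_volts = adc_volts

        def temperature_c(self) -> float:
            # Divider voltage -> resistance -> Beta-model temperature (B = 3950 K assumed).
            r = 10_000.0 * self.adc_volts / (5.0 - self.adc_volts)
            inv_t = 1 / 298.15 + math.log(r / 10_000.0) / 3950.0
            return 1 / inv_t - 273.15

    class SupplierBDigitalSensor(TemperatureService):
        """Hypothetical part B: already digital, 0.0625 C per raw count."""
        def __init__(self, raw_counts: int):
            self.raw_counts = raw_counts

        def temperature_c(self) -> float:
            return self.raw_counts * 0.0625

    # The atomic service layer calls the same API regardless of the hardware:
    for sensor in (SupplierAThermistor(2.5), SupplierBDigitalSensor(400)):
        print(f"{sensor.temperature_c():.1f} C")

Both calls print 25.0 C: the electrical-parameter difference between the two parts is fully isolated behind the service interface.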
In this embodiment, the standardized first and second service APIs allow software developers to reuse software across vehicle models, platforms, and vehicle manufacturers and support rapid development and continuous release of applications; for hardware-oriented component suppliers, they allow external hardware such as actuators and sensors to be plugged in and used directly.
Once the API interfaces are standardized, the perception fusion system can support hardware reuse across vehicle models and component suppliers, reducing the difficulty and complexity of software and hardware development in the software-defined vehicle.
Understandably, the device abstraction layer can standardize the interfaces between layers (such as the interfaces of the hardware modules in the signal transmission layer); when different OEMs, Tier 1 suppliers, and platform suppliers define the same set of service interfaces, software from different OEMs and different Tier 1 suppliers can call each other, increasing software reusability and shortening the vehicle development cycle.
As an alternative embodiment, the device abstraction layer may also be used to:
encapsulate the characteristics of the sensor assembly to obtain a first abstract API corresponding to the sensor assembly;
encapsulate the characteristics of the DMS sensor to obtain a second abstract API corresponding to the DMS sensor;
encapsulate the characteristics of the laser radar to obtain a third abstract API corresponding to the laser radar;
encapsulate the characteristics of the millimeter wave radar to obtain a fourth abstract API corresponding to the millimeter wave radar;
encapsulate the characteristics of the camera to obtain a fifth abstract API corresponding to the camera;
and encapsulate the characteristics of the positioning module to obtain a sixth abstract API corresponding to the positioning module, where the first service API includes the first through sixth abstract APIs.
In this embodiment, the characteristics of each hardware module (such as the laser radar and the camera) may be determined flexibly according to the actual situation. In other words, when designing an abstract API, the key attributes, data types, numerical units, value ranges, and input/output characteristics can be described functionally in combination with the model information of the hardware module, and the abstract API can then be designed based on these requirements together with the service usage scenarios.
Fig. 5 illustrates the information reported by the camera abstract API (the fifth abstract API). The camera abstract API provides a camera information service, which supplies raw image information, encoded data information, camera parameter information, and camera fault status information and reports them to the atomic service layer through the API interface.
Fig. 6 illustrates the information reported by the laser radar abstract API (the third abstract API). The laser radar abstract API provides a laser radar information service, which supplies laser radar echo information, radar attribute parameters, laser radar point cloud information, and laser radar fault status and reports them to the atomic service layer through the API interface.
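The reporting structure of figs. 5 and 6 could be sketched as follows; the field names are illustrative choices of ours rather than the patent's. Each abstract API bundles the sensor's payload, its parameters, and its fault state into one report handed upward to the atomic service layer.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class CameraReport:
        """Fifth abstract API payload (cf. fig. 5); field names are illustrative."""
        raw_image: bytes
        encoded_data: bytes
        parameters: dict          # e.g. resolution, focal length
        fault_state: str          # e.g. "ok", "blocked", "offline"

    @dataclass
    class LidarReport:
        """Third abstract API payload (cf. fig. 6); field names are illustrative."""
        echo_info: List[float]
        attributes: dict          # radar attribute parameters
        point_cloud: List[Tuple[float, float, float]]
        fault_state: str

    def report_to_atomic_layer(report) -> None:
        """Stand-in for the API call that forwards a report upward."""
        print(type(report).__name__, "->", report.fault_state)

    report_to_atomic_layer(CameraReport(b"...", b"...", {"res": "1920x1080"}, "ok"))
    report_to_atomic_layer(LidarReport([0.8], {"lines": 128}, [(1.0, 2.0, 0.3)], "ok"))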
Referring to fig. 4 and fig. 7 in combination, the electrical signals sent by the various sensors such as the radars and cameras are converted into digital signals through analog-to-digital conversion and sent to the perception fusion system; feature extraction is performed on the converted digital signals to obtain feature data, which is turned into target data through object recognition and sent to the atomic service layer. The perception service subsystem and the fusion service subsystem (i.e., the data fusion module) in the atomic service layer fuse the surrounding-environment data to reconstruct an environment model; the prediction service subsystem and the decision service subsystem then plan and decide the driving route of the current vehicle; finally, the planning/decision module drives the vehicle's execution systems to realize the intelligent driving functions of the whole vehicle.
Referring to fig. 8, operations that may be performed by the perception service subsystem include, but are not limited to, image information processing, target detection and tracking, driver status information, moving objects, road markings, and traffic signs. Operations that the fusion service subsystem may perform include, but are not limited to, target fusion, positioning information fusion, travelable area fusion, road information fusion, and vehicle information fusion.
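Read as a data flow, figs. 4 and 7 describe the chain digitization -> feature extraction -> target recognition -> fusion -> planning -> actuation. The compact sketch below mirrors that chain with stub stages; every stage body is a placeholder of ours, since the patent defines only the ordering.

    def digitize(raw_signals):          # analog-to-digital conversion
        return [round(s, 2) for s in raw_signals]

    def extract_features(digital):      # feature extraction on the digital signals
        return {"peak": max(digital), "energy": sum(digital)}

    def recognize_targets(features):    # object recognition -> target data
        return [{"id": 0, "kind": "vehicle", "score": features["peak"]}]

    def fuse(targets):                  # environment model reconstruction
        return {"targets": targets, "drivable": True}

    def plan(env):                      # prediction + decision on the route
        return "keep_lane" if env["drivable"] else "stop"

    def actuate(decision):              # drive the vehicle's execution systems
        print("executing:", decision)

    actuate(plan(fuse(recognize_targets(extract_features(digitize([0.91, 0.47]))))))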
The atomic service layer is configured to create a standardized second service API for the software function modules of the vehicle, receive calls from the functional application layer through the Ethernet gateway, and accept access from the device abstraction layer.
In this embodiment, the atomic service layer includes a data service subsystem. The data service subsystem receives the data sets output by the signal transmission layer, which may include traditional body, power, and chassis data, GNSS positioning data, millimeter wave radar data, laser radar data, driver state data, camera data, and the like.
In this embodiment, the atomic service layer further includes a perception service subsystem for detecting targets, where a target includes at least one of a lane line, a zebra crossing, a sign, a traffic light, an obstacle, and a position.
Understandably, the perception service subsystem can detect and respond to targets and events. The camera may provide detection information and feature information for targets and conventional traffic signs (e.g., lane lines, zebra crossings, signs, traffic lights). Detection information refers to the classification results obtained by detecting traffic signs, such as detected lane lines or zebra crossings. Feature information refers to the extracted image features, which are conventional data in image processing. The laser radar and the millimeter wave radar can provide detection information for target obstacles, and the positioning module can provide vehicle positioning information.
In this embodiment, the atomic service layer further includes a fusion service subsystem, where the fusion service subsystem is configured to perform fusion operations on the targets and on the raw data and feature data output by the signal transmission layer; the fusion operations include at least one of target fusion, positioning information fusion, travelable area fusion, road information fusion, and vehicle information fusion, each of which is conventional and not described further here.
In this embodiment, the fusion service subsystem is further configured to determine state information of the target based on the target attribute features sent by the camera and the radar module in the signal transmission layer and the vehicle positioning data sent by the positioning module.
Understandably, the fusion service subsystem can fuse individual target detections, raw data, feature data, and so on. It determines the state information of a target based on the target attribute features sent by the camera and the radar and the high-precision vehicle positioning information sent by the positioning module. The state information may represent safety information between the target and the host vehicle, such as whether the host vehicle is at risk of colliding with the target, as the toy example below shows.
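A toy illustration of that idea, under assumptions the patent leaves open (inverse-variance weighting of the two sensors and a simple time-to-collision threshold): fuse a camera detection and a radar detection of the same target, then flag a collision risk from the fused range and the closing speed.

    def fuse_position(cam_xy, radar_xy, cam_var=1.0, radar_var=0.25):
        """Inverse-variance weighting: trust the lower-noise sensor more."""
        w_cam, w_radar = 1 / cam_var, 1 / radar_var
        return tuple((w_cam * c + w_radar * r) / (w_cam + w_radar)
                     for c, r in zip(cam_xy, radar_xy))

    def collision_risk(range_m, closing_speed_mps, ttc_threshold_s=2.5):
        """Flag a risk when time-to-collision drops below the threshold."""
        if closing_speed_mps <= 0:      # target is not approaching
            return False
        return range_m / closing_speed_mps < ttc_threshold_s

    x, y = fuse_position((20.4, 1.1), (20.0, 0.9))
    rng = (x ** 2 + y ** 2) ** 0.5
    print(f"fused target at ({x:.2f}, {y:.2f}) m, range {rng:.1f} m")
    print("collision risk:", collision_risk(rng, closing_speed_mps=9.0))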
In this embodiment, the atomic service layer further includes a prediction service subsystem, where the prediction service subsystem is configured to make predictions based on the state information of the target and the vehicle information output by the signal transmission layer, the predictions including behavior prediction and trajectory prediction.
Behavior prediction predicts the future behavior of traffic participants. Traffic participants may include vehicles, pedestrians, non-motorized vehicles, and so on, and behaviors may include turning left, turning right, making a U-turn, changing lanes, crossing the road, and the like. That is, behavior prediction may include intent decisions for traffic participants such as pedestrians and vehicles, covering crossing, lane changing, left turns, right turns, U-turns, cruising, accelerating, decelerating, and so on.
Trajectory prediction refers to predicting the specific trajectories of all tracked targets over the next few seconds and describing the likelihood of each trajectory with a probability. Trajectory prediction may include, but is not limited to, predicting the trajectories of traffic participants, including pedestrians and vehicles, over the next n seconds.
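For illustration, a minimal constant-velocity, multi-hypothesis sketch of such trajectory prediction; the maneuver set, lateral offsets, and probabilities are invented here, since the patent does not disclose its prediction model.

    def predict_trajectories(pos, vel, horizon_s=3.0, dt=1.0):
        """Return {maneuver: (probability, [(x, y), ...])} over the horizon."""
        # One lateral-offset hypothesis per maneuver, blended in linearly.
        maneuvers = {"keep_lane": (0.7, 0.0),
                     "change_left": (0.2, 3.5),
                     "change_right": (0.1, -3.5)}
        steps = int(horizon_s / dt)
        out = {}
        for name, (prob, lateral) in maneuvers.items():
            traj = [(pos[0] + vel[0] * dt * k,
                     pos[1] + vel[1] * dt * k + lateral * k / steps)
                    for k in range(1, steps + 1)]
            out[name] = (prob, traj)
        return out

    for name, (p, traj) in predict_trajectories((0.0, 0.0), (10.0, 0.0)).items():
        print(f"{name}: p={p}, endpoint={traj[-1]}")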
In this embodiment, the atomic service layer further includes a decision service subsystem for performing at least one of path planning and behavior decision-making.
Understandably, path planning means planning the trajectory of the host vehicle over the next n seconds of travel under the current dynamic driving task, where n is a value greater than 0.
Behavior decision-making means making driving decisions for the host vehicle under the current dynamic driving task, such as motion control for cruising, lane changing, left turns, right turns, U-turns, parking, overtaking, accelerating, decelerating, and obstacle avoidance.
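For illustration only, a rule-based stub of such a behavior decision; a real decision service weighs far more state, and the time-gap thresholds here are assumptions.

    def behavior_decision(ego_speed, lead_gap_m, lead_speed, left_lane_free):
        """Pick brake / change_left / follow / cruise from a few simple rules."""
        time_gap = lead_gap_m / max(ego_speed, 0.1)   # seconds to the lead vehicle
        if time_gap < 1.0:
            return "brake"                            # dangerously close
        if lead_speed < ego_speed and time_gap < 2.5:
            return "change_left" if left_lane_free else "follow"
        return "cruise"

    print(behavior_decision(ego_speed=25.0, lead_gap_m=45.0,
                            lead_speed=18.0, left_lane_free=True))  # change_left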
In this embodiment, the service functions that may be implemented by the software function modules may include, but are not limited to, a sensor abstraction function, a perception fusion function, a positioning function, a prediction function, a decision and planning function, an actuator abstraction function, and the like.
For the sensor abstraction function, the atomic service layer can provide two types of service interfaces. The first is the information service interface of intelligent sensors: the sensor types include one or more of camera, laser radar, millimeter wave radar, and ultrasonic radar, and the information provided includes one or more of raw data, feature data, and target data; these data depend primarily on the outputs of the individual sensors, which also provide sensor performance and status information. The second covers other on-board sensors: sensors commonly used in automobiles, such as traditional wheel speed sensors, acceleration sensors, odometers, and other sensors reflecting the vehicle's driving state; positioning sensors such as global positioning systems and inertial navigation systems; and the V2X communication module, which exchanges information via V2X with other vehicles, roadside facilities, the cloud, and so on.
The perception fusion function, based on the inputs of the various sensors, completes the identification of dynamic traffic participants and static traffic environment information, outputs information services such as movable objects, road structure, drivable space, static targets, and traffic signs, and can also output complete environment model information.
For example, for the perception fusion function the atomic service layer may provide, as the second service API, a target recognition and tracking service API, a road structure service API, a traffic sign service API, a drivable space service API, a vehicle state service API, an environment model service API, and the like.
The target recognition and tracking service API mainly provides detection and tracking services for dynamic and static targets, including traffic participant information, road signs, static targets, and the like.
The road structure service API mainly provides identification information of road marking lines and stop lines, including lane line type, confidence, curve representation, feature point sets, and similar information.
The traffic sign service API mainly provides identification information of traffic lights and traffic signs, chiefly the indications they carry, including dynamic and static semantic information and the constraints of the current traffic rules. The interface needs to contain information on at least one traffic light or traffic sign.
The drivable space service API mainly provides the free space near the host vehicle that is available for driving, represented by drivable feature points and grid information in the vehicle coordinate system.
The vehicle state service API collects the current state of the vehicle to meet the needs of subsequent dynamic intelligent driving services.
The environment model service API provides the environment information of the current vehicle together with the vehicle's state information; the environment information is the time-aligned set of the above services and reflects the internal and external environment of the vehicle over a section of the time axis. A typed sketch of these interfaces follows.
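One way these second service APIs could be typed is sketched below: small records for targets and lane lines plus an environment model that time-aligns them, consumed by the functional application layer. All field and function names here are our own assumptions, not interface definitions from the patent.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class TrackedTarget:                 # target recognition and tracking service
        target_id: int
        kind: str                        # "vehicle", "pedestrian", ...
        position_m: Tuple[float, float]
        velocity_mps: Tuple[float, float]

    @dataclass
    class LaneLine:                      # road structure service
        line_type: str                   # "solid", "dashed", ...
        confidence: float
        points: List[Tuple[float, float]]  # curve as a feature point set

    @dataclass
    class EnvironmentModel:              # environment model service: time-aligned set
        timestamp_ns: int
        targets: List[TrackedTarget]
        lanes: List[LaneLine]
        drivable: bool                   # summarized drivable-space flag

    def app_layer_step(env: EnvironmentModel) -> str:
        """Functional application layer consuming the atomic services."""
        if not env.drivable:
            return "stop"
        return "follow" if env.targets else "cruise"

    env = EnvironmentModel(
        timestamp_ns=1_000_000,
        targets=[TrackedTarget(0, "vehicle", (30.0, 0.0), (8.0, 0.0))],
        lanes=[LaneLine("dashed", 0.93, [(0.0, 0.0), (10.0, 0.1)])],
        drivable=True)
    print(app_layer_step(env))           # -> "follow"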
The functional application layer is configured to call the second service API provided by the atomic service layer so that the software function modules execute corresponding operations, where the operations may include at least one of active safety control, parking control, and driving control of the vehicle.
Understandably, based on the APIs provided by the atomic service layer, the functional application layer may implement motion control such as cruising, lane changing, left turns, right turns, U-turns, parking, overtaking, accelerating, decelerating, and obstacle avoidance. In addition, the functional application layer can implement parking-related functions, such as collision warnings during automatic and manual parking.
Second embodiment
The application also provides a control method that can be applied to the above perception fusion system. The control method may include the following steps:
Step 110: one or more of the intelligent cabin controller, the intelligent driving controller, and the at least one whole vehicle controller in the perception fusion system receive data sets collected and transmitted by sensors on the vehicle and perform fusion processing on the data sets.
The operation of the fusion process may be an operation performed by the controller to control the fusion service subsystem in the atomic service layer. For example, the fusion process may include target fusion, positioning information fusion, travelable region fusion, road information fusion, vehicle information fusion, and the like.
In this embodiment, the method may further include:
Step 120: the device abstraction layer encapsulates the hardware differences of the vehicle, exposes the characteristics of the vehicle hardware, via the vehicle's Ethernet gateway, as a standardized first service API provided in the form of services, and maps the first service API into the SOA of the vehicle for the atomic service layer to call;
Step 130: the atomic service layer creates a standardized second service API for the software function modules of the vehicle, and receives calls from the functional application layer and access from the device abstraction layer through the Ethernet gateway;
Step 140: the functional application layer calls the second service API provided by the atomic service layer, so that the software function modules execute corresponding operations, the operations including at least one of active safety control, parking control, and driving control of the vehicle.
It should be noted that, for convenience and brevity of description, the specific working process of the control method described above can be found in the processing of the corresponding service layers in the foregoing perception fusion system and is not repeated here.
Third embodiment
The embodiment of the application also provides a vehicle, which may include a vehicle body and the perception fusion system of the first embodiment, with the perception fusion system deployed on the vehicle body. The vehicle may be an electric vehicle, a hybrid electric-fuel vehicle, or the like. With the above perception fusion system, the hardware can later be upgraded without changing the external interfaces of the system software, which effectively avoids the problem in traditional signal-oriented communication architectures where adding, removing, or changing individual signals forces changes in every system involved in the affected functions, and improves the flexibility and extensibility of decoupled software and hardware development.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that the present application may be implemented in hardware, or in software plus a necessary general hardware platform. Based on this understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (a CD-ROM, a USB drive, a removable hard disk, and so on) and includes several instructions for causing a computer device (a personal computer, a server, a network device, and so on) to execute the methods described in the respective implementation scenarios of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed system and method may also be implemented in other ways. The system and method embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, segment, or portion of code that comprises one or more executable instructions for implementing the specified logical function(s). It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of such blocks, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions. In addition, the functional modules in the embodiments of the present application may be integrated together to form a single part, each module may exist alone, or two or more modules may be integrated into a single part.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and variations will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A perception fusion system, characterized by comprising a signal transmission layer created according to a service-oriented architecture (SOA), wherein the signal transmission layer comprises an intelligent driving controller, an intelligent cabin controller, and at least one whole vehicle controller, and the intelligent driving controller, the intelligent cabin controller, and the at least one whole vehicle controller are connected in a ring communication network;
one or more of the intelligent driving controller, the intelligent cabin controller, and the at least one whole vehicle controller are configured to receive data sets collected and transmitted by sensors on a vehicle and to perform fusion processing on the data sets.
2. The perception fusion system of claim 1, further comprising a device abstraction layer, an atomic service layer, and a functional application layer created according to the SOA;
the signal transmission layer is communicatively connected to the device abstraction layer;
the device abstraction layer is configured to encapsulate the hardware differences of the vehicle, expose the characteristics of the vehicle hardware, via the vehicle's Ethernet gateway, as a standardized first service API provided in the form of services, and map the first service API into the SOA of the vehicle for the atomic service layer to call;
the atomic service layer is configured to create a standardized second service API for the software function modules of the vehicle, receive calls from the functional application layer through the Ethernet gateway, and accept access from the device abstraction layer;
the functional application layer is configured to call the second service API provided by the atomic service layer so that the software function modules execute corresponding operations, the operations including at least one of active safety control, parking control, and driving control of the vehicle.
3. The perception fusion system of claim 2, wherein the signal transmission layer further comprises at least one of a sensor assembly for acquiring vehicle information, a DMS sensor for acquiring driver status information, a laser radar for acquiring point cloud data, a millimeter wave radar for acquiring motion data, a camera for acquiring video or images, and a positioning module for acquiring positioning data, wherein the vehicle information comprises at least one of body data, chassis data, and power data, and the motion data comprises at least one of the distance between the vehicle and a target obstacle, the vehicle speed, and the relative orientation between the vehicle and the target obstacle.
4. The perception fusion system of claim 3, wherein the number of whole vehicle controllers is two, namely a first whole vehicle controller and a second whole vehicle controller; the sensor assembly is electrically connected to the first whole vehicle controller and/or the second whole vehicle controller, the DMS sensor is electrically connected to the intelligent cabin controller, and the laser radar, the millimeter wave radar, the camera, and the positioning module are all electrically connected to the intelligent driving controller.
5. The perception fusion system of claim 3, wherein the device abstraction layer is further configured to:
encapsulate the characteristics of the sensor assembly to obtain a first abstract API corresponding to the sensor assembly;
encapsulate the characteristics of the DMS sensor to obtain a second abstract API corresponding to the DMS sensor;
encapsulate the characteristics of the laser radar to obtain a third abstract API corresponding to the laser radar;
encapsulate the characteristics of the millimeter wave radar to obtain a fourth abstract API corresponding to the millimeter wave radar;
encapsulate the characteristics of the camera to obtain a fifth abstract API corresponding to the camera;
and encapsulate the characteristics of the positioning module to obtain a sixth abstract API corresponding to the positioning module, wherein the first service API comprises the first abstract API, the second abstract API, the third abstract API, the fourth abstract API, the fifth abstract API, and the sixth abstract API.
6. The perception fusion system of claim 2, wherein the device abstraction layer is further configured to convert the electrical signals output by the signal transmission layer into digital signals, and to analyze and isolate the hardware differences, wherein the hardware differences comprise differences in the electrical parameters of the hardware modules in the vehicle.
7. The awareness fusion system of claim 2 wherein the atomic service layer comprises a data service subsystem;
the data service subsystem is used for receiving a data set output by the signal transmission layer, wherein the data set comprises at least one of vehicle body data, power data, chassis data, positioning data, millimeter wave radar data, laser radar data, driver state data and camera data.
8. The perception fusion system of any one of claims 2-7, wherein the atomic service layer further comprises a perception service subsystem for detecting a target, the target comprising at least one of a lane line, a zebra crossing, a sign, a traffic light, an obstacle, and a position.
9. The perception fusion system of claim 8, wherein the atomic service layer further comprises a fusion service subsystem configured to perform fusion operations on the target and on the raw data and feature data output by the signal transmission layer, the fusion operations comprising at least one of target fusion, positioning information fusion, travelable area fusion, road information fusion, and vehicle information fusion.
10. The perception fusion system of claim 9, wherein the fusion service subsystem is further configured to determine the state information of the target based on the target attribute characteristics sent by the camera and radar modules in the signal transmission layer and the vehicle positioning data sent by the positioning module.
11. The perception fusion system of claim 10, wherein the atomic service layer further comprises a prediction service subsystem configured to make predictions based on the state information of the target and the vehicle information output by the signal transmission layer, the predictions comprising behavior prediction and trajectory prediction.
12. The perception fusion system of any one of claims 2-7, wherein the atomic service layer further comprises a decision service subsystem for performing at least one of path planning and behavior decision-making.
13. A vehicle, comprising a vehicle body and the perception fusion system of any one of claims 1-12, the perception fusion system being disposed on the vehicle body.
14. A control method, applied to the perception fusion system of any one of claims 1-12, the method comprising:
receiving, by one or more of the intelligent cabin controller, the intelligent driving controller, and the at least one whole vehicle controller in the perception fusion system, data sets collected and transmitted by sensors on a vehicle, and performing fusion processing on the data sets.
15. The method of claim 14, wherein the method further comprises:
the device abstraction layer in the perception fusion system encapsulating the hardware differences of the vehicle, exposing the characteristics of the vehicle hardware, via the vehicle's Ethernet gateway, as a standardized first service API provided in the form of services, and mapping the first service API into the SOA of the vehicle for the atomic service layer in the perception fusion system to call;
the atomic service layer creating a standardized second service API for the software function modules of the vehicle, and receiving, through the Ethernet gateway, calls from the functional application layer in the perception fusion system and access from the device abstraction layer;
the functional application layer calling the second service API provided by the atomic service layer, so that the software function modules execute corresponding operations, the operations comprising at least one of active safety control, parking control, and driving control of the vehicle.
CN202310877510.6A 2023-07-18 2023-07-18 Perception fusion system, vehicle and control method Pending CN116901875A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310877510.6A CN116901875A (en) 2023-07-18 2023-07-18 Perception fusion system, vehicle and control method

Publications (1)

Publication Number Publication Date
CN116901875A true CN116901875A (en) 2023-10-20

Family

ID=88359851

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310877510.6A Pending CN116901875A (en) 2023-07-18 2023-07-18 Perception fusion system, vehicle and control method

Country Status (1)

Country Link
CN (1) CN116901875A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117459557A (en) * 2023-12-22 2024-01-26 广州晟能电子科技有限公司 Fusion method of low-code Internet of things multidimensional data
CN117459557B (en) * 2023-12-22 2024-03-15 广州晟能电子科技有限公司 Fusion method of low-code Internet of things multidimensional data
CN117508234A (en) * 2024-01-04 2024-02-06 安徽中科星驰自动驾驶技术有限公司 Safety guarantee system applied to automatic driving vehicle


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination