CN110083099B - Automatic driving architecture system meeting automobile function safety standard and working method - Google Patents

Automatic driving architecture system meeting automobile function safety standard and working method

Info

Publication number
CN110083099B
Authority
CN
China
Prior art keywords
unit
signal
data
radar
millimeter wave
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910367428.2A
Other languages
Chinese (zh)
Other versions
CN110083099A (en)
Inventor
吕林泉
蒲紫光
陈涛
张强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Automotive Engineering Research Institute Co Ltd
Original Assignee
China Automotive Engineering Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Automotive Engineering Research Institute Co Ltd filed Critical China Automotive Engineering Research Institute Co Ltd
Priority to CN201910367428.2A priority Critical patent/CN110083099B/en
Publication of CN110083099A publication Critical patent/CN110083099A/en
Application granted granted Critical
Publication of CN110083099B publication Critical patent/CN110083099B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 19/00 Programme-control systems
    • G05B 19/02 Programme-control systems electric
    • G05B 19/04 Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G05B 19/042 Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
    • G05B 19/0423 Input/output
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 2219/00 Program-control systems
    • G05B 2219/20 Pc systems
    • G05B 2219/26 Pc applications
    • G05B 2219/2603 Steering car

Abstract

The invention provides an automatic driving architecture system conforming to the automobile functional safety standard and a working method thereof. The system comprises a sensor unit, a data acquisition unit, an intelligent interaction unit and an action execution unit. The data sending end of the sensor unit is connected with the data receiving end of the data acquisition unit, the data sending end of the data acquisition unit is connected with the data receiving end of the intelligent interaction unit, and the data sending end of the intelligent interaction unit is connected with the data receiving end of the action execution unit. The invention reduces the power consumption and the installation space of the hardware equipment during use, improves the real-time performance of data processing and response, and optimizes the timeliness of decision execution.

Description

Automatic driving architecture system meeting automobile function safety standard and working method
Technical Field
The invention relates to the field of data transmission circuits, in particular to an automatic driving architecture system meeting the automobile functional safety standard and a working method.
Background
With the rapid development of artificial intelligence technology in recent years, research on automobile automatic driving technology by the traditional automobile industry and the information technology industry has received great attention from the nation and from industry. The laws, regulations and ethical norms related to automatic driving are gradually improving and maturing, and technical research on every aspect of automatic driving is steadily deepening and achieving breakthroughs, so automatic driving is heading toward its next stage, although its research still has a long way to go. Automatic driving research covers three levels: environment perception, task decision and decision execution. At the present stage, research on the environment perception part is gradually maturing on the market; complete environment perception systems combining laser radar, millimeter wave radar, high-definition cameras and the like with a global positioning system have been realized, and a number of L2-level automatic driving assistance systems have reached the market.
Since the release of the international standard ISO 26262 for road vehicle functional safety, product functional safety standards have been highly valued by automobile manufacturers and component design manufacturers at home and abroad; naturally, the functional safety requirements must also be strictly followed when designing the system for the autonomous driving part.
An embedded system has computer functionality but is not referred to as a computer. It is a special-purpose computer system that is application-centric, whose software and hardware can be tailored, and that is suited to application systems with strict requirements on functionality, reliability, cost, volume and power consumption. An embedded system integrates the application software with the system hardware; it features a small software footprint, a high degree of automation and a fast response, and is particularly suitable for real-time, multi-task systems. An embedded system mainly consists of an embedded processor, related supporting hardware, an embedded operating system and the application software, and is a device capable of working independently.
An automatic driving automobile relies on the cooperation of artificial intelligence, visual computing, radar, monitoring devices and a global positioning system, so that a computer can operate the motor vehicle automatically and safely without any active human operation. At present, automatic driving hardware architectures based on a computer platform are available, but a computer platform inevitably suffers from drawbacks such as large volume and high power consumption. In view of this situation, an embedded-system-based automatic driving hardware architecture conforming to functional safety is proposed.
As shown in fig. 1, the prior art solution is a basic framework for implementing automatic driving environment perception and decision on a computer platform. After the various external sensor modules collect environmental data, the data are transmitted directly to a computer through Ethernet, CAN and RS232 interfaces; the computer performs data fusion and operation processing; automatic driving decisions such as road planning, obstacle avoidance and motion planning are realized by deep, complex algorithms; and finally the decision data are sent to a Vehicle Control Unit (VCU) through CANFD or USB communication to execute the vehicle control decision. In the existing scheme, the computer plays the central data fusion and decision-making role, and its hardware dictates relatively high power consumption; moreover, because so many sensors such as automobile radars, cameras and visual sensors hang off the computer, the computer receives a huge volume of data from these modules, which may lead to problems such as a heavy data processing load on the computer's core operation board, low real-time performance, low system reliability and poor communication stability.
Disclosure of Invention
The invention aims to at least solve the technical problems in the prior art, and particularly creatively provides an automatic driving architecture system and a working method which accord with the automobile functional safety standard.
In order to achieve the above object, the present invention provides an automatic driving architecture system conforming to the automobile functional safety standard, comprising: a sensor unit, a data acquisition unit, an intelligent interaction unit and an action execution unit;
the data sending end of the sensor unit is connected with the data receiving end of the data acquisition unit, the data sending end of the data acquisition unit is connected with the data receiving end of the intelligent interaction unit, and the data sending end of the intelligent interaction unit is connected with the data receiving end of the action execution unit.
Preferably, the data acquisition unit includes:
the fast access signal end of the processor is connected with the signal end of an SSD memory unit and the signal end of a DDR memory unit, the network signal end of the processor is connected with a network port RJ45, the USB signal end of the processor is connected with a USB interface, the 4-port PCIe signal end of the processor is connected with an Intel signal end, the single-port PCI signal end of the processor is connected with a random access unit DRAM, the signal input end of the DRAM is connected with the instant communication signal end EIM of a single-chip microcomputer, the SPI signal end of the single-chip microcomputer is connected with a controller MCP2517, the signal output end of the controller is connected with a CANFD signal end, the network signal end of the single-chip microcomputer is connected with a network cable interface RJ45, the USB signal end of the single-chip microcomputer is connected with a USB interface, the UART serial interface of the single-chip microcomputer is connected with an RS232 signal end, and the FLASH signal end of the single-chip microcomputer is connected with a FLASH memory.
Preferably, the sensor unit includes:
an OBD unit, a V2X unit, a vehicle-mounted camera assembly, a millimeter wave laser radar, a GPS and a detection radar; the OBD unit signal transmitting end is connected with the OBD signal receiving end of the data acquisition unit, the V2X unit signal transmitting end is connected with the V2X signal receiving end of the data acquisition unit, the vehicle-mounted camera assembly signal transmitting end is connected with the signal receiving end of the video assembly, the video assembly signal transmitting end is connected with the video receiving end of the data acquisition unit, the GPS signal transmitting end is connected with the GPS signal receiving end of the data acquisition unit, the millimeter wave laser radar signal transmitting end is connected with the millimeter wave radar signal receiving end of the data acquisition unit, and the detection radar signal transmitting end is connected with the detection radar signal receiving end of the data acquisition unit.
Preferably, the intelligent interaction unit includes:
the data fusion unit is used for collecting and arranging the collected data of the OBD unit, the V2X unit, the vehicle-mounted camera shooting assembly, the millimeter wave laser radar, the GPS and the detection radar through the data collection unit, and then performing fusion operation on the data, so that further processing of the data is realized.
Preferably, the intelligent interaction unit further comprises:
the deep learning unit is used for carrying out neural network learning on data of the OBD unit, the V2X unit, the vehicle-mounted camera component, the millimeter wave laser radar, the GPS and the detection radar, and continuously training and optimizing corresponding data, so that the intelligent automobile works more stably;
preferably, the intelligent interaction unit further comprises:
the path planning unit is used for performing optimized path selection for the driver after collecting GPS data, radar data and vehicle-mounted camera data, and planning a more reasonable driving route.
Preferably, the intelligent interaction unit further comprises:
the behavior selection unit is used for executing the work of the corresponding components of the vehicle and transmitting the work to the action execution unit for intelligent control of the vehicle.
Preferably, the action execution unit includes: components that act on the working data sent by the intelligent interaction unit to execute operations, so that the engine runs well, gears are shifted reasonably and smoothly, braking is applied in time when an obstacle is encountered, the steering system is adjusted reasonably, the airbag pops up in time in an emergency, and the instrument data of the intelligent automobile are displayed correctly.
The invention also discloses a working method of the automatic driving framework according with the automobile function safety standard, which comprises the following steps:
S1, collecting the sensing data of the corresponding sensor units, namely an OBD unit, a V2X unit, a vehicle-mounted camera assembly, a millimeter wave laser radar, a GPS and a detection radar; the OBD unit signal sending end is connected with the OBD signal receiving end of the data acquisition unit, the V2X unit signal sending end is connected with the V2X signal receiving end of the data acquisition unit, the vehicle-mounted camera assembly signal sending end is connected with the signal receiving end of the video assembly, the video assembly signal sending end is connected with the video receiving end of the data acquisition unit, the GPS signal sending end is connected with the GPS signal receiving end of the data acquisition unit, the millimeter wave laser radar signal sending end is connected with the millimeter wave radar signal receiving end of the data acquisition unit, and the detection radar signal sending end is connected with the detection radar signal receiving end of the data acquisition unit;
S2, sending the sensing data to the intelligent interaction unit through the data acquisition unit: the fast access signal end of the processor is connected with the signal end of the SSD memory unit and the signal end of the DDR memory unit, the network signal end of the processor is connected with a network port RJ45, the USB signal end of the processor is connected with a USB interface, the 4-port PCIe signal end of the processor is connected with an Intel signal end, the single-port PCI signal end of the processor is connected with a random access unit DRAM, the signal input end of the DRAM is connected with the instant communication signal end EIM of the single-chip microcomputer, the SPI signal end of the single-chip microcomputer is connected with a controller MCP2517, the signal output end of the controller is connected with a CANFD signal end, the network signal end of the single-chip microcomputer is connected with a network cable interface RJ45, the USB signal end of the single-chip microcomputer is connected with a USB interface, the UART serial interface of the single-chip microcomputer is connected with an RS232 signal end, and the FLASH signal end of the single-chip microcomputer is connected with a FLASH memory;
s3, the intelligent interaction unit sends the information to the action execution unit to control the operation of the intelligent automobile;
the data fusion unit is used for collecting and sorting the data of the OBD unit, the V2X unit, the vehicle-mounted camera assembly, the millimeter wave laser radar, the GPS and the detection radar acquired through the data acquisition unit and then carrying out data fusion operation, so as to realize further processing of the data; the deep learning unit is used for carrying out neural network learning on the data of the OBD unit, the V2X unit, the vehicle-mounted camera assembly, the millimeter wave laser radar, the GPS and the detection radar, and continuously training and optimizing the corresponding data, so that the intelligent automobile works more stably; the path planning unit is used for performing optimized path selection for the driver after collecting GPS data, radar data and vehicle-mounted camera data, and planning a more reasonable driving route; the behavior selection unit is used for selecting the work to be executed by the corresponding vehicle components and transmitting it to the action execution unit for intelligent control of the vehicle; and the action execution unit includes components that act on the working data sent by the intelligent interaction unit to execute operations, so that the engine runs well, gears are shifted reasonably and smoothly, braking is applied in time when an obstacle is encountered, the steering system is adjusted reasonably, the airbag pops up in time in an emergency, and the instrument data of the intelligent automobile are displayed correctly.
In summary, due to the adoption of the technical scheme, the invention has the beneficial effects that:
the problems of high power consumption and large occupied space in installation in the use process of hardware equipment are solved; the real-time problem in the data processing and responding process is improved; optimizing timeliness in decision execution.
The whole power consumption of the automatic driving system is reduced; secondly, the occupation of the installation space of the hardware equipment of the automatic driving system can be reduced; moreover, the speed of transmitting the perception data to the decision center can be increased by using a communication technology based on 1000base/T, and the real-time performance of data processing and control response is greatly improved; the sensitivity of automatic driving control is enhanced.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a prior art schematic;
FIG. 2 is a schematic diagram of the circuit architecture of the present invention;
FIG. 3 is a schematic diagram of the circuit architecture of the present invention;
FIG. 4 is a schematic diagram of a circuit architecture of a data acquisition unit according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
As shown in fig. 2, the present invention integrates all signal data of the external sensing devices through the embedded device and unifies and summarizes the data; the data are transmitted to an AI algorithm unit using a high-speed 1000base/T communication technology; meanwhile, the arithmetic unit and the VCU are linked only through a single-channel CANFD bus; this simplifies the interfaces of the operation unit and enhances the real-time performance and effectiveness of communication.
as shown in fig. 3, firstly, the laser radar, the millimeter wave radar, the high definition camera and the vision sensor, the global positioning system, the V2X and the self data interface of the autonomous vehicle, which are equipped on the body of the autonomous vehicle, acquire the external environmental factors of the autonomous vehicle and the self data of the autonomous vehicle and form data of corresponding formats, and access the data to the interface data conversion unit to realize the extraction of the sensing data through the data communication mode (such as CAN network, gigabit ethernet, RS232, and the like) supported by the sensing equipment.
Secondly, the data generated by the interface data conversion unit are transmitted to the embedded automatic driving decision main control unit in a communication mode based on the 1000base/T technology. The automatic driving main control unit performs data fusion on the received data using deep-learning-based, real-time and AI algorithms, and then performs control decision operations such as motion trend judgment, behavior selection and motion planning.
And finally, the automatic driving main control unit pushes the control decision data to a Vehicle Control Unit (VCU) in a CANFD communication mode, the vehicle control unit drives each electric control unit of the vehicle to execute decision actions according to the received control data, and mechanisms such as a direction, an accelerator and a brake of the automatic driving vehicle are controlled to realize automatic driving.
The sensing part of the system uses four lateral millimeter wave radars and two forward millimeter wave radars to detect obstacles around the vehicle; one 32-beam and two 16-beam laser radars, combined with the global positioning system, construct a high-precision map and monitor and track information such as the states and motion trends of multiple moving and static targets; high-definition cameras in four directions monitor the environment around the vehicle body and can be used to recognize lane lines, traffic signs and other features; the V2X module performs information interaction between vehicles and between the vehicle and roadside internet nodes; the vehicle body network bus data provide information such as the position and speed of the vehicle; and the data acquired by all the sensing equipment are summarized and sent through the data conversion unit.
The decision algorithm part adopts a mode in which multiple AI cores and an image signal processing module run in an embedded system; the image signal processing module processes visual information using deep visual learning and assists the AI cores in path planning, trajectory planning and the like for the autonomous vehicle.
The execution part controls each actuator of the automatic driving vehicle through a CANFD data bus to execute the decisions.
The embedded system has the advantages of strong specialization, a small system kernel, high real-time performance, an open and scalable architecture, strong stability, a long life cycle and the like. Firstly, the scheme replaces the original computer platform with an embedded system platform and applies the embedded system to the automatic driving hardware architecture; it integrates the data of all sensors into one module, simplifies the data interface between the automatic driving sensing part and the decision part, improves the accuracy and real-time performance of the data received by the decision board, relieves the task processing pressure on the decision board, and improves the stability of the system; moreover, all hardware designs in the scheme conform to the functional safety standard.
As shown in fig. 4, the millimeter wave radar signals, the V2X data and the OBD data of the whole vehicle are fed into the data acquisition unit through CAN or CANFD interfaces, and the audio/video signals, the laser radar signals and the GPS signals are fed into the data acquisition unit through Ethernet interfaces; the external sensing data and the internal vehicle state information of the whole vehicle are summarized and preprocessed by the data acquisition unit and then forwarded to the decision operation part over the 1000base/T-based link; the data acquisition unit relieves the original decision operation unit of its data acquisition task, so the response speed of decision operation processing can be improved.
The processor is an Intel Atom processor, and the single-chip microcomputer is an ARM Cortex-A9 series chip.
the fast access signal end of the processor is connected with the signal end of the SSD storage unit and the signal end of the DDR storage unit, the network signal end of the processor is connected with a network port RJ45, the USB signal end of the processor is connected with a USB interface, the PCIe signal end of the processor 4 is connected with an Intel signal end, the single port PCI signal end of the processor is connected with a random access unit DRAM (IDT70V261), the signal input end of the DRAM is connected with an instant communication signal end EIM of a single chip microcomputer, the SPI signal end of the single chip microcomputer is connected with a controller MCP2517, the signal output end of the controller is connected with a CANFD signal end, the network signal end of the single chip microcomputer is connected with a network line interface RJ45, the USB signal end of the single chip microcomputer is connected with a USB interface, the UART serial interface of the single chip microcomputer is connected with an RS232 signal end, and the flash signal end of the single chip microcomputer is connected.
VCU: Vehicle Control Unit, which implements vehicle control decisions
CAN: Controller Area Network, an ISO internationally standardized serial communication protocol
CANFD: CAN with Flexible Data rate, a variable-rate extension of CAN
USB: Universal Serial Bus
RS232: a universal asynchronous serial transmission interface
V2X: Vehicle To Everything, the exchange of information between the vehicle and the outside world
The sensor unit includes: an OBD unit, a V2X unit, a vehicle-mounted camera assembly, a millimeter wave laser radar, a GPS and a detection radar; the OBD unit signal transmitting end is connected with the OBD signal receiving end of the data acquisition unit, the V2X unit signal transmitting end is connected with the V2X signal receiving end of the data acquisition unit, the vehicle-mounted camera assembly signal transmitting end is connected with the signal receiving end of the video assembly, the video assembly signal transmitting end is connected with the video receiving end of the data acquisition unit, the GPS signal transmitting end is connected with the GPS signal receiving end of the data acquisition unit, the millimeter wave laser radar signal transmitting end is connected with the millimeter wave radar signal receiving end of the data acquisition unit, and the detection radar signal transmitting end is connected with the detection radar signal receiving end of the data acquisition unit. Each signal receiving end of the data acquisition unit collects its data through a specific signal receiving interface and sends the collected data to the intelligent interaction unit for data processing.
The intelligent interaction unit comprises: the data fusion unit is used for collecting and sorting the collected data of the OBD unit, the V2X unit, the vehicle-mounted camera component, the millimeter wave laser radar, the GPS and the detection radar through the data collection unit, then carrying out data fusion operation, thereby realizing the further processing of the data,
the deep learning unit is used for carrying out neural network learning on data of the OBD unit, the V2X unit, the vehicle-mounted camera component, the millimeter wave laser radar, the GPS and the detection radar, and continuously training and optimizing corresponding data, so that the intelligent automobile works more stably;
the path planning unit is used for performing optimized path selection for the driver after collecting GPS data, radar data and vehicle-mounted camera data, and planning a more reasonable driving route;
the behavior selection unit is used for executing the work of the corresponding components of the vehicle and transmitting the work to the action execution unit for the intelligent control of the vehicle,
the action execution units include: the intelligent automobile instrument data display method has the advantages that the working data sent by the intelligent interaction unit are acted to execute operation, the engine is enabled to operate well, the gear operation is reasonable, the operation is smooth, the brake and the brake can be timely carried out when the automobile instrument data display method encounters obstacles, the steering wheel operation system is reasonably adjusted, the air bag is timely popped up when the automobile instrument data display method encounters an emergency, and the instrument data of the intelligent automobile are correctly displayed.
The method utilizes a 77GHz millimeter wave radar and an industrial-grade high-definition camera to detect the traffic condition of the intersection. The millimeter wave radar outputs the type, width, length, existence probability, relative position from the road surface origin and relative speed of the target to be detected, and the camera outputs the image information of the road junction to be detected.
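As a concrete illustration of the target-level radar output just listed, the following Python sketch defines a hypothetical container for one detection (the field names and example values are assumptions, not the radar's actual message format), carrying exactly the quantities named above: type, width, length, existence probability, position relative to the road origin, and relative speed.

```python
from dataclasses import dataclass

@dataclass
class RadarTarget:
    """One target-level detection from the 77 GHz millimeter wave radar (illustrative)."""
    target_id: int
    target_type: str        # e.g. "car", "pedestrian", "truck"
    width_m: float          # estimated width in metres
    length_m: float         # estimated length in metres
    exist_prob: float       # existence probability in [0, 1]
    x_m: float              # longitudinal position relative to the road origin
    y_m: float              # lateral position relative to the road origin
    rel_speed_mps: float    # relative speed in metres per second

# Example: a plausible detection of a car near the intersection.
example = RadarTarget(target_id=7, target_type="car", width_m=1.8, length_m=4.5,
                      exist_prob=0.93, x_m=22.4, y_m=-1.6, rel_speed_mps=-3.2)
print(example)
```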
The path planning unit further comprises: S1, a first intersection detection system is arranged, a first millimeter wave radar signal output end is connected with a first single-chip microcomputer radar signal receiving end, a first high-definition camera signal output end is connected with a first embedded GPU camera signal receiving end, a first single-chip microcomputer signal output end is connected with a first switch radar signal receiving end, and a first embedded GPU signal output end is connected with a first switch camera signal receiving end; a second intersection detection system is arranged, a second millimeter wave radar signal output end is connected with a second single-chip microcomputer radar signal receiving end, a second high-definition camera signal output end is connected with a second embedded GPU camera signal receiving end, a second single-chip microcomputer signal output end is connected with a second switch radar signal receiving end, and a second embedded GPU signal output end is connected with a second switch camera signal receiving end; an Nth intersection detection system is arranged, an Nth millimeter wave radar signal output end is connected with an Nth single-chip microcomputer radar signal receiving end, an Nth high-definition camera signal output end is connected with an Nth embedded GPU camera signal receiving end, an Nth single-chip microcomputer signal output end is connected with an Nth switch radar signal receiving end, and an Nth embedded GPU signal output end is connected with an Nth switch camera signal receiving end; the signal output end of the first switch is connected with a first signal receiving end of a main switch, the signal output end of the second switch is connected with a second signal receiving end of the main switch, the signal output end of the Nth switch is connected with an Nth signal receiving end of the main switch, and the signal output end of the main switch is connected with a signal receiving end of a database server; N intersection detection systems are thus arranged to collect and summarize data of vehicles and pedestrians at the intersections, and the data are stored through the database server and used for deep learning;
S2, the database server collects data of vehicles and pedestrians at an intersection and summarizes the data collected by all millimeter wave radars; the millimeter wave radar outputs the type of a detected target, the type is judged and determined, the width and length of the detected target are scanned according to the type, the probability of the detected target appearing at the corresponding intersection is calculated, the relative position of the detected target from the road origin is calculated by a fusion method combining the high-definition camera and the millimeter wave radar, the relative speed is calculated according to the movement of the detected target over time in the road coordinate system, and the high-definition camera outputs image information of the detected intersection;
s2-1, real-time screening the information of the target to be measured output by the millimeter wave radar according to the actual motion condition of the target to be measured;
S2-2, the radar signals emitted by the millimeter wave radar are accompanied by false targets, typically fixed, unchanging objects such as trees, fences and telegraph poles; using the tree, fence and telegraph pole information acquired by the millimeter wave radar and the high-definition camera, these fixed false targets are screened out and rejected from the radar data and image data acquired in real time;
S2-3, performing a first round of screening on the detected targets according to the width, length, position and confidence information given by the millimeter wave radar, so as to remove fixed objects such as trees, fences and telegraph poles; then tracking and filtering the continuously detected targets by using a Kalman filtering algorithm; and carrying out target life cycle management according to the estimation result of the Kalman filtering;
S2-4, after the first millimeter wave radar and the first high-definition camera at the first intersection and the second millimeter wave radar and the second high-definition camera at the second intersection have each obtained their detection results independently of one another, fusing the information of the two independent sets of detected targets by using an Elman neural network, and rejecting detected targets that cannot be matched.
S3, carrying out target recognition on the continuous image frames acquired by the high-definition camera by using the trained deep neural network, and at the same time calculating the position and speed parameters of the detected target by combining the calibration parameters of the high-definition camera; then tracking and filtering the state of the moving target by a Kalman filtering method; and finally carrying out target life cycle management according to the estimation result of the Kalman filtering.
The target life cycle management modifies missed-detection (or false-detection) states in the video and millimeter wave radar detections by means of a life cycle, so as to weaken jumps of the target state value in the detection results. Within the preset life cycle of a target, if the target is missed (or falsely detected), the original target is still considered to exist, and its state value is predicted (or corrected) using Kalman filtering; if the life cycle of the target is exceeded, the original target is considered to have disappeared and a new ID is assigned to the target. It should be noted that when the difference between the Kalman-filtered estimate and the sensor detection value exceeds a certain threshold, the life cycle of the measured target is reduced by 1.
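The life-cycle handling just described can be sketched with a constant-velocity Kalman filter. The following Python example (numpy) is a simplified illustration under assumed parameters — the state layout, noise matrices, gating threshold and initial life value are not specified in the patent: it predicts a target's position each frame, corrects with the sensor measurement when the two agree, and decrements the life counter when the measurement is missing or differs by more than the threshold; a target whose life reaches zero is dropped and would receive a new ID if re-detected.

```python
import numpy as np

DT = 0.05                 # frame period in seconds (assumed)
GATE = 3.0                # residual threshold in metres (assumed)
INITIAL_LIFE = 5          # preset life cycle in frames (assumed)

class TrackedTarget:
    """Constant-velocity Kalman track with a simple life-cycle counter (sketch)."""

    # State x = [px, py, vx, vy]; measurement z = [px, py].
    F = np.array([[1, 0, DT, 0],
                  [0, 1, 0, DT],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    Q = np.eye(4) * 0.1    # process noise (assumed)
    R = np.eye(2) * 0.5    # measurement noise (assumed)

    def __init__(self, target_id, px, py):
        self.target_id = target_id
        self.x = np.array([px, py, 0.0, 0.0])
        self.P = np.eye(4)
        self.life = INITIAL_LIFE

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """Correct with a measurement, or consume one life if it disagrees / is missing."""
        if z is None:                               # missed detection: keep the prediction
            self.life -= 1
            return self.x[:2]
        z = np.asarray(z, dtype=float)
        residual = z - self.H @ self.x
        if np.linalg.norm(residual) > GATE:         # detection deemed unreliable
            self.life -= 1
            return self.x[:2]                       # output the predicted value
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ residual
        self.P = (np.eye(4) - K @ self.H) @ self.P
        self.life = INITIAL_LIFE                    # a consistent detection refreshes life
        return self.x[:2]

    @property
    def alive(self):
        return self.life > 0

# Usage: track one target over a few frames, with one dropout and one outlier.
track = TrackedTarget(target_id=1, px=20.0, py=0.0)
for z in ([20.2, 0.1], [20.4, 0.1], None, [35.0, 5.0], [20.9, 0.2]):
    track.predict()
    pos = track.update(z)
    print(f"pos={pos.round(2)} life={track.life} alive={track.alive}")
```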
The method screens the target information output by the millimeter wave radar according to the actual conditions. It should be noted that the signals given by the radar are often accompanied by some false targets, generally including trees, fences, electric poles and the like, so a first round of screening is performed on the detected targets according to the target width, length, position and confidence information given by the radar; then the continuously detected targets are tracked and filtered by using a Kalman filtering algorithm; and finally target life cycle management is carried out according to the estimation result of the Kalman filtering.
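A first-round screening step of this kind can be expressed as a simple filter. The sketch below uses illustrative plausibility bounds (the numeric thresholds are assumptions, not figures from the patent) and reuses the hypothetical RadarTarget container from the earlier example, discarding detections whose width, length, range or confidence fall outside the expected range for vehicles and pedestrians.

```python
# Assumed plausibility bounds for vehicles and pedestrians; the patent gives no numbers.
MIN_CONF = 0.4
WIDTH_RANGE_M = (0.3, 3.0)       # narrower or wider than plausible road users: reject
LENGTH_RANGE_M = (0.3, 15.0)
MAX_RANGE_M = 120.0              # beyond the monitored intersection area: reject

def first_round_screen(targets):
    """Drop radar detections that look like fixed clutter (trees, fences, poles)."""
    kept = []
    for t in targets:
        distance = (t.x_m ** 2 + t.y_m ** 2) ** 0.5
        if (t.exist_prob >= MIN_CONF
                and WIDTH_RANGE_M[0] <= t.width_m <= WIDTH_RANGE_M[1]
                and LENGTH_RANGE_M[0] <= t.length_m <= LENGTH_RANGE_M[1]
                and distance <= MAX_RANGE_M):
            kept.append(t)
    return kept
```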
After the target-level detection results of the two types of sensors are obtained, the two sets of target information are fused by using an Elman neural network.
VGG16-based SSD network (image recognition):
S3-1, inputting each frame of image (300 × 300) collected by each high-definition camera into the deep learning SSD model of the corresponding embedded GPU, the core of which is a modified and trained VGG16 detection network; (1. here the detection model runs in the embedded GPU of each camera rather than in the back end; 2. the detection model is SSD, and VGG16 is part of the SSD)
The S3-1 comprises:
S3-A, converting the fully connected layers FC6 and FC7 of the detection network VGG16 into convolutional layers Conv6 and Conv7;
S3-B, removing the Dropout layer, which is used to prevent overfitting, and the fully connected layer FC8 of the detection network VGG16;
S3-C, adopting dilated convolution (the atrous algorithm, i.e. convolution with holes);
and S3-D, changing the pooling layer Pool5 of the detection network VGG16 from a 2 × 2 kernel with stride 2 (2 × 2-s2) to a 3 × 3 kernel with stride 1 (3 × 3-s1); a sketch of these modifications follows the note below.
(2 × 2-s2 means a 2 × 2 convolution kernel with a shift step size of 2; 3 × 3-s1 means a 3 × 3 convolution kernel with a shift step size of 1)
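To make steps S3-A to S3-D concrete, the sketch below shows one way to apply them to a torchvision VGG16 in PyTorch. It is an interpretation under assumptions (the layer indices, channel counts and the dilation/padding value of 6 follow common SSD practice and are not prescribed verbatim here): pool5 becomes 3 × 3 with stride 1, FC6/FC7 are re-expressed as (atrous) convolutions Conv6/Conv7, and FC8 together with Dropout is simply not carried over.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

# Illustrative sketch of S3-A..S3-D; weight conversion from FC6/FC7 is omitted.
vgg = vgg16(weights=None)                     # detection backbone only
features = list(vgg.features.children())

# S3-D: replace pool5 (2x2, stride 2) with a 3x3, stride-1 pooling layer.
features[-1] = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)

# S3-A + S3-C: FC6/FC7 expressed as convolutions, Conv6 using atrous (dilated) convolution.
conv6 = nn.Conv2d(512, 1024, kernel_size=3, padding=6, dilation=6)
conv7 = nn.Conv2d(1024, 1024, kernel_size=1)

# S3-B: the VGG16 classifier (Dropout, FC8) is not carried over at all.
backbone = nn.Sequential(*features,
                         conv6, nn.ReLU(inplace=True),
                         conv7, nn.ReLU(inplace=True))

x = torch.randn(1, 3, 300, 300)               # one 300x300 camera frame, as in S3-1
feature_map = backbone(x)                     # feature map that the SSD heads would consume
print(feature_map.shape)
```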
S3-2, extracting the feature maps of the convolutional layers Conv4_3, Conv7, Conv8_2, Conv9_2, Conv10_2 and Conv11_2 of the detection network, constructing 6 bounding boxes of different sizes at each feature point of these feature maps, and detecting and classifying the boxes constructed at the feature points to generate a set of candidate bounding boxes;
and S3-3, combining the bounding boxes generated from the different feature maps, and suppressing overlapping or incorrect candidate boxes after matching by the non-maximum suppression method (NMS), so as to obtain the final detection results for the vehicle and pedestrian targets.
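The non-maximum suppression step of S3-3 can be illustrated with a compact numpy routine. This is a generic greedy NMS sketch (the IoU threshold of 0.45 is a typical SSD value and an assumption here, not a figure from the patent): boxes gathered from all feature maps are sorted by score, and any box overlapping an already-kept box too strongly is suppressed.

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_threshold: float = 0.45) -> list:
    """Greedy non-maximum suppression; boxes are [x1, y1, x2, y2] rows."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]          # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the best remaining box with all others still in play.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_threshold]   # drop heavily overlapping boxes
    return keep

# Example: three candidate boxes, two of which overlap strongly.
boxes = np.array([[10, 10, 60, 60], [12, 12, 58, 62], [100, 100, 150, 160]], float)
scores = np.array([0.9, 0.75, 0.8])
print(nms(boxes, scores))   # -> [0, 2]
```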
Elman neural network (target fusion):
The input layer is the state parameters (horizontal position, vertical position and speed) of the target obtained by the different sensors, and the output layer is the final state parameters of the target.
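A minimal PyTorch sketch of such a fusion network follows. It assumes the input/output arrangement stated above — six inputs (x, y, speed from the radar and from the camera) and three fused outputs — and uses torch.nn.RNN, which implements an Elman-style recurrent layer; the hidden size, sequence handling and absence of a training loop are illustrative assumptions, and no trained weights are implied.

```python
import torch
import torch.nn as nn

class ElmanFusion(nn.Module):
    """Fuse radar and camera target states (x, y, speed) into one estimate (sketch)."""

    def __init__(self, hidden_size: int = 16):
        super().__init__()
        # nn.RNN with the default tanh nonlinearity is an Elman-style recurrent layer.
        self.rnn = nn.RNN(input_size=6, hidden_size=hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, 3)   # fused x, y, speed

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, time, 6) = per-frame [radar x, y, v, camera x, y, v]
        hidden_states, _ = self.rnn(seq)
        return self.out(hidden_states[:, -1])  # fused state at the latest frame

# Example: fuse a 10-frame history of one target observed by both sensors.
model = ElmanFusion()
radar_and_camera = torch.randn(1, 10, 6)       # stand-in for real measurements
fused_state = model(radar_and_camera)
print(fused_state.shape)                        # torch.Size([1, 3])
```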
Through the above system architecture design, the detection and analysis of vehicles and pedestrians at the intersection are realized.
When a frame of image and a frame of radar messages are captured simultaneously, on one hand the SSD neural network model based on deep learning frames out all existing target positions in the image and gives their target types, thereby obtaining the pixel coordinates of the detected targets in the image; using the camera calibration parameters, the pixel coordinates of each target object are converted into geodetic plane coordinates, and the speed of each target object is calculated from the change of its position across consecutive images; the state parameters of the target are then optimally estimated by a Kalman filtering method based on the predicted value from the previous image frame and the detected value of the current frame; for a target whose difference between the predicted value and the detected value exceeds a certain threshold, the current detection result is considered unreliable, the predicted value is output directly as the result, and the life cycle of the target is reduced by 1.
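The conversion from pixel coordinates to geodetic plane coordinates mentioned above is typically done with a ground-plane homography obtained during camera calibration. The sketch below assumes such a 3 × 3 homography H is already available (the matrix values are placeholders, not real calibration data) and applies the standard projective mapping; speed then follows from the change of ground coordinates between consecutive frames.

```python
import numpy as np

# Placeholder ground-plane homography; real values come from calibrating the
# intersection camera, not from this document.
H = np.array([[0.02,   0.001, -5.0],
              [0.0005, 0.03, -12.0],
              [0.0,    0.0008, 1.0]])

def pixel_to_ground(u: float, v: float, H: np.ndarray) -> np.ndarray:
    """Map an image pixel (u, v) to plane coordinates via the homography H."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]                    # perspective division

def ground_speed(p_prev: np.ndarray, p_curr: np.ndarray, dt: float) -> float:
    """Speed from the displacement of ground coordinates between two frames."""
    return float(np.linalg.norm(p_curr - p_prev) / dt)

# Example: the same target seen at two pixel positions 0.05 s apart.
p1 = pixel_to_ground(640, 360, H)
p2 = pixel_to_ground(652, 366, H)
print(p1, p2, ground_speed(p1, p2, dt=0.05))
```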
On the other hand, from the multiple target records given by the radar, objects whose width, length and position obviously do not conform to the objective parameter range of the detected object, or whose confidence value is lower than a set threshold, are first removed; then a predicted value of the target's parameters is calculated from the previous frame of radar messages by the Kalman filtering method and compared with the detected value of the current frame; if the difference between the predicted value and the detected value is smaller than a certain threshold, the radar's current frame parameters are output directly as the result; if it exceeds that threshold, the current radar detection is considered unreliable, an optimal estimate is obtained by combining the predicted value with the detected value, and the life cycle of the target is reduced by 1.
The target-level state information obtained by the sensors is transmitted back to a back-end terminal through a wired or wireless network, and targets whose position and speed parameters are approximately equal across different sensors are associated and matched (considered to be the same target). For the type of the target object, the image detection result is taken as the final output; the state parameters of the target obtained by image detection and by radar detection are fed into the input layer of the Elman neural network, and the result of the network's output layer is taken as the final state parameter of the target.
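The association step described here can be sketched as a simple nearest-match rule. The thresholds below are illustrative assumptions (no values are specified in the text): camera and radar targets whose positions and speeds are close enough are treated as the same physical object, the class label is taken from the image detection, and the paired states form the six-element vector that would be fed to the Elman fusion network's input layer.

```python
import numpy as np

POS_GATE_M = 2.0      # maximum position difference for a match (assumed)
SPEED_GATE_MPS = 1.5  # maximum speed difference for a match (assumed)

def associate(camera_targets, radar_targets):
    """Pair camera and radar detections that agree in position and speed (sketch).

    Each target is a dict like {"id": 3, "cls": "car", "pos": np.array([x, y]), "speed": v};
    radar targets carry no class label. Returns (matched pairs, unmatched radar leftovers).
    """
    matches, used_radar = [], set()
    for cam in camera_targets:
        best, best_dist = None, POS_GATE_M
        for j, rad in enumerate(radar_targets):
            if j in used_radar:
                continue
            dist = np.linalg.norm(cam["pos"] - rad["pos"])
            if dist < best_dist and abs(cam["speed"] - rad["speed"]) < SPEED_GATE_MPS:
                best, best_dist = j, dist
        if best is not None:
            used_radar.add(best)
            rad = radar_targets[best]
            matches.append({
                "cls": cam["cls"],                              # class from the image result
                "elman_input": np.concatenate([rad["pos"], [rad["speed"]],
                                               cam["pos"], [cam["speed"]]]),
            })
    unmatched = [r for j, r in enumerate(radar_targets) if j not in used_radar]
    return matches, unmatched

# Example with one agreeing pair and one radar-only target.
cams = [{"id": 1, "cls": "pedestrian", "pos": np.array([4.0, 1.0]), "speed": 1.2}]
rads = [{"id": 9, "pos": np.array([4.5, 1.3]), "speed": 1.0},
        {"id": 10, "pos": np.array([30.0, 2.0]), "speed": 8.0}]
print(associate(cams, rads))
```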
Although embodiments of the present invention have been shown and described, many changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.

Claims (8)

1. An autopilot architecture system that complies with functional safety standards for automobiles, comprising: a sensor unit, a data acquisition unit, an intelligent interaction unit and an action execution unit;
the sensor unit data sending end is connected with the data acquisition unit data receiving end, the data acquisition unit data sending end is connected with the intelligent interaction unit data receiving end, and the intelligent interaction unit data sending end is connected with the action execution unit data receiving end;
the intelligent interaction unit comprises:
the path planning unit is used for performing optimized path selection for the driver after collecting GPS data, radar data and vehicle-mounted camera data, and planning a more reasonable driving route;
the path planning unit includes: a, a first intersection detection system is arranged, a first millimeter wave radar signal output end is connected with a first single-chip microcomputer radar signal receiving end, a first high-definition camera signal output end is connected with a first embedded GPU camera signal receiving end, a first single-chip microcomputer signal output end is connected with a first switch radar signal receiving end, and a first embedded GPU signal output end is connected with a first switch camera signal receiving end; a second intersection detection system is arranged, a second millimeter wave radar signal output end is connected with a second single-chip microcomputer radar signal receiving end, a second high-definition camera signal output end is connected with a second embedded GPU camera signal receiving end, a second single-chip microcomputer signal output end is connected with a second switch radar signal receiving end, and a second embedded GPU signal output end is connected with a second switch camera signal receiving end; an Nth intersection detection system is arranged, an Nth millimeter wave radar signal output end is connected with an Nth single-chip microcomputer radar signal receiving end, an Nth high-definition camera signal output end is connected with an Nth embedded GPU camera signal receiving end, an Nth single-chip microcomputer signal output end is connected with an Nth switch radar signal receiving end, and an Nth embedded GPU signal output end is connected with an Nth switch camera signal receiving end; the signal output end of the first switch is connected with a first signal receiving end of a main switch, the signal output end of the second switch is connected with a second signal receiving end of the main switch, the signal output end of the Nth switch is connected with an Nth signal receiving end of the main switch, and the signal output end of the main switch is connected with a signal receiving end of a database server; N intersection detection systems are thus arranged to collect and summarize data of vehicles and pedestrians at the intersections, and the data are stored through the database server and used for deep learning;
b, the database server collects vehicle and pedestrian data at an intersection and summarizes the data collected by all millimeter wave radars; the millimeter wave radar outputs the type of a detected target, the type is judged and determined, the width and length of the detected target are scanned according to the type, the relative position of the detected target from the road origin is calculated by a fusion method combining the high-definition camera and the millimeter wave radar, the relative speed is calculated according to the movement of the detected target over time in the road coordinate system, and the high-definition camera outputs image information of the detected intersection;
b-1, screening the information of the target to be measured output by the millimeter wave radar in real time according to the actual motion condition of the target to be measured;
b-2, the radar signals transmitted by the millimeter wave radar are accompanied by false targets, typically fixed, unchanging objects such as trees, fences and telegraph poles; using the tree, fence and telegraph pole information acquired by the millimeter wave radar and the high-definition camera, these fixed false targets are screened out and rejected from the radar data and image data acquired in real time;
b-3, performing a first round of screening on the detected targets according to the width, length, position and confidence information given by the millimeter wave radar, so as to remove fixed objects such as trees, fences and telegraph poles; then tracking and filtering the continuously detected targets by using a Kalman filtering algorithm; and carrying out target life cycle management according to the estimation result of the Kalman filtering;
b-4, after the first millimeter wave radar and the first high-definition camera at the first intersection and the second millimeter wave radar and the second high-definition camera at the second intersection have each obtained their detection results independently of one another, fusing the information of the two independent sets of detected targets by using an Elman neural network, and rejecting detected targets that cannot be matched;
c, carrying out target recognition on the continuous image frames acquired by the high-definition camera by using the trained deep neural network, and at the same time calculating the position and speed parameters of the detected target by combining the calibration parameters of the high-definition camera; then tracking and filtering the state of the moving target by a Kalman filtering method; and finally carrying out target life cycle management according to the estimation result of the Kalman filtering.
2. The vehicle functional safety standard compliant autopilot architecture system of claim 1 wherein the data collection unit comprises:
the fast access signal end of the processor is connected with the signal end of an SSD memory unit and the signal end of a DDR memory unit, the network signal end of the processor is connected with a network port RJ45, the USB signal end of the processor is connected with a USB interface, the 4-port PCIe signal end of the processor is connected with an Intel signal end, the single-port PCI signal end of the processor is connected with a random access unit DRAM, the signal input end of the DRAM is connected with the instant communication signal end EIM of a single-chip microcomputer, the SPI signal end of the single-chip microcomputer is connected with a controller MCP2517, the signal output end of the controller is connected with a CANFD signal end, the network signal end of the single-chip microcomputer is connected with a network cable interface RJ45, the USB signal end of the single-chip microcomputer is connected with a USB interface, the UART serial interface of the single-chip microcomputer is connected with an RS232 signal end, and the FLASH signal end of the single-chip microcomputer is connected with a FLASH memory.
3. The vehicle functional safety standard compliant autopilot architecture system of claim 1 wherein the sensor unit comprises:
an OBD unit, a V2X unit, a vehicle-mounted camera assembly, a millimeter wave laser radar, a GPS and a detection radar; the OBD unit signal transmitting end is connected with the OBD signal receiving end of the data acquisition unit, the V2X unit signal transmitting end is connected with the V2X signal receiving end of the data acquisition unit, the vehicle-mounted camera assembly signal transmitting end is connected with the signal receiving end of the video assembly, the video assembly signal transmitting end is connected with the video receiving end of the data acquisition unit, the GPS signal transmitting end is connected with the GPS signal receiving end of the data acquisition unit, the millimeter wave laser radar signal transmitting end is connected with the millimeter wave radar signal receiving end of the data acquisition unit, and the detection radar signal transmitting end is connected with the detection radar signal receiving end of the data acquisition unit.
4. The vehicle functional safety standard compliant autopilot architecture system of claim 1 wherein the intelligent interactive element comprises:
the data fusion unit is used for collecting and arranging the collected data of the OBD unit, the V2X unit, the vehicle-mounted camera shooting assembly, the millimeter wave laser radar, the GPS and the detection radar through the data collection unit, and then performing fusion operation on the data, so that further processing of the data is realized.
5. The vehicle functional safety standard compliant autopilot architecture system of claim 1 wherein the intelligent interactive element further comprises:
the deep learning unit is used for carrying out neural network learning on OBD unit, V2X unit, vehicle-mounted camera component, millimeter wave laser radar, GPS and detection radar data, and continuously trains and optimizes corresponding data, so that the intelligent automobile works more stably.
6. The vehicle functional safety standard compliant autopilot architecture system of claim 1 wherein the intelligent interactive element further comprises:
the behavior selection unit is used for executing the work of the corresponding components of the vehicle and transmitting the work to the action execution unit for intelligent control of the vehicle.
7. The vehicle functional safety standard compliant autopilot architecture system of claim 1 wherein the action execution unit comprises: components that act on the working data sent by the intelligent interaction unit to execute operations, so that the engine runs well, gears are shifted reasonably and smoothly, braking is applied in time when an obstacle is encountered, the steering system is adjusted reasonably, the airbag pops up in time in an emergency, and the instrument data of the intelligent automobile are displayed correctly.
8. An operating method of the automatic driving architecture system conforming to the automobile functional safety standard according to claim 1, characterized by comprising the following steps:
S1, collecting the sensing data of the corresponding sensor units, namely an OBD unit, a V2X unit, a vehicle-mounted camera assembly, a millimeter wave laser radar, a GPS and a detection radar; the OBD unit signal sending end is connected with the OBD signal receiving end of the data acquisition unit, the V2X unit signal sending end is connected with the V2X signal receiving end of the data acquisition unit, the vehicle-mounted camera assembly signal sending end is connected with the signal receiving end of the video assembly, the video assembly signal sending end is connected with the video receiving end of the data acquisition unit, the GPS signal sending end is connected with the GPS signal receiving end of the data acquisition unit, the millimeter wave laser radar signal sending end is connected with the millimeter wave radar signal receiving end of the data acquisition unit, and the detection radar signal sending end is connected with the detection radar signal receiving end of the data acquisition unit;
S2, sending the sensing data to the intelligent interaction unit through the data acquisition unit: the fast access signal end of the processor is connected with the signal end of the SSD memory unit and the signal end of the DDR memory unit, the network signal end of the processor is connected with a network port RJ45, the USB signal end of the processor is connected with a USB interface, the 4-port PCIe signal end of the processor is connected with an Intel signal end, the single-port PCI signal end of the processor is connected with a random access unit DRAM, the signal input end of the DRAM is connected with the instant communication signal end EIM of the single-chip microcomputer, the SPI signal end of the single-chip microcomputer is connected with a controller MCP2517, the signal output end of the controller is connected with a CANFD signal end, the network signal end of the single-chip microcomputer is connected with a network cable interface RJ45, the USB signal end of the single-chip microcomputer is connected with a USB interface, the UART serial interface of the single-chip microcomputer is connected with an RS232 signal end, and the FLASH signal end of the single-chip microcomputer is connected with a FLASH memory;
s3, the intelligent interaction unit sends the information to the action execution unit to control the operation of the intelligent automobile;
the data fusion unit is used for collecting and sorting the data of the OBD unit, the V2X unit, the vehicle-mounted camera assembly, the millimeter wave laser radar, the GPS and the detection radar acquired through the data acquisition unit and then carrying out data fusion operation, so as to realize further processing of the data; the deep learning unit is used for carrying out neural network learning on the data of the OBD unit, the V2X unit, the vehicle-mounted camera assembly, the millimeter wave laser radar, the GPS and the detection radar, and continuously training and optimizing the corresponding data, so that the intelligent automobile works more stably; the path planning unit is used for performing optimized path selection for the driver after collecting GPS data, radar data and vehicle-mounted camera data, and planning a more reasonable driving route; the behavior selection unit is used for selecting the work to be executed by the corresponding vehicle components and transmitting it to the action execution unit for intelligent control of the vehicle; and the action execution unit includes components that act on the working data sent by the intelligent interaction unit to execute operations, so that the engine runs well, gears are shifted reasonably and smoothly, braking is applied in time when an obstacle is encountered, the steering system is adjusted reasonably, the airbag pops up in time in an emergency, and the instrument data of the intelligent automobile are displayed correctly.
CN201910367428.2A 2019-05-05 2019-05-05 Automatic driving architecture system meeting automobile function safety standard and working method Active CN110083099B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910367428.2A CN110083099B (en) 2019-05-05 2019-05-05 Automatic driving architecture system meeting automobile function safety standard and working method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910367428.2A CN110083099B (en) 2019-05-05 2019-05-05 Automatic driving architecture system meeting automobile function safety standard and working method

Publications (2)

Publication Number Publication Date
CN110083099A CN110083099A (en) 2019-08-02
CN110083099B true CN110083099B (en) 2020-08-07

Family

ID=67418496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910367428.2A Active CN110083099B (en) 2019-05-05 2019-05-05 Automatic driving architecture system meeting automobile function safety standard and working method

Country Status (1)

Country Link
CN (1) CN110083099B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110568852A (en) * 2019-10-12 2019-12-13 深圳市布谷鸟科技有限公司 Automatic driving system and control method thereof
CN110962865A (en) * 2019-12-24 2020-04-07 国汽(北京)智能网联汽车研究院有限公司 Automatic driving safety computing platform
CN213007637U (en) * 2020-07-08 2021-04-20 深圳技术大学 Edge-cloud computing system of pure electric vehicle
CN112289027A (en) * 2020-10-27 2021-01-29 上海埃维汽车技术股份有限公司 Automatic driving architecture system meeting automobile function safety standard

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101438238A (en) * 2004-10-15 2009-05-20 伊塔斯公司 Method and system for anomaly detection
CN105190352A (en) * 2013-01-21 2015-12-23 专利实验室有限公司 Drive assistance device for motor vehicles
KR20180040020A (en) * 2016-10-11 2018-04-19 주식회사 만도 Driving assistant apparatus and driving assistant method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107031600A (en) * 2016-10-19 2017-08-11 东风汽车公司 Automated driving system based on highway
KR20190006418A (en) * 2017-07-10 2019-01-18 박석진 Safety sign lamp for vehicle
CN207473410U (en) * 2017-11-16 2018-06-08 尹新通 A kind of automobile intelligent servomechanism
CN108407808A (en) * 2018-04-23 2018-08-17 安徽车鑫保汽车销售有限公司 A kind of running car intelligent predicting system
CN108845577A (en) * 2018-07-13 2018-11-20 武汉超控科技有限公司 A kind of embedded auto-pilot controller and its method for safety monitoring

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101438238A (en) * 2004-10-15 2009-05-20 伊塔斯公司 Method and system for anomaly detection
CN105190352A (en) * 2013-01-21 2015-12-23 专利实验室有限公司 Drive assistance device for motor vehicles
KR20180040020A (en) * 2016-10-11 2018-04-19 주식회사 만도 Driving assistant apparatus and driving assistant method

Also Published As

Publication number Publication date
CN110083099A (en) 2019-08-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant