CN112101128B - Unmanned formula racing car perception planning method based on multi-sensor information fusion - Google Patents

Info

Publication number: CN112101128B
Application number: CN202010849407.7A
Authority: CN (China)
Prior art keywords: image, racing car, unmanned, point cloud, algorithm
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN112101128A
Inventors: Yin Guodong (殷国栋), Bai Shuo (柏硕)
Current Assignee: Southeast University
Original Assignee: Southeast University
Application filed by Southeast University; priority to CN202010849407.7A; publication of CN112101128A; application granted; publication of CN112101128B

Classifications

    • G06V 20/588: Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
    • B60W 40/10: Estimation or calculation of non-directly measurable driving parameters related to vehicle motion
    • G06F 18/251: Fusion techniques of input or preprocessed data
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/267: Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections
    • G06V 10/56: Extraction of image or video features relating to colour

Abstract

The invention provides a perception planning method for an unmanned formula racing car based on multi-sensor information fusion, built on a dual-motor-driven unmanned formula racing car with an operation control unit serving as the key signal coordination center of the system. The method completes obstacle identification based on lidar point cloud feature extraction and projection dimension reduction, filters discrete noise points according to outlier characteristics, detects and identifies cone barrels based on an image-enhanced maximally stable extremal region (MSER) method and geometric features, clusters with the DBSCAN algorithm, and fits the region boundaries. Lane marking lines are detected by Hough transform, and the road surface area is divided to generate a triangular adaptive region of interest. The optimal path through the road space is obtained with a goal-biased bidirectional rapidly-exploring random tree algorithm. The proposed multi-sensor information fusion perception planning strategy can quickly and accurately identify the track environment and realize unmanned control in the specific track environment.

Description

Unmanned formula racing car perception planning method based on multi-sensor information fusion
Technical Field
The invention relates to a perception planning technology for unmanned formula racing cars, and belongs to the technical field of unmanned driving.
Background
The Chinese college student driverless formula competition is sponsored by the China Society of Automotive Engineers and is a vehicle design and manufacturing competition in which university teams from automotive engineering and automobile-related majors participate. The competition integrates state-of-the-art unmanned vehicle technology; its dynamic events include a straight-line acceleration test, a figure-eight circuit test, a handling test (manned) and a high-speed tracking test, which mainly examine the perception, planning, decision-making and control functions of the competing vehicles and cover key technologies such as multi-sensor information fusion, point cloud obstacle recognition, image target recognition, lane line detection, and vehicle path planning and tracking. Key technologies of the unmanned formula racing car such as multi-sensor information fusion can be applied to the field of unmanned driving, are of great significance to driver assistance systems, and promote the development of new energy vehicle and intelligent vehicle technologies.
Therefore, aiming at the key technical problem of unmanned formula racing car perception planning, the invention provides a perception planning method based on multi-sensor information fusion. Built on an environment perception system with point cloud obstacle recognition, a redundant cone barrel detection algorithm, a goal-biased bidirectional rapidly-exploring random tree (RRT) algorithm, lane marking line detection, a drive-by-wire chassis system and the multi-sensor fusion of integrated navigation, the method can quickly and accurately recognize the corresponding track environment, realizes unmanned control in that track environment, and meets the real-time requirements of the unmanned formula racing car.
Disclosure of Invention
The technical problem is as follows:
At present, unmanned driving technology is not yet mature. Because the unmanned formula racing car must run on a preset track and the uncertainty of the surrounding environment, such as road and weather factors, is large, existing perception planning strategies for unmanned formula racing cars are poor in stability. In addition, the unmanned formula competition covers key technologies such as multi-sensor information fusion, point cloud obstacle recognition, image target recognition, lane line detection, and vehicle path planning and tracking, and the coordinated control of existing perception planning systems is poor in robustness. Aiming at these deficiencies of the prior art, the invention provides a perception planning method for the unmanned formula racing car based on multi-sensor fusion, which meets the real-time requirements of the unmanned formula racing car and realizes unmanned control under different track environments.
The technical scheme is as follows:
the unmanned formula racing car adopts a double-motor driving control technical scheme, combines a CPU and a GPU to carry out accelerated calculation, plans a driving path in real time, drives a bottom layer controller to realize vehicle driving, steering and braking, adopts a redundant cone barrel detection algorithm to ensure the accuracy of cone barrel identification, utilizes a target deviation type fast search random tree algorithm to optimize path planning, and accordingly realizes advanced unmanned control under a track environment through a corresponding track environment quickly and accurately. The finished automobile perception planning method specifically comprises the following contents:
1. Perception planning method of the unmanned formula racing car based on multi-sensor information fusion
The perception planning method of the unmanned formula racing car based on multi-sensor fusion is shown in fig. 1; the overall method takes the operation control unit as the key signal coordination center of the system. The unmanned sensor system reads the GPS/IMU, lidar and camera signals, and the vehicle-mounted receiving end, which consists of a wireless serial port, a TTL-to-RS232 module and a relay module, reads the 'GO' signal and the remote emergency stop signal from the remote control transmitting end. The operation control unit analyzes and processes the received signals and sends control signals, such as steering system off/activate, drive system off/activate, and brake system and emergency brake system disengage/activate, to the corresponding actuators, finally achieving safe and stable running of the racing car.
The overall perception planning strategy of the unmanned formula racing car based on multi-sensor information fusion takes the operation control unit as the key signal coordination center of the system and mainly comprises a lidar, a monocular camera, a GPS/IMU sensor and the operation unit controller hardware. Obstacle identification is completed from the initial lidar point cloud data through feature extraction, projection dimension reduction and other processing. A monocular camera identifies the track feature points based on color features to complete perception and recognition of the surrounding environment. Positioning accuracy is enhanced by fusing GPS and inertial sensor data, and the GPS/IMU integrated navigation is mainly used to judge the position and attitude of the unmanned formula racing car. The operation unit controller mainly comprises a main controller unit, a data processing unit and an accelerated image computation unit: a Micro AutoBox II is selected as the unmanned control main controller, an ARK-3520P industrial computer processes the radar point cloud data, and an NVIDIA TX2 accelerates the image computation. Because the operation control unit performs parallel computation on CPU and GPU, it can meet the real-time requirements of the perception data processing and planning algorithms.
Environmental perception algorithm based on laser radar point cloud obstacle recognition
The invention provides an environment perception algorithm based on point cloud obstacle identification, which comprises the following specific steps:
(1) writing a UDP (User Datagram Protocol) driver according to the data protocol of the Hesai Pandar40 lidar, and obtaining initial point cloud data of the lidar three-dimensional target scene with the driver;
(2) carrying out projection dimension reduction on the original three-dimensional lidar point cloud data: the three-dimensional target point cloud is orthographically projected to obtain a target surface projection image, and the projection is limited to the point cloud region of interest;
(3) searching for road edges with the Hough transform to obtain road candidate points, extracting lane line edges with a Canny edge detection operator to balance noise suppression and edge detection, filtering irrelevant regions and road surface point cloud data according to the road edges and road surface features, and extracting the point cloud data of the three-dimensional target;
(4) filtering discrete noise points according to the outlier characteristics and eliminating abnormal values in the radar point cloud data, with the median absolute deviation (MAD) method adopted as the criterion for judging outliers;
(5) clustering based on the DBSCAN algorithm, identifying the cone barrels, fitting the region boundary, finally establishing a local map for the unmanned formula racing car, and determining the passable area.
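Steps (2) and (4) can be sketched in Python with NumPy, assuming a simple rectangular bird's-eye region of interest; the function names and ROI bounds are illustrative, and the constant 1.4826 scales the MAD so the threshold behaves like k normal standard deviations.

```python
import numpy as np

def project_top_down(points, roi=((-10.0, 10.0), (0.0, 20.0))):
    """Orthographic (bird's-eye) projection: drop z and keep only the
    points that fall inside the rectangular region of interest."""
    (xmin, xmax), (ymin, ymax) = roi
    xy = points[:, :2]
    mask = (xy[:, 0] >= xmin) & (xy[:, 0] <= xmax) & \
           (xy[:, 1] >= ymin) & (xy[:, 1] <= ymax)
    return xy[mask]

def mad_filter(values, k=3.0):
    """Flag inliers: keep values whose absolute deviation from the median
    is at most k * 1.4826 * MAD (median absolute deviation)."""
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    if mad == 0:
        return np.ones(len(values), dtype=bool)
    return np.abs(values - med) <= k * 1.4826 * mad
```

On real Pandar40 frames the projection would typically be rasterized into a grid image rather than kept as a point list; the boolean mask returned by `mad_filter` can be applied per coordinate axis or to range values.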
Cone barrel detection algorithm based on the image-enhanced maximally stable extremal region (MSER) method and geometric features
Cone barrel identification is carried out based on machine vision, and a cone barrel detection method based on image-enhanced MSER and geometric features is provided, with the following specific steps:
(1) segmenting the cone barrel image based on color features, extracting a white segmentation map and a cone-barrel-color segmentation map, and fusing the two template images to obtain a binary image of the cone barrel;
(2) enhancing the color of the track cone barrels, binarizing the grayscale image with a threshold that is increased sequentially from 0 to 255, and sorting the pixel points;
(3) binarizing the image with different gray thresholds to obtain the most stable regions;
(4) inverting the original image, binarizing with a threshold again, detecting the white regions of the grayscale image, extracting the maximally stable extremal regions (MSER) to obtain the ROI of the cone barrel, and extracting the contour;
(5) carrying out cone barrel detection based on geometric features to remove the interference of complex backgrounds;
(6) resolving and locating the cone barrel coordinates based on monocular vision: the pixels where the cone barrel touches the ground are obtained, their world coordinate positions are taken as the world coordinates of the actual cone barrel, and the dimensions of the object at those pixels are judged.
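The threshold-sweep idea behind steps (2)-(5) can be illustrated, much simplified, as a single-region stability search, together with a bounding-box aspect-ratio test standing in for the geometric features; the function names, sweep parameters and aspect limits are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def stable_threshold_mask(gray, step=10, delta=5):
    """Sweep binarization thresholds from low to high and keep the mask
    whose area changes least under a +/-delta threshold shift: a one-region,
    image-level simplification of the MSER stability criterion."""
    best, best_var = None, float("inf")
    for t in range(step, 250, step):
        area = (gray >= t).sum()
        if area == 0:
            continue
        change = (gray >= t - delta).sum() - (gray >= t + delta).sum()
        var = change / area               # relative area variation
        if var < best_var:
            best_var, best = var, gray >= t
    return best

def cone_like(mask, min_aspect=0.8, max_aspect=3.0):
    """Geometric-feature check: a cone barrel's bounding box is roughly
    upright, so accept only plausible height/width ratios."""
    ys, xs = np.nonzero(mask)
    aspect = (ys.max() - ys.min() + 1) / (xs.max() - xs.min() + 1)
    return min_aspect <= aspect <= max_aspect
```

A full MSER detector (e.g. OpenCV's) tracks many extremal regions per image via connected components; this sketch only conveys why a region that survives a range of thresholds is considered stable.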
Lane marking line detection based on Hough transform
The invention provides a Hough-transform-based lane marking line detection method for the unmanned formula racing car which reduces noise interference and meets the real-time requirements of the unmanned racing car, with the following specific steps:
(1) preprocessing the captured video of the unmanned formula racetrack and extracting lane line edges with a Canny edge detection operator to balance noise suppression and edge detection;
(2) dividing the road surface area to generate a triangular adaptive region of interest and reduce the search area;
(3) dividing the preprocessed road image into left and right halves and applying the Hough transform to each half in turn to obtain the corresponding left and right lane lines;
(4) averaging the polar radii and polar angles of all straight lines in the left and right images respectively, outputting the two lane lines, and superimposing them onto the original image.
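Steps (3)-(4) can be sketched with a minimal Hough accumulator in Python with NumPy; as an illustrative simplification, each half-image returns the single dominant (rho, theta) peak rather than the average over several detected lines.

```python
import numpy as np

def hough_peak(edge, n_theta=180):
    """Minimal Hough transform: every edge pixel votes for all (rho, theta)
    lines through it; return the dominant line as (rho, theta in degrees)."""
    h, w = edge.shape
    thetas = np.deg2rad(np.arange(n_theta))
    diag = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edge)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return r - diag, np.rad2deg(thetas[t])

def detect_lane_lines(edge):
    """Split the preprocessed edge image into left/right halves, as in
    step (3), and detect one lane line per half (rho in half-local pixels)."""
    w = edge.shape[1]
    return hough_peak(edge[:, : w // 2]), hough_peak(edge[:, w // 2 :])
```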
Goal-biased bidirectional rapidly-exploring random tree (RRT) algorithm
The goal-biased bidirectional rapidly-exploring random tree algorithm improves the basic RRT algorithm with state space preprocessing, bidirectional search, goal bias, dynamic step length and path optimization, thereby realizing the search of the road space and an optimal path at the specified accuracy. The specific improvement strategies are as follows:
(1) state space preprocessing: the state space is preprocessed according to the dimensions of the unmanned formula racing car, and the edges of the state space are inflated to reserve a safety margin that prevents the car from colliding;
(2) bidirectional search: random trees are grown from both the initial point and the goal point, which improves the search efficiency and speed compared with a unidirectional random tree grown from the initial point only;
(3) goal bias: when a new random tree node is generated, it is expanded toward the goal with a certain probability, so the random tree continuously approaches the end point during expansion, effectively reducing the number of RRT iterations;
(4) dynamic step length: during the random tree search, a dynamic step length algorithm is adopted and new RRT nodes are expanded with a dynamically varying step length, so the search direction can be adjusted effectively when an obstacle is encountered and the obstacle avoidance capability is enhanced;
(5) path optimization: when the random tree finishes searching the road space, the algorithm backtracks over the tree nodes from the end point and, by judging whether the line between two nodes crosses an obstacle, removes redundant nodes and straightens the path, thereby obtaining a better path.
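Strategies (3) and (5) can be sketched in Python as below: a goal-biased RRT with backtracking path pruning in a 2-D world with circular obstacles. Bidirectional search and the dynamic step length are omitted for brevity, and all parameter values and the obstacle model are illustrative assumptions, not the patent's implementation.

```python
import math, random

def goal_biased_rrt(start, goal, obstacles, step=1.0, bias=0.2,
                    bounds=(0.0, 20.0), max_iter=5000, seed=1):
    """Goal-biased RRT: with probability `bias` the sampled point is the goal
    itself, so the tree steadily advances toward the end point."""
    rng = random.Random(seed)
    free = lambda p: all(math.dist(p, (ox, oy)) > r for ox, oy, r in obstacles)
    nodes, parent = [start], {0: None}
    for _ in range(max_iter):
        sample = goal if rng.random() < bias else \
            (rng.uniform(*bounds), rng.uniform(*bounds))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near, d = nodes[i], math.dist(nodes[i], sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < step:          # goal reached: backtrack
            path, k = [goal], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return prune(path[::-1], free)
    return None

def prune(path, free, n_checks=20):
    """Path optimization: drop intermediate nodes whenever the straight
    bypass segment is obstacle-free at n_checks sample points."""
    out, i = [path[0]], 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not all(
            free(((1 - t) * path[i][0] + t * path[j][0],
                  (1 - t) * path[i][1] + t * path[j][1]))
            for t in (k / n_checks for k in range(n_checks + 1))):
            j -= 1
        out.append(path[j])
        i = j
    return out
```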
Drive-by-wire chassis system of the unmanned formula racing car
The drive-by-wire chassis system of the unmanned formula racing car mainly comprises a drive-by-wire system, a steer-by-wire system and a brake-by-wire system. Compared with the control strategy of the manned mode, five control strategy states are added in the unmanned mode: unmanned system off, unmanned system ready, unmanned system driving, unmanned run complete, and emergency braking. This realizes the switching between the manned and unmanned modes. The working principle of the drive-by-wire chassis system is as follows:
(1) the drive-by-wire system takes the speed and torque computed by the operation unit as input quantities and sends them to the motor driver in the form of CAN messages to drive the motors;
(2) the steer-by-wire system takes the target steering angle computed by the operation unit as the input quantity to control steering: the main controller Micro AutoBox II switches the power supply circuit of the steering driver through a relay and controls the driver with PWM (pulse-width modulation) waves to drive the servo motor and accurately control the steering angle;
(3) the brake-by-wire system triggers the brake signal according to the computation result of the operation unit and controls the opening and closing of the relay to disable or enable the brake. The actuating components of the brake-by-wire system mainly comprise a carbon fiber high-pressure gas cylinder, a pressure reducing valve, a flow limiting valve, a solenoid valve and a pneumatic cylinder; the control logic of the brake-by-wire system is shown in figure 2.
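The CAN message of step (1) might be packed as follows; the 8-byte layout (little-endian uint16 speed in rpm, int16 torque in 0.1 Nm units, four reserved bytes) is a hypothetical example, since the actual motor driver defines its own frame format.

```python
import struct

def pack_drive_command(speed_rpm, torque_nm):
    """Pack speed and torque into an 8-byte CAN data field.
    Hypothetical layout: <uint16 rpm><int16 torque*10><4 reserved bytes>."""
    return struct.pack("<Hh4x", speed_rpm, int(round(torque_nm * 10)))

def unpack_drive_command(data):
    """Inverse of pack_drive_command; torque comes back in Nm."""
    rpm, torque10 = struct.unpack("<Hh4x", data)
    return rpm, torque10 / 10.0
```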
Beneficial effects:
The unmanned formula racing car adopts a dual-motor drive control scheme; multi-sensor information fusion based on multi-line lidar, machine vision and integrated navigation is combined with CPU and GPU accelerated computation to drive the low-level controller for vehicle driving, steering and braking, plan the driving path in real time, and finally realize unmanned control in the track environment. The invention mainly has the following advantages:
(1) the lidar point cloud obstacle target identification method based on projection dimension reduction realizes data filtering and classification; the projection dimension reduction effectively improves the real-time performance of target identification and increases the identification accuracy for targets in the surrounding environment. In addition, filtering discrete noise points according to the outlier characteristics provides strong immunity to noise, and the DBSCAN-based clustering algorithm is robust to changes in point cloud density;
(2) the image-enhanced MSER method and the geometric-feature cone barrel detection algorithm ensure the accuracy of cone barrel identification; the adopted MSER method is invariant to affine changes of image gray levels, is stable with respect to relative gray level changes over the supported region, and can detect and identify regions of different sizes;
(3) the Hough-transform-based lane marking line detection method generates a triangular adaptive region of interest from the road surface area and divides the road image into left and right halves, which effectively avoids noise interference, narrows the range for screening road boundary points, improves the efficiency of the lane line detection algorithm, and meets the real-time detection requirements of the vehicle;
(4) path planning is optimized by the goal-biased bidirectional rapidly-exploring random tree algorithm, improving the search efficiency and speed. Expanding the RRT with a dynamic step length enhances the obstacle avoidance capability of the racing car and effectively reduces the number of RRT iterations. By preprocessing the state space, a safety margin is reserved for the car, and the path is straightened during backtracking to obtain the optimal path for the racing car;
(5) the drive-by-wire chassis system of the unmanned formula racing car mainly comprises a drive-by-wire system, a steer-by-wire system and a brake-by-wire system, and the overall perception planning strategy takes the operation control unit as the key signal coordination center of the system, so switching between the manned and unmanned driving modes is easy to realize.
Drawings
FIG. 1 shows the perception planning method of the unmanned formula racing car based on multi-sensor fusion.
FIG. 2 is a control logic diagram of a brake-by-wire system.
Detailed Description
The unmanned formula racing car adopts a dual-motor drive control scheme, combines a CPU and a GPU for accelerated computation, plans the driving path in real time, and drives the low-level controller to realize vehicle driving, steering and braking. A redundant cone barrel detection algorithm ensures the accuracy of cone barrel identification, and a goal-biased bidirectional rapidly-exploring random tree algorithm optimizes the path planning, so that the track environment is recognized quickly and accurately and unmanned control under that track environment is realized. The complete-vehicle perception planning strategy specifically comprises the following contents:
1. Perception planning method of the unmanned formula racing car based on multi-sensor information fusion
The perception planning strategy of the unmanned formula racing car based on multi-sensor fusion is shown in fig. 1; the overall strategy takes the operation control unit as the key signal coordination center of the system. The unmanned sensor system reads the GPS/IMU, lidar and camera signals, and the vehicle-mounted receiving end reads the 'GO' signal and the remote emergency stop signal from the remote control transmitting end. The operation control unit analyzes and processes the received signals and sends control signals, such as steering system off/activate, drive system off/activate, and brake system and emergency brake system disengage/activate, to the corresponding actuators, finally achieving safe and stable operation of the unmanned formula racing car.
The vehicle-mounted receiving end works as follows. The remote control transmitting end encodes the 'GO' signal and the emergency stop signal and transmits them remotely through the wireless serial port. The wireless serial port receives and decodes the transmitted signals, executes the corresponding actions, and forwards the signals as TTL levels to the TTL-to-RS232 module, which converts them to RS232 form and passes them to the main controller MicroAutoBox II. After receiving a signal instruction, the main controller performs the corresponding processing and sends a level signal that switches the relay module, thereby determining whether the safety loop is disconnected. The remote control transmitting end is powered by a 24 V supply, and the vehicle-mounted receiving end by a 5 V supply.
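The encode/decode chain can be illustrated with a toy frame format; the 3-byte layout (header, command, XOR checksum) is a hypothetical assumption, as the text does not specify the transmitter's actual encoding.

```python
GO, ESTOP = 0x01, 0x02
HEADER = 0xAA

def encode_frame(command):
    """Hypothetical 3-byte frame: header byte, command byte, XOR checksum."""
    return bytes([HEADER, command, HEADER ^ command])

def decode_frame(frame):
    """Validate header and checksum; reject corrupted frames (return None)
    rather than acting on them, as a safety loop should."""
    if len(frame) != 3 or frame[0] != HEADER or frame[2] != frame[0] ^ frame[1]:
        return None
    return frame[1]
```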
Perception planning hardware system of the unmanned formula racing car
The overall perception planning strategy of the unmanned formula racing car based on multi-sensor information fusion takes the operation control unit as the key signal coordination center of the system and mainly comprises the following perception planning hardware:
(1) lidar. The lidar comprises a transmitting system, a receiving system and an information processing part; it detects objects by transmitting, reflecting and receiving visible and near-infrared light waves. The initial lidar point cloud data are obtained and processed by feature extraction, projection dimension reduction and other operations, obstacle identification is completed based on the lidar point cloud, and environment perception through three-dimensional lidar modeling is finally realized. Because the cone barrel track of the unmanned formula racing car demands high positioning accuracy, the high-vertical-resolution Hesai Pandar40 lidar is used as the main environment perception sensor; it is unaffected by ambient light, offers high ranging accuracy and a wide scanning range, and can obtain reliable three-dimensional contour information of the target. The selected lidar has a horizontal field of view of 360 degrees and a vertical field of view of -16 to 7 degrees; its horizontal angular resolution is 0.2 degrees at a 10 Hz scanning frequency and 0.4 degrees at 20 Hz, and it is powered by a 24 V supply;
(2) monocular camera. A monocular camera of model MANTA G-504C is selected, and the track feature points are identified based on color features. The maximum frame rate of the camera at full resolution is 9.2 fps, and the camera is powered by a 24 V supply;
(3) GPS/IMU sensor. GPS positioning updates accumulate no error, but the GPS update frequency is low, so the obtained position information has large errors and accurate real-time positioning is difficult when the vehicle drives fast. The inertial measurement unit (IMU) is a sensor that detects acceleration and rotational motion; it comprises a three-axis accelerometer and a three-axis gyroscope and has a high update frequency. The accelerometer measures three-dimensional acceleration, which is integrated to obtain the velocity and displacement of the object; the gyroscope measures rotational angular velocity, which is integrated to obtain the attitude of the object. Since the position and attitude information is obtained by integrating acceleration and angular velocity, the IMU update process accumulates error.
Since the unmanned formula racing car runs in a complex dynamic environment, GPS and inertial sensor data must be fused to enhance the positioning accuracy and improve the reliability and safety of the unmanned racing car. GPS/IMU integrated navigation is mainly used to judge the position and attitude of the unmanned formula racing car. The invention adopts a NovAtel PwrPak7 GPS/IMU sensor; the integrated navigation uses a GNSS network with a bandwidth of 2.046 MHz, is powered by a 24 V supply, and achieves an accuracy of 40 cm, meeting the accuracy requirements of the unmanned formula racing car.
(4) operation unit controller hardware. The operation unit controller mainly comprises a main controller unit, a data processing unit and an accelerated image computation unit: a Micro AutoBox II is selected as the unmanned control main controller, an ARK-3520P industrial computer processes the radar point cloud data, and an NVIDIA TX2 accelerates the image computation. With CPU and GPU parallel computation, the real-time requirements of the perception data processing and planning algorithms can be met; the operation control unit is powered by a 24 V supply.
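The GPS/IMU fusion described in (3) can be illustrated with a one-dimensional position-velocity Kalman filter that integrates high-rate IMU acceleration and corrects the accumulated drift with low-rate GPS fixes. All noise parameters are illustrative (the GPS sigma is chosen near the 40 cm accuracy quoted above); a real integrated navigation unit fuses full 3-D position and attitude.

```python
import numpy as np

def fuse_gps_imu(accel, gps_pos, dt=0.01, gps_every=100, q=0.1, r_gps=0.4):
    """1-D Kalman filter, state [position, velocity]: predict every IMU
    sample with the measured acceleration, correct on each GPS fix."""
    x = np.zeros(2)
    P = np.eye(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])
    out = []
    for k, a in enumerate(accel):
        x = F @ x + np.array([0.5 * dt**2, dt]) * a   # IMU prediction
        P = F @ P @ F.T + Q
        if k % gps_every == 0 and k // gps_every < len(gps_pos):
            z = gps_pos[k // gps_every]               # low-rate GPS fix
            S = (H @ P @ H.T)[0, 0] + r_gps**2
            K = (P @ H.T) / S                         # 2x1 Kalman gain
            x = x + K[:, 0] * (z - x[0])
            P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```

With zero measured acceleration but GPS fixes advancing 1 m per second, the filter infers the velocity through the position corrections and dead-reckons between fixes.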
Environment perception algorithm based on point cloud obstacle recognition
The invention provides an environment perception algorithm based on point cloud obstacle identification, which comprises the following specific steps:
(1) according to the UDP data output of the Hesai Pandar40 lidar, a UDP protocol driver is written, and the driver is used to obtain the initial point cloud data of the lidar's three-dimensional target scene;
(2) the original three-dimensional lidar point cloud is reduced in dimension by projection: the three-dimensional target point cloud is orthographically projected to obtain a projection image of the target surface, and the projection is restricted to the point-cloud region of interest;
(3) the Hough transform is used to search for road edges and obtain road candidate points, and a Canny edge detection operator is used to extract lane-line edges, balancing noise suppression against edge detection; irrelevant areas and road-surface point cloud data are filtered out according to the road edges and road-surface characteristics, and the point cloud data of the three-dimensional target are extracted;
(4) discrete noise points are filtered according to their outlier characteristics to eliminate abnormal values in the radar point cloud data. The MAD (median absolute deviation) method is adopted as the outlier criterion: first the median of all elements is calculated, then the absolute deviation of each element from that median, then the median of those absolute deviations; finally a reasonable range for the elements is determined, and values falling outside that range are adjusted;
(5) clustering is performed with the DBSCAN algorithm, a density-based method that identifies clusters of arbitrary shape: the density of the space around a point r is measured by the number of sample points in its neighborhood. The traffic cones are then identified, the boundary of the area is fitted, and finally a local map of the unmanned formula racing car is built to determine the drivable area.
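Step (4) above can be sketched as follows. The scale constant 1.4826 and the cutoff `k` are conventional choices, not values specified by the invention, and here out-of-range points are simply dropped rather than adjusted:

```python
import numpy as np

def mad_filter(values, k=3.0):
    """Flag outliers by the median-absolute-deviation rule of step (4).
    A point is kept when |x - median| <= k * 1.4826 * MAD."""
    med = np.median(values)                 # median of all elements
    abs_dev = np.abs(values - med)          # deviation from the median
    mad = np.median(abs_dev)                # median of those deviations
    # 1.4826 scales MAD to the standard deviation of a normal distribution
    keep = abs_dev <= k * 1.4826 * mad
    return values[keep]
```

Because the median is insensitive to extreme values, the MAD criterion is far more robust to stray lidar returns than a mean/standard-deviation rule.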
Maximally stable extremal regions (MSER) method based on image enhancement and traffic-cone detection algorithm based on geometric features
Traffic-cone recognition is performed with machine vision; an image-enhanced MSER method and geometric-feature-based cone detection are provided, with the following specific steps:
(1) the cone image is segmented based on color features: a white segmentation image and a cone-color segmentation image are extracted, and the two template images are fused to obtain a binary image of the cone;
(2) the track-cone color is enhanced, and the gray image (gray values 0-255) is binarized with a threshold that is increased step by step from 0 to 255, sorting the pixels;
(3) the image is binarized with the different gray thresholds to obtain the most stable regions, a maximally stable extremal region being a set of pixels whose gray values are consistently greater (or consistently smaller) than those of the pixels in the surrounding region;
(4) the original image is inverted and binarized again with a threshold, the white areas of the gray image are detected, the maximally stable extremal regions (MSER) are extracted to obtain the cone ROI, and the contour is extracted;
(5) cone detection is performed based on geometric features: color-based image segmentation is easily disturbed by background colors, so geometric features are used to remove the interference of a complex background;
(6) the cone coordinates are resolved and located with monocular vision: among the extracted cone pixels, the pixel points in contact with the ground are obtained, their world-coordinate positions are taken as the world coordinates of the actual cone, and the scale of the object at those pixels is judged.
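Step (6) relies on the fact that a pixel known to lie on the ground plane can be back-projected to a unique world point. The sketch below assumes a flat ground and a pinhole camera with zero pitch mounted at a known height; the intrinsic parameters and the height value are illustrative assumptions, not the calibration of the actual racing car:

```python
def cone_ground_position(u, v, fx, fy, cx, cy, cam_height):
    """Estimate the world position of a cone from the pixel (u, v) where
    it touches the ground, for a camera at height cam_height (metres)
    looking parallel to a flat ground plane.
    Returns (forward distance Z, lateral offset X) in metres."""
    if v <= cy:
        raise ValueError("ground-contact pixel must lie below the horizon")
    # Similar triangles: (v - cy) / fy = cam_height / Z
    Z = fy * cam_height / (v - cy)
    # Lateral offset from the horizontal pixel displacement
    X = (u - cx) * Z / fx
    return Z, X
```

Once the ground-contact distance Z is known, the apparent pixel height of the cone also gives its physical scale, which is how the dimension check of step (6) can be performed.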
Lane marking line detection based on Hough transform
The invention provides a Hough-transform-based lane marking line detection method for the unmanned formula racing car, which reduces noise interference and meets the real-time requirement of the unmanned racing car, with the following specific steps:
(1) the captured unmanned formula racetrack video is preprocessed, and a Canny edge detection operator is used to extract lane-line edges, balancing noise suppression against edge detection;
(2) the road-surface area is divided to generate a triangular adaptive region of interest, reducing the search area;
(3) the preprocessed road image is divided into left and right halves, and the Hough transform is applied to the left and right images in turn to obtain the corresponding left and right lane lines;
(4) the polar radii and polar angles of all lines in the left and right images are averaged respectively, the two lane lines are output, and the detected lane lines are superimposed onto the original image.
Target-biased bidirectional rapidly-exploring random tree (RRT) algorithm
The target-biased bidirectional rapidly-exploring random tree algorithm searches the road space with a continuously expanding random tree until a feasible path connecting the start and end points is found. On the basis of the basic RRT algorithm it is improved with target bias, dynamic step length, path optimization and similar methods, and by backtracking over the optimized path it searches the road space and finds the optimal path at a given precision. The specific improvement strategies are as follows:
(1) state-space preprocessing: the state space is preprocessed according to the dimensions of the driverless formula racing car; the edges of the state space are expanded and a safe margin is reserved to prevent the racing car from colliding;
(2) bidirectional search: random trees are grown from both the initial point and the target point; compared with a unidirectional random-tree search starting only from the initial point, this improves search efficiency and speed;
(3) target bias: when a new random-tree node is generated, the tree is expanded toward the target with a certain probability, so that the random tree continuously approaches the end point as it grows. If the previous expansion toward the target succeeded, the tree keeps expanding toward the end point when a new node is generated; if an obstacle is met, a random point is generated again; if no obstacle is ever met, expansion continues along the target direction until the end point is reached. This effectively reduces the number of RRT iterations;
(4) dynamic step length: during the random-tree search, a dynamic step-length algorithm is adopted, and new RRT nodes are expanded with a dynamically changing step. The variable step length gives the algorithm greater flexibility, allows the search direction to be adjusted effectively when an obstacle is encountered, improves search efficiency, and strengthens obstacle avoidance;
(5) path optimization: when the random tree finishes searching the road space, the algorithm backtracks over the tree nodes starting from the end point. Taking the node just added to the path as the base point, it computes the distances from all nodes to the starting point and adds the node closest to the start to the path, then continues backtracking from that point until the starting point is reached. Finally, by checking whether the line between nodes crosses an obstacle, redundant nodes are deleted and the path is straightened to obtain a better path.
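The target-bias strategy can be sketched as follows. This is a single-tree simplification in obstacle-free space, omitting the bidirectional search, dynamic step length and path optimization of the full algorithm; the bias probability, step and bounds are illustrative values:

```python
import math
import random

def rrt_target_bias(start, goal, step=1.0, bias=0.2,
                    bounds=(0.0, 20.0), max_iter=5000, seed=0):
    """Minimal RRT with target bias: with probability `bias` the tree
    is grown straight toward the goal instead of toward a uniformly
    random sample, which cuts the number of iterations."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iter):
        if rng.random() < bias:
            sample = goal                       # target-biased expansion
        else:
            sample = (rng.uniform(*bounds), rng.uniform(*bounds))
        # nearest existing node to the sample
        i = min(range(len(nodes)),
                key=lambda j: math.dist(nodes[j], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0.0:
            continue
        # extend by one fixed step toward the sample
        new = (nx + step * (sample[0] - nx) / d,
               ny + step * (sample[1] - ny) / d)
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) <= step:        # goal reached
            path, k = [goal], len(nodes) - 1
            while k is not None:                # backtrack to the start
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

In the full algorithm an obstacle check would reject `new` when the segment to it is blocked, and the same growth loop would run from both the start and the goal trees.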
Drive-by-wire chassis system of the driverless formula racing car
The drive-by-wire chassis system of the driverless formula racing car mainly comprises a drive-by-wire system, a steer-by-wire system and a brake-by-wire system. Compared with the control strategy of the manned mode, five control states are added in the unmanned mode: unmanned system off, unmanned system ready, unmanned system driving, unmanned mission finished, and emergency braking, realizing switching between the manned and unmanned modes. The working principle of the drive-by-wire chassis is as follows:
(1) the drive-by-wire system takes the speed and torque calculated by the computing unit as inputs and sends them to the motor driver as CAN messages to drive the motor;
(2) the steer-by-wire system takes the set steering angle calculated by the computing unit as the input for steering control. The main controller, Micro AutoBox II, switches the steering driver's power circuit through a relay and controls the steering driver by PWM to drive the servo motor, so that the steering angle is controlled accurately. The steering driver of the unmanned steering system has a continuous working torque of 2.16 N·m, a continuous working current of 700 mA, a reduction ratio of 1:1 and a rotation range of -90° to 90°; the steer-by-wire motor driver is powered by a 24 V supply, and the steering relay by a 5 V supply;
(3) the brake-by-wire system triggers a brake signal according to the target speed calculated by the computing unit, and controls the opening and closing of a relay to disable or enable the brake. The actuating components of the brake-by-wire system mainly comprise a carbon-fiber high-pressure gas cylinder, a pressure-reducing valve, a flow-limiting valve, solenoid valves and a pneumatic cylinder; the control logic of the brake-by-wire system is shown in figure 2. The working principle of the unmanned brake-by-wire system and of the switching between manned and unmanned braking modes is as follows:
① Working principle of the unmanned brake-by-wire system. The compressed air in the high-pressure gas cylinder passes through the pressure-reducing valve, which displays the gas pressure in the cylinder and is connected to the flow-limiting valve; the flow-limiting valve controls the output pressure and feeds the solenoid valve, a 24 V DC normally closed valve whose output is connected to the pneumatic cylinder, the end of which is linked to the brake pedal. During braking, 24 V DC is applied across the solenoid valve, its input and output are connected, and the pressure output by the flow-limiting valve drives the cylinder, completing the actuation of the brake pedal.
② Working principle of switching between the manned and unmanned braking modes. The emergency braking system realizes brake-system switching and unmanned braking through relays and solenoid valves. The micro-control module sets the states of five solenoid valves via three relays: two normally closed solenoid valves 8 switch the manned brake oil reservoir, two normally closed solenoid valves 7 switch the unmanned brake oil reservoir, and one normally open solenoid valve 6 switches the high-pressure gas cylinder. The relays switch the solenoid valves, and thus the oil reservoirs and the gas cylinder, so that the racing car can switch properly between the manned and unmanned braking states.
The brake driver of the brake-by-wire system has a reduction ratio of 4.5, a continuous working current of 700 mA and a continuous working speed of 10 cm/s. The brake driver and the solenoid valves of the brake/emergency-brake system are powered by a 24 V supply, the relays of the brake and emergency-brake systems by a 5 V supply, and the brake driver is actuated by level signals to perform braking.
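The five control states of the unmanned mode can be sketched as a small state machine. The event names and allowed transitions below are illustrative assumptions, not the actual Micro AutoBox II implementation:

```python
from enum import Enum, auto

class DvState(Enum):
    AS_OFF = auto()        # unmanned system off (manned mode)
    AS_READY = auto()      # unmanned system ready
    AS_DRIVING = auto()    # unmanned system driving
    AS_FINISHED = auto()   # unmanned mission finished
    AS_EMERGENCY = auto()  # emergency braking

# Allowed transitions, keyed by (current state, event).
TRANSITIONS = {
    (DvState.AS_OFF, "arm"): DvState.AS_READY,
    (DvState.AS_READY, "go"): DvState.AS_DRIVING,
    (DvState.AS_DRIVING, "mission_done"): DvState.AS_FINISHED,
    (DvState.AS_READY, "estop"): DvState.AS_EMERGENCY,
    (DvState.AS_DRIVING, "estop"): DvState.AS_EMERGENCY,
    (DvState.AS_FINISHED, "disarm"): DvState.AS_OFF,
}

def dv_step(state, event):
    """Return the next state; unknown events leave the state unchanged,
    so the emergency-braking state can only be left by a system reset."""
    return TRANSITIONS.get((state, event), state)
```

Encoding the mode logic as an explicit transition table makes it easy to verify that, for example, no event other than a reset can leave the emergency-braking state.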

Claims (5)

1. A perception planning method for an unmanned formula racing car based on multi-sensor information fusion, characterized in that an operation control unit serves as the signal coordination center: the GPS/IMU signal, lidar signal and camera signal are read from the sensor system, the remote-control transmitter signal is read through the on-board receiver, and after analysis and processing by the operation control unit, control signals are sent to the drive-by-wire chassis system, so that the racing car runs safely and stably; the specific sensing method comprises the following steps:
(1) performing obstacle identification based on the lidar point cloud to realize environment perception through lidar three-dimensional modeling;
(2) performing traffic-cone color recognition, detection and positioning based on an image-enhanced maximally stable extremal regions (MSER) method and a geometric-feature-based cone detection algorithm;
(3) detecting lane marking lines based on the Hough transform;
(4) performing road-space search and path optimization based on a target-biased bidirectional rapidly-exploring random tree (RRT) algorithm.
2. The perception planning method for the unmanned formula racing car based on multi-sensor information fusion according to claim 1, wherein the environment perception algorithm based on lidar point cloud obstacle identification comprises the following steps:
(1) writing a UDP protocol driver according to the lidar point cloud data, and using the driver to obtain the initial point cloud data of the lidar's three-dimensional target scene;
(2) reducing the dimension of the original three-dimensional lidar point cloud by projection: orthographically projecting the three-dimensional target point cloud to obtain a projection image of the target surface, and restricting the projection to the point-cloud region of interest;
(3) searching for road edges with the Hough transform to obtain road candidate points, extracting lane-line edges with a Canny edge detection operator to balance noise suppression and edge detection, filtering out irrelevant areas and road-surface point cloud data according to the road edges and road-surface characteristics, and extracting the point cloud data of the three-dimensional target;
(4) filtering discrete noise points using their outlier characteristics, eliminating abnormal values in the radar point cloud data, with the MAD method as the outlier criterion;
(5) clustering with the DBSCAN algorithm, identifying the traffic cones, fitting the boundary of the area, establishing a local map of the unmanned formula racing car, and determining the drivable area.
3. The perception planning method for the unmanned formula racing car based on multi-sensor information fusion according to claim 1, wherein the image-enhanced MSER method and the geometric-feature-based cone detection algorithm comprise the following steps:
(1) segmenting the cone image based on color features: extracting a white segmentation image and a cone-color segmentation image, and fusing the two template images to obtain a binary image of the cone;
(2) enhancing the track-cone color, binarizing the gray image with a threshold increased step by step from 0 to 255, and sorting the pixels;
(3) binarizing the image with the different gray thresholds to obtain the most stable regions;
(4) inverting the original image, binarizing again with a threshold, detecting the white areas of the gray image, extracting the maximally stable extremal regions (MSER) to obtain the cone ROI, and extracting the contour;
(5) performing cone detection based on geometric features to remove the interference of a complex background;
(6) resolving and locating the cone coordinates with monocular vision: obtaining, among the cone pixels, the pixel points in contact with the ground, taking their world-coordinate positions as the world coordinates of the actual cone, and judging the scale of the object at those pixels.
4. The perception planning method for the unmanned formula racing car based on multi-sensor information fusion according to claim 1, wherein the Hough-transform-based lane marking line detection method comprises the following steps:
(1) preprocessing the captured unmanned formula racetrack video, and extracting lane-line edges with a Canny edge detection operator to balance noise suppression and edge detection;
(2) dividing the road-surface area to generate a triangular adaptive region of interest, reducing the search area;
(3) dividing the preprocessed road image into left and right halves, and applying the Hough transform to the left and right images in turn to obtain the corresponding left and right lane lines;
(4) averaging the polar radii and polar angles of all lines in the left and right images respectively, outputting the two lane lines, and superimposing them onto the original image.
5. The perception planning method for the unmanned formula racing car based on multi-sensor information fusion according to claim 1, wherein the target-biased bidirectional rapidly-exploring random tree (RRT) algorithm comprises the following steps:
(1) state-space preprocessing: preprocessing the state space according to the dimensions of the driverless formula racing car, expanding the edges of the state space, and reserving a safe margin to prevent the racing car from colliding;
(2) bidirectional search: growing random trees from both the initial point and the target point;
(3) target bias: when a new random-tree node is generated, expanding toward the target with a certain probability so that the random tree continuously approaches the end point, reducing the number of RRT iterations;
(4) dynamic step length: during the random-tree search, adopting a dynamic step-length algorithm and expanding new RRT nodes with a dynamically changing step, so that the search direction can be adjusted effectively when an obstacle is encountered and obstacle avoidance is strengthened;
(5) path optimization: when the random tree finishes searching the road space, backtracking over the tree nodes from the end point, and, by checking whether the line between nodes crosses an obstacle, removing redundant nodes and straightening the path to obtain a better path.
CN202010849407.7A 2020-08-21 2020-08-21 Unmanned formula racing car perception planning method based on multi-sensor information fusion Active CN112101128B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010849407.7A CN112101128B (en) 2020-08-21 2020-08-21 Unmanned formula racing car perception planning method based on multi-sensor information fusion


Publications (2)

Publication Number Publication Date
CN112101128A CN112101128A (en) 2020-12-18
CN112101128B true CN112101128B (en) 2021-06-22

Family

ID=73753328

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010849407.7A Active CN112101128B (en) 2020-08-21 2020-08-21 Unmanned formula racing car perception planning method based on multi-sensor information fusion

Country Status (1)

Country Link
CN (1) CN112101128B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112578673B (en) * 2020-12-25 2022-08-02 浙江科技学院 Perception decision and tracking control method for multi-sensor fusion of formula-free racing car
CN113156932A (en) * 2020-12-29 2021-07-23 上海市东方海事工程技术有限公司 Obstacle avoidance control method and system for rail flaw detection vehicle
CN112562093B (en) * 2021-03-01 2021-05-18 湖北亿咖通科技有限公司 Object detection method, electronic medium, and computer storage medium
CN113467480B (en) * 2021-08-09 2024-02-13 广东工业大学 Global path planning algorithm for unmanned equation
CN113655498B (en) * 2021-08-10 2023-07-18 合肥工业大学 Method and system for extracting cone barrel information in racetrack based on laser radar
CN113665591B (en) * 2021-09-28 2023-07-11 上海焱眼鑫睛智能科技有限公司 Unmanned control method, unmanned control device, unmanned control equipment and unmanned control medium
CN113888621B (en) * 2021-09-29 2022-08-26 中科海微(北京)科技有限公司 Loading rate determining method, loading rate determining device, edge computing server and storage medium
CN114898319B (en) * 2022-05-25 2024-04-02 山东大学 Vehicle type recognition method and system based on multi-sensor decision level information fusion

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103852265B (en) * 2014-03-27 2016-07-06 北京联合大学 A kind of automatic driving vehicle environment subitem Performance Test System and method of testing
US10733661B1 (en) * 2015-05-22 2020-08-04 Walgreen Co. Automatic mapping of store layout using soft object recognition
CN106950964B (en) * 2017-04-26 2020-03-24 北京理工大学 Unmanned electric university student formula racing car and control method thereof
CN108009548A (en) * 2018-01-09 2018-05-08 贵州大学 A kind of Intelligent road sign recognition methods and system
CN108519773B (en) * 2018-03-07 2020-01-14 西安交通大学 Path planning method for unmanned vehicle in structured environment
CN108664968B (en) * 2018-04-18 2020-07-07 江南大学 Unsupervised text positioning method based on text selection model
KR102014097B1 (en) * 2019-01-16 2019-08-26 주식회사 나노시스템즈 calibration system of scanner and camera
CN110379178B (en) * 2019-07-25 2021-11-02 电子科技大学 Intelligent unmanned automobile parking method based on millimeter wave radar imaging
CN110780305B (en) * 2019-10-18 2023-04-21 华南理工大学 Track cone detection and target point tracking method based on multi-line laser radar
CN111376970A (en) * 2020-04-22 2020-07-07 大连理工大学 Automatic and manual auto-change over device of unmanned equation motorcycle race a steering system


Similar Documents

Publication Publication Date Title
CN112101128B (en) Unmanned formula racing car perception planning method based on multi-sensor information fusion
US11593950B2 (en) System and method for movement detection
CN108983781B (en) Environment detection method in unmanned vehicle target search system
Barth et al. Where will the oncoming vehicle be the next second?
CN109166140B (en) Vehicle motion track estimation method and system based on multi-line laser radar
Li et al. Springrobot: A prototype autonomous vehicle and its algorithms for lane detection
Cai et al. Vision-based trajectory planning via imitation learning for autonomous vehicles
CN111551957B (en) Park low-speed automatic cruise and emergency braking system based on laser radar sensing
CN107422730A (en) The AGV transportation systems of view-based access control model guiding and its driving control method
CN111788102A (en) Odometer system and method for tracking traffic lights
CA3086261A1 (en) Vehicle tracking
CN101701828A (en) Blind autonomous navigation method based on stereoscopic vision and information fusion
CN113071518B (en) Automatic unmanned driving method, minibus, electronic equipment and storage medium
Aldibaja et al. LIDAR-data accumulation strategy to generate high definition maps for autonomous vehicles
CN114998276B (en) Robot dynamic obstacle real-time detection method based on three-dimensional point cloud
Fries et al. Autonomous convoy driving by night: The vehicle tracking system
Wang et al. Map-enhanced ego-lane detection in the missing feature scenarios
Jun et al. Autonomous driving system design for formula student driverless racecar
Chetan et al. An overview of recent progress of lane detection for autonomous driving
CN115923839A (en) Vehicle path planning method
CN114120075A (en) Three-dimensional target detection method integrating monocular camera and laser radar
CN114383598B (en) Tunnel construction operation car and automatic driving system thereof
Tsukiyama Global navigation system with RFID tags
US20220297696A1 (en) Moving object control device, moving object control method, and storage medium
CN114954525A (en) Unmanned transport vehicle system suitable for phosphorite mining roadway and operation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant