WO2022082843A1 - Multi-sensor integrated unmanned vehicle detection and obstacle avoidance system and obstacle avoidance method - Google Patents
- Publication number
- WO2022082843A1 (application PCT/CN2020/124842; priority CN2020124842W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- unmanned vehicle
- obstacle
- obstacle avoidance
- line
- arm
- Prior art date
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—… with means for defining a desired trajectory
- G05D1/0214—… in accordance with safety or protection criteria, e.g. avoiding hazardous areas
- G05D1/0221—… involving a learning process
- G05D1/0223—… involving speed control of the vehicle
- G05D1/0225—… involving docking at a fixed facility, e.g. base station or loading bay
- G05D1/0231—… using optical position detecting means
- G05D1/0238—… using obstacle or wall sensors
- G05D1/024—… using obstacle or wall sensors in combination with a laser
- G05D1/0246—… using a video camera in combination with image processing means
- G05D1/0255—… using acoustic signals, e.g. ultrasonic signals
- G05D1/0257—… using a radar
- G05D1/0276—… using signals provided by a source external to the vehicle
Definitions
- the invention belongs to the technical field of unmanned driving, and in particular relates to a multi-sensor fusion unmanned vehicle detection and obstacle avoidance system and an obstacle avoidance method.
- a driverless car is a smart car that senses the road environment through the on-board sensing system, automatically plans the driving route and controls the vehicle to reach the predetermined target. It uses on-board sensors to perceive the surrounding environment of the vehicle, and controls the steering and speed of the vehicle according to the road, vehicle position and obstacle information obtained by the perception, so that the vehicle can drive on the road safely and reliably.
- The driverless car integrates technologies such as automatic control, architecture, artificial intelligence, and visual computing. It is a product of highly developed computer science, pattern recognition, and intelligent control technology, is an important indicator of a country's scientific research strength and industrial level, and has broad application prospects in national defense and the national economy.
- ranging sensors include: ultrasonic ranging sensors, infrared ranging sensors, CCD vision systems, millimeter-wave radar, microwave radar, and lidar, etc.
- Lidar is in fact a radar working in the optical band (a special band). Lidar is an active detection method that does not depend on external lighting conditions or on the radiation characteristics of the target itself: it emits its own laser beam and obtains target information by detecting the echo signal of that beam.
- Because the laser wavelength is short, a beam with a very small divergence angle can be emitted; the multi-path effect is small, and low-altitude/ultra-low-altitude targets can be detected.
- Single-line lidar is a kind of lidar.
- Because a single-line lidar has only one transmitter and one receiver, its structure is relatively simple and it is convenient to use. Its scanning period is short, it scans the environment in the forward direction quickly, and its angular resolution is high; the radar itself is small, relatively light, low in power consumption, highly reliable, and relatively low in cost. A single-line lidar also has a relatively wide detection range and provides a large amount of environmental scan-point distance information, which greatly facilitates control decisions. These advantages make single-line lidar a preferred choice for unmanned vehicles perceiving unknown environments.
- the general structure of a simple unmanned vehicle is shown in Figure 1, and the principle of the detection and obstacle avoidance system is shown in Figure 2.
- The environment is detected by the unmanned vehicle's lidar sensor system and the data are sent to the PC; the PC encodes and processes them and sends control commands to the lower computer, which is based on a single-chip microcomputer; after communication decoding, the single-chip control module sends control commands to the DC brushless motors.
- The controller drives multiple DC brushless motors; the single-chip control system adjusts motor speed according to changes in the surrounding environment and thereby controls the position of the unmanned vehicle in the actual environment, realizing operation under actual working conditions.
- Existing simple unmanned vehicle control systems all realize the above functions with a single single-chip microcomputer controlling a single single-line lidar sensor or a multi-line lidar sensor.
- Existing unmanned vehicles have many problems in long-term operation, mainly including:
- the data obtained by a single-line lidar are 2D, so information such as target height cannot be distinguished; some small objects are ignored and eventually become obstacles;
- the navigation of the single-line lidar sensor has become a bottleneck in the vehicle field;
- a single single-line lidar cannot obtain road information, and needs to cooperate with other sensors to read and discriminate the ground information;
- multi-line lidar can produce 2.5D or 3D data, judge the height of obstacles, and process ground information, but it is relatively expensive;
- the price of a 64-beam lidar is as high as 700,000 yuan, which prevents its widespread use;
- a single single-line lidar cannot detect information such as corners, road cliffs, etc., and needs to be used with other sensors to read the surrounding obstacle signals or positioning sensor signs;
- existing unmanned vehicles basically consider only forward detection and obstacle avoidance, and do not consider information about obstacles behind, so they cannot accelerate to avoid an obstacle approaching from the rear;
- a single-line-lidar-based unmanned vehicle also has detection blind spots during actual driving, and once an obstacle enters a blind spot during motion, a traffic accident can occur;
- the unmanned vehicle based on the single-line lidar has a slow acquisition speed of the road image ahead, which affects the rapid travel of the unmanned vehicle;
- the detection range of lidars with relatively high cost performance is generally less than 100 meters, which is not conducive to obstacle judgment when the unmanned vehicle travels quickly.
- The principle and structure of the vision sensor are similar to human sensory organs, and the vision sensor has the advantages of small size, low cost, convenient installation, good concealment, wide detection range and a large amount of information.
- Adding a camera to the unmanned vehicle environment detection system can sense the surrounding environment in real time, collect data, and identify, detect and track static and dynamic objects.
- This helps the controller sense possible dangers, effectively increasing driving comfort and safety. However, lidar and vision sensors have the following disadvantages:
- the detection distance of the vision-based sensor system is improved compared with a single-line lidar, but it is still not sufficient for the high-speed driving of the unmanned vehicle;
- CCD-based monocular-vision obstacle identification requires a feature library with a large data capacity; once an object has no feature-library data to match it, the obstacle cannot be distinguished and the distance to the target cannot be accurately estimated, which is not conducive to the high-speed driving of unmanned vehicles;
- Microwaves are radio waves with very short wavelengths.
- the directionality of microwaves is very good, and the speed is equal to the speed of light.
- the microwave radar measures the distance of obstacles according to the round-trip time of electromagnetic waves.
- Compared with optical guidance such as infrared and laser, microwaves have a strong ability to penetrate fog, smoke and dust, and work all day in all weather (except heavy rain).
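The ranging principle stated above (distance from the round-trip time of the electromagnetic wave) reduces to one line; the sketch below uses illustrative names that are not from the patent.

```python
# Microwave-radar ranging: the wave travels to the obstacle and back at
# the speed of light, so the one-way distance is half the round trip.

C = 299_792_458.0  # propagation speed of microwaves (speed of light), m/s

def radar_distance_m(round_trip_time_s: float) -> float:
    """Obstacle distance from the echo's round-trip time."""
    return C * round_trip_time_s / 2.0
```

For example, an echo returning after about one microsecond corresponds to an obstacle roughly 150 m away.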
- The present invention provides a multi-sensor fusion unmanned vehicle detection and obstacle avoidance system and obstacle avoidance method, so that the unmanned vehicle can detect obstacles in a complex environment from a distance, in all weather, and quickly and effectively avoid them, thereby improving the safety of unmanned vehicles when driving at high speed.
- the present invention achieves the above technical purpose through the following technical means.
- a multi-sensor fusion method for unmanned vehicle detection and obstacle avoidance specifically:
- The on-board computer NUC retrieves the driving path and navigation map information of the unmanned vehicle through the control station; when the blind-spot ultrasonic sensor groups determine that there are no obstacles in the blind spots, the ARM+FPGA controller lets the unmanned vehicle start to accelerate automatically.
- At the moment the unmanned vehicle starts, it enters a working-condition selection mode according to the weather: if the weather is good, the microwave radar, single-line lidar sensors and CCD cameras all work; the CCD cameras and microwave radar transmit long-distance obstacle information to the ARM+FPGA controller, and the single-line lidar sensors transmit short-range obstacle information to the NUC.
- The obstacle information is used as the feedback distance signal for the autonomous navigation of the unmanned vehicle; if the weather is bad, only the microwave radar works, and the unmanned vehicle slows down and navigates autonomously based on its feedback distance signal;
- The ARM+FPGA adjusts the position and posture of the unmanned vehicle before normal driving according to the received road marking points, drives normally according to the on-board map information, and implements obstacle avoidance as the vehicle approaches a long-distance obstacle in the forward direction; if the weather is bad, the unmanned vehicle avoids obstacles at medium and long range.
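The working-condition selection at start-up can be sketched as a single branch; the function name and dictionary keys below are my own shorthand, not terms from the patent.

```python
def select_working_condition(weather_good: bool) -> dict:
    """Working-condition selection at the moment the vehicle starts.

    Good weather: microwave radar, single-line lidar and CCD cameras all
    work, and the vehicle may run at normal speed. Bad weather: only the
    microwave radar (which penetrates fog, smoke and dust) works, and the
    vehicle slows down and navigates on its feedback distance signal.
    """
    if weather_good:
        return {"microwave_radar": True, "lidar": True, "ccd": True,
                "speed": "normal"}
    return {"microwave_radar": True, "lidar": False, "ccd": False,
            "speed": "reduced"}
```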
- the microwave radar at the front and the top of the unmanned vehicle cooperates with the single-line laser radar to avoid obstacles.
- The microwave radar MR1 and the single-line lidar L1 cooperate to detect the road ahead: MR1 first conducts medium- and long-range detection, and the single-line lidar L1 then further confirms the depth and width of any undulations.
- Microwave radars MR1 and MR2 cooperate with single-line lidars L1 and L3 to determine whether there is an obstacle straight ahead: MR2 first performs medium- and long-range detection, and after a suspected obstacle is found, MR1 confirms it.
- Microwave radars MR1 and MR2 cooperate with single-line lidars L3 and L2 to determine whether there is an obstacle to the front left: MR2 first performs medium- and long-range detection; after a suspected obstacle is found, MR1 makes a second confirmation; once the suspected obstacle is roughly located, single-line lidars L3 and L2 confirm its precise position.
- Microwave radars MR1 and MR2 cooperate with single-line lidars L3 and L4 to determine whether there is an obstacle to the front right: MR2 first performs medium- and long-range detection, and MR1 makes a second confirmation after a suspected obstacle is found.
- Single-line lidars L3 and L4 are then used for precise position confirmation.
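The cascaded confirmation (long-range radar raises a suspect, a second radar confirms, a single-line lidar pins down the position) can be sketched as follows; this is a hypothetical simplification, since the patent gives no code, and the state names are mine.

```python
def confirm_obstacle(mr2_detects: bool, mr1_confirms: bool,
                     lidar_locates: bool) -> str:
    """Cascaded confirmation: MR2 raises a suspect at medium/long range,
    MR1 gives a second confirmation, and the single-line lidar then
    provides precise position confirmation."""
    if not mr2_detects:
        return "clear"          # no suspect at medium/long range
    if not mr1_confirms:
        return "unconfirmed"    # MR1 could not verify the suspect
    if lidar_locates:
        return "located"        # lidar pinned down the exact position
    return "confirmed"          # radar-confirmed, still outside lidar range
```

Each later stage is cheaper to trust but shorter in range, so the cascade filters false alarms before committing to an avoidance maneuver.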
- Once single-line lidar L3 detects obstacle information, the vehicle enters the single-line lidar precise positioning and navigation mode:
- If single-line lidars L3 and L1 detect undulations in the road ahead whose height and width exceed what the unmanned vehicle can cross, the vehicle performs forward avoidance protection; if the undulations are within the crossable range, it drives at the set normal speed;
- if an obstacle is confirmed straight ahead, the unmanned vehicle performs emergency obstacle avoidance to the left or right; if there is no obstacle, it accelerates to the set normal speed;
- if an obstacle is confirmed to the front left, the unmanned vehicle performs emergency obstacle avoidance to the right; if there is no obstacle, it accelerates to the set normal speed;
- if an obstacle is confirmed to the front right, the unmanned vehicle performs emergency obstacle avoidance to the left; if there is no obstacle, it accelerates to the set normal speed.
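The emergency-avoidance rules above can be collected into one decision function. This is a sketch under my own naming; the patent does not specify how a simultaneous front and side obstacle is resolved, so the tie-break (swerve away from the occupied side) is an assumption.

```python
def emergency_avoidance(front: bool, front_left: bool,
                        front_right: bool) -> str:
    """Map occupied sectors to the maneuver described in the method:
    obstacle ahead -> swerve left or right (away from any occupied side);
    obstacle front-left -> swerve right; obstacle front-right -> swerve
    left; no obstacle -> accelerate to the set normal speed."""
    if front:
        return "avoid_left" if front_right else "avoid_right"
    if front_left:
        return "avoid_right"
    if front_right:
        return "avoid_left"
    return "accelerate_to_normal"
```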
- the single-line laser radar and microwave radar behind the unmanned vehicle always detect the environment behind it, and if it is judged that there is an obstacle in the rear approaching the unmanned vehicle, the rear obstacle avoidance protection is performed.
- the front blind area ultrasonic sensor group and the rear blind area ultrasonic sensor group always detect the environment of the blind area, and if it is judged that a temporary obstacle is approaching the blind area of the unmanned vehicle, the blind area obstacle avoidance protection is performed.
- the CCD camera reads various navigation signs on both sides of the moving direction, and after processing by the ARM+FPGA controller, it is used as the navigation signs for the high-speed unmanned vehicle running.
- the CCD camera reads the sign of the site where the unmanned vehicle arrives, so as to realize the automatic walking, position tracking and scheduling of the unmanned vehicle.
- A multi-sensor fusion detection and obstacle avoidance system for unmanned vehicles includes multiple single-line lidars, dual CCD cameras, microwave radars, a front blind-area ultrasonic sensor group, a rear blind-area ultrasonic sensor group, and a three-core controller based on ARM+FPGA+NUC.
- The multiple single-line lidars communicate with the NUC; the dual CCD cameras, the microwave radars, and the front and rear blind-area ultrasonic sensor groups all communicate with the ARM+FPGA controller; and the NUC communicates with the ARM+FPGA controller.
- the plurality of single-line laser radars include a single-line laser radar L1 arranged on the roof of the unmanned vehicle and having an included angle with the horizontal plane of ⁇ , where ⁇ is 5 to 15°.
- the single-line laser radar further includes a single-line laser radar group arranged in front of the unmanned vehicle and a single-line laser radar group arranged behind the unmanned vehicle, and microwave radars are arranged between the single-line laser radar groups.
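The tilt angle θ of roof lidar L1 fixes how far ahead its beam meets flat ground. Assuming the tilt is downward toward the road (the mounting height and sample angles below are illustrative, not values from the patent), the geometry is:

```python
import math

def ground_hit_distance_m(mount_height_m: float, tilt_deg: float) -> float:
    """Horizontal distance at which a beam tilted tilt_deg below the
    horizontal, mounted mount_height_m above the road, hits flat ground."""
    return mount_height_m / math.tan(math.radians(tilt_deg))
```

With a hypothetical roof height of 1.8 m, θ = 15° scans the road about 6.7 m ahead while θ = 5° reaches about 20.6 m, so the 5–15° band trades near-field coverage against look-ahead distance.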
- The data fusion of the unmanned vehicle's multiple single-line lidars is handled by the NUC, which keeps the control relatively simple, greatly improves operation speed, eliminates the bottleneck of slow single-ARM software computation, shortens the development cycle, and makes the program highly portable.
- the present invention completely realizes the single-board control of the unmanned vehicle, saves the space occupied by the control board, and also realizes the effective detection and obstacle avoidance of multiple independent areas of the unmanned vehicle, which is beneficial to improve the stability of the unmanned vehicle system performance and dynamic performance.
- The controller in the present invention uses the NUC to process the data and algorithms of multiple single-line lidar sensors and fully considers surrounding interference sources; the ARM is relieved of this heavy workload, which effectively prevents the main motion-control program from "running away" and greatly enhances the anti-interference ability of the unmanned vehicle.
- the synchronous acquisition system inside the FPGA of the present invention ensures the synchronization of the data acquisition of the dual CCD cameras, and ensures the accuracy of the subsequent distance calculation.
- The controller in the present invention uses the FPGA to process the large amount of binocular-vision image data and fully considers surrounding interference sources; the ARM is freed from the heavy image-processing work, which not only improves operation speed but also effectively prevents the "runaway" of the main motion-control program, greatly enhancing the anti-interference ability of the unmanned vehicle.
- The image-acquisition range of the CCD camera in the present invention is farther than the detection distance of an economical single-line lidar, so the obstacle detection range of the unmanned vehicle is wider; the microwave radar fills the blank area between the detection distances of the CCD camera and the single-line lidar, which aids obstacle tracking and distance determination, facilitates acceleration and deceleration, and improves the dynamic performance of the unmanned vehicle.
- Binocular vision is not limited by a recognition rate: the parallax principle is used to directly measure all obstacles, and the measurement accuracy is higher than that of monocular vision, so the distance to an obstacle can be estimated more accurately and an obstacle-avoidance warning can be issued in advance.
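For rectified stereo cameras, the parallax principle referred to above reduces to the standard one-line depth formula Z = f·B/d; the symbols and sample numbers below are the conventional ones, not values from the patent.

```python
def stereo_depth_m(focal_px: float, baseline_m: float,
                   disparity_px: float) -> float:
    """Depth from binocular disparity: Z = f * B / d, where f is the
    focal length in pixels, B the baseline between the two CCD cameras,
    and d the horizontal pixel shift of the same point between images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, f = 800 px and B = 0.3 m give Z = 10 m for a 24-px disparity; nearer obstacles produce larger disparities, which is why accuracy improves as an obstacle approaches.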
- The CCD camera in the present invention can effectively detect obstacles protruding from the ground around the running direction of the high-speed unmanned vehicle, which not only improves the accuracy of obstacle avoidance but also provides accurate positioning for unmanned vehicle navigation.
- The CCD camera in the present invention can effectively distinguish road signs such as lane detection lines and straight-ahead and turning signs in regular traffic; the unmanned vehicle can rely on these signs to correct its own position and attitude, which improves the stability and accuracy of autonomous navigation when the unmanned vehicle runs freely.
- The CCD camera in the present invention can effectively distinguish traffic signals such as green, yellow and red lights in regular traffic; the unmanned vehicle can adjust its own speed according to this information to meet the needs of fast driving and fast stopping, ensuring the safety and stability of the unmanned vehicle when it travels freely.
- The single-line lidar L1 sits at a certain angle to the ground. This angle helps L1 further accurately locate undulations in the moving road surface found by the CCD camera, preventing deep pits caused by road damage from affecting the unmanned vehicle's normal driving.
- The angle also helps L1 accurately locate small obstacles temporarily left on the moving road and found by the CCD camera, giving the unmanned vehicle control system a second notification so that emergency avoidance can be realized and normal driving ensured.
- The fusion of multiple single-line lidars and ultrasonic sensors at the front can accurately locate obstacles found by the CCD camera and give the unmanned vehicle control system a second notification to achieve avoidance, which helps improve the speed and safety of unmanned vehicles.
- Multiple single-line lidars and microwave radars are integrated at the front. Since the directions of the single-line lidars and the microwave radars intersect, cylindrical objects on both sides found by the CCD camera can be accurately detected, which provides some help for the forward positioning of the unmanned vehicle.
- Since the directions of the single-line lidars and microwave radars at the front intersect, the free areas on both sides found by the CCD camera can also be accurately detected, which provides some assistance when the unmanned vehicle turns to avoid an obstacle.
- The fusion of multiple single-line lidars and microwave radars at the rear can effectively detect the distance between the unmanned vehicle and moving obstacles behind it; with the help of the controller, the unmanned vehicle can speed up and escape the dangerous area, protecting the vehicle body.
- The front blind-spot detection and obstacle avoidance system in the present invention can effectively eliminate the close-range blind spot that occurs when the unmanned vehicle just starts to accelerate forward, improving safety and reliability at start-up; it can also effectively eliminate the short-range blind spots that appear in real time during normal driving, further improving the safety and reliability of the unmanned vehicle.
- The rear blind-spot detection and obstacle avoidance system in the present invention can effectively eliminate the short-range blind spot that occurs when the unmanned vehicle just starts reversing, improving safety and reliability when reversing; it can also effectively eliminate the short-range blind spots that appear in real time while the vehicle is reversing, further improving the safety and reliability of the unmanned vehicle.
- the microwave radar in the present invention is activated to detect the forward environment at medium and long distances, and is used for navigation when the laser radar is subject to interference, which helps improve the safety of the unmanned vehicle in harsh environments.
- a site sensor with a certain degree of redundancy is added, which not only aids the positioning of the unmanned vehicle but also helps the terminal track it.
- the binocular vision in the present invention does not require the ARM controller to compare a huge sample feature library against the collected images, which directly avoids the identification failures caused by a missing data feature library and improves the safety of the high-speed unmanned vehicle.
- the CCD camera in the present invention can transmit on-site images to the control station through a wireless device when the unmanned vehicle encounters an emergency, and the control station pre-judges the situation and formulates a plan for emergency treatment.
- Figure 1 is a two-dimensional structural diagram of an ordinary simple unmanned vehicle;
- Figure 2 is a schematic diagram of an ordinary unmanned vehicle detection and obstacle avoidance system;
- Figure 3 is a schematic diagram of the image-processing connection between the ARM and FPGA of the present invention;
- Figure 4 is a two-dimensional structural diagram of a multi-sensor fusion unmanned vehicle;
- Figure 5 is a two-dimensional structural diagram of the arrangement of the front multi-element radar group;
- Figure 6 is a two-dimensional structural diagram of the binocular-vision CCD black-and-white camera arrangement;
- Figure 7 is a two-dimensional structural diagram of the arrangement of the ultrasonic sensor group in the front blind zone;
- Figure 8 is a two-dimensional structural diagram of the arrangement of the rear multi-radar group and the rear blind-zone ultrasonic sensor group;
- Figure 9 is a schematic diagram of the multi-sensor fusion unmanned vehicle detection and obstacle avoidance system;
- Figure 10 is a schematic diagram of the operation of the multi-sensor fusion unmanned vehicle;
- Figure 11 is the acceleration and deceleration curve of unmanned vehicle operation.
- SICK's lidar adopts the mature laser time-of-flight principle and multiple-echo technology for non-contact detection; various graphic protection areas can be configured according to the needs of the scene and the graphics can be modified simply at any time.
- the sensor has reliable anti-interference performance through internal filtering and multiple echo technology.
- LMS151 and LMS122 are new high-performance lidars from SICK, both aimed at short-range detection: for objects with 10% reflectivity, the LMS151 series can detect up to 50 meters, and the LMS122 up to 20 meters.
- the present invention uses an LMS1xx-series lidar group to form the unmanned vehicle's short-range front and rear obstacle detection and protection system: an LMS151-10100 single-line lidar L1 is mounted slightly above the roof at the center of the front of the roof, pointing diagonally downward at an included angle α (5~15°) with the horizontal plane (Figures 4 and 5); together with a group of LMS151-10100 single-line lidars FLT (usually 3 units: L2, L3 and L4; see Figures 4 and 5), mounted about 40 cm above the ground and parallel to the horizontal plane, this constitutes an accurate forward short-range detection and obstacle avoidance system, in which L2 and L4 are located at the front left and front right of the vehicle respectively, and L3 is located midway between L2 and L4 with its center direction consistent with the direction of motion;
- the present invention adopts a group of LMS122-10100 single-line laser radars BLT (generally 2 units: L5 and L6; see Figures 4 and 8), mounted about 40-60 cm above the ground and parallel to the horizontal plane, to form the rear detection and protection system of the unmanned vehicle.
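Purely as an illustrative sketch (not part of the patent): for the downward-tilted lidar L1, the tilt angle and mounting height together determine roughly how far ahead the beam meets flat ground, via a simple right-triangle model. The mounting height used below is an assumed example value, not a figure from the specification:

```python
import math

def ground_intersection_distance(mount_height_m: float, tilt_deg: float) -> float:
    """Horizontal distance at which a downward-tilted lidar beam
    intersects flat ground (simple right-triangle model)."""
    if tilt_deg <= 0:
        raise ValueError("beam must be tilted below the horizontal")
    return mount_height_m / math.tan(math.radians(tilt_deg))

# Illustrative values only: a roof-mounted L1 at ~1.8 m, tilted 5-15 degrees down.
for tilt in (5, 10, 15):
    d = ground_intersection_distance(1.8, tilt)
    print(f"tilt {tilt:2d} deg -> beam meets ground ~{d:.1f} m ahead")
```

The steeper the tilt within the 5~15° range, the closer the ground coverage, which matches the text's use of L1 to examine road undulations just ahead of the vehicle.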
- the principle of the binocular CCD camera acquisition system is similar to the human eye.
- the binocular vision based on the CCD camera is to add a CCD camera to the monocular vision.
- binocular vision can perceive the distance of an object: the farther the object, the smaller the parallax, and vice versa; binocular vision measures all obstacles directly, with higher measurement accuracy than monocular vision, calculating distance directly from parallax without the need for a huge data sample library.
- the present invention installs two CCD cameras (CCD1 and CCD2) on the front windshield of the unmanned vehicle to form binocular vision (see Figures 4 and 6) for long-distance environmental detection and obstacle avoidance.
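The parallax-to-distance relation described above is the classic pinhole stereo formula Z = f·B/d. As a minimal sketch (the focal length and baseline below are assumed illustrative values, not the patent's camera parameters):

```python
def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Pinhole stereo relation: depth Z = f * B / d.
    Larger disparity -> nearer object, matching the text above."""
    if disparity_px <= 0:
        raise ValueError("object at infinity or not matched in both cameras")
    return focal_px * baseline_m / disparity_px

# Assumed intrinsics: 800 px focal length, 30 cm baseline between CCD1 and CCD2.
near = stereo_depth_m(800, 0.30, 48)  # large disparity -> close object
far = stereo_depth_m(800, 0.30, 6)    # small disparity -> far object
print(f"near: {near:.1f} m, far: {far:.1f} m")  # near: 5.0 m, far: 40.0 m
```

Because distance falls straight out of the disparity, no sample feature library is needed, which is the advantage the text claims over monocular recognition.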
- Infineon Technologies AG specializes in the production of microwave radar for vehicles: the vehicle radar system emits radio waves, which are reflected back by vehicles or other objects ahead. Infineon's radar chip sends and receives these high-frequency signals and passes them to the radar electronic control unit (ECU), which measures the distance between the car and other moving objects as well as their speed, providing a distance criterion for manned and unmanned driving; Infineon microwave radars mainly operate at 77 GHz and 24 GHz, where 77 GHz is the standard frequency range for radar applications such as adaptive cruise control and collision warning.
- the 77 GHz radar chip lets the unmanned vehicle "recognize" obstacles and other road users within a distance of 250 meters; the 24 GHz radar chip can likewise "recognize" obstacles and other road users within a distance of 100 meters.
- this microwave radar is built with silicon-germanium technology; the new products, operating in the 24 GHz ISM band (24.0 GHz to 24.25 GHz), are equipped with an on-chip radar transceiver with the industry's highest integration level and a receive-only auxiliary chip, giving system designers the flexibility to achieve low-cost, high-performance design goals.
- the three devices of the new series are the BGT24MTR11 (single transmit and single receive channel), BGT24MTR12 (single transmit and dual receive channel) and BGTMR2 (dual receiver); for reasons of cost-performance, the present invention adopts the BGT24MTR11 for medium- and long-distance detection, as shown in Figure 5.
- the mounting position and method of the microwave radar MR1 are the same as those of the single-line laser radar L1, with the same included angle to the horizontal plane; the microwave radar MR3 is mounted between the single-line lidar L5 and the single-line lidar L6 (Figure 8).
- due to the combination of sensors, the unmanned vehicle generally has a blind spot in the forward motion area when it starts to drive forward; the present invention therefore adds a front blind-spot detection and obstacle avoidance system composed of a group of ultrasonic sensors US1, US2, US3, US4 and US5 at the bottom of the unmanned vehicle (see Figure 7; this is the front blind-zone ultrasonic sensor group FBZT in Figure 4).
- at the moment the unmanned vehicle starts, the front blind-spot detection system works; if there is no obstacle in the safe area when the unmanned vehicle starts to accelerate forward, the unmanned vehicle switches to the multi-sensor fusion navigation state.
- due to the combination of sensors, the unmanned vehicle generally also has a blind spot in the rear movement area when it starts to reverse; the present invention therefore adds a rear blind-spot detection and obstacle avoidance system composed of ultrasonic sensors US6, US7, US8, US9 and US10 (see Figure 8; this is the rear blind-zone ultrasonic sensor group BBZT in Figure 4); at the moment the unmanned vehicle starts to reverse, the rear blind-spot detection system works; if there is no obstacle in the safe area when the unmanned vehicle starts to accelerate in reverse, the unmanned vehicle enters the multi-sensor fusion navigation state.
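The start-up gating just described — only switch to the fusion navigation state when every blind-zone sensor reports a clear safe area — can be sketched as follows. The safe-distance threshold and the readings are illustrative assumptions, not values from the patent:

```python
def blind_zone_clear(readings_m, safe_distance_m=0.5):
    """Return True only if every ultrasonic sensor in the group reports
    no echo closer than the safe distance (None = no echo at all)."""
    return all(r is None or r >= safe_distance_m for r in readings_m)

# Hypothetical readings from the five front blind-zone sensors US1..US5.
assert blind_zone_clear([None, 1.2, 0.8, None, 2.0]) is True
assert blind_zone_clear([None, 1.2, 0.3, None, 2.0]) is False  # US3 sees an obstacle
```

The same check, applied to US6..US10, would gate the transition when reversing.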
- the new STM32F7 MCU series produced by STMicroelectronics is the world's first mass-produced microcontroller series with a 32-bit ARM Cortex-M7 processor.
- the products are equipped with a Cortex-M7 core with floating-point arithmetic units and DSP expansion functions.
- the STM32F7 has excellent instruction and pin compatibility: the Cortex-M7 is backward compatible with the Cortex-M4 instruction set, and the STM32F7 series is pin-compatible with the STM32F4 series; the core pushes the advantages of the Cortex-M4 to the extreme, achieving nearly twice the efficiency of a DSP.
- the above characteristics make the STM32F7 very suitable as a replacement for the STM32F4.
- Image processing can be roughly divided into low-level processing and high-level processing.
- Low-level processing has a large amount of data, simple algorithms, and large parallelism;
- high-level processing has complex algorithms and small data volumes.
- using software processing is a very time-consuming process, but using hardware processing, a large amount of data can be processed in parallel, which can greatly improve the processing speed.
- the FPGA itself is just a standard cell array and does not have the functions of a general integrated circuit, but users can recombine and connect its interior through specific placement and routing tools according to their own design needs, and design their own application-specific integrated circuits in the shortest time;
- because the FPGA applies the design ideas of software to the realization of hardware circuits, FPGA-based systems have good reusability and modifiability; this new design approach has gradually been applied to high-performance image processing and rapid development.
- the binocular vision synchronous acquisition and image data segmentation processing of the present invention are handed over to the FPGA, while the object binocular visual distance calculation is handed over to the ARM.
- the data processing connection of the two is shown in Figure 3.
- the invention abandons the single-line laser radar or multi-line laser radar working modes adopted by domestic unmanned vehicles, and proposes a new tri-core control mode based on a seventh-generation NUC microcomputer + ARM (the latest embedded STM32F767) + FPGA.
- multiple single-line lidars + dual CCD cameras + ultrasonic sensor fusion technology are used to detect and avoid obstacles.
- the control board takes the STM32F767 as its processing core, receives in real time the multi-sensor digital fusion signal from the host computer based on the NUC7 (seventh-generation NUC microcomputer) and the image signal collected by the CCD camera via the FPGA, and responds to various interrupts in real time to realize data communication with the master station and storage of real-time signals.
- NUC7: seventh-generation NUC microcomputer
- FPGA: field-programmable gate array
- the present invention introduces an FPGA and Intel's seventh-generation NUC microcomputer alongside the STM32F767-based controller to form a tri-core controller based on ARM+FPGA+NUC;
- this controller integrates the controller systems for multiple single-line lidar detection and obstacle avoidance, and fully considers the role of the battery in the system, realizing detection and obstacle avoidance for the unmanned vehicle in various areas.
- the processing of the multiple single-line lidar signals, the largest workload in the unmanned vehicle control system, is handed over to the NUC microcomputer, giving full play to the NUC's fast data processing speed, while the binocular vision image data processing of the CCD cameras is handed over to the ARM.
- the ARM, FPGA and NUC first complete initialization; the vehicle-mounted computer NUC then retrieves the driving path and map of the unmanned vehicle from the control station, after which the blind-spot ultrasonic sensor groups, the CCD-based binocular vision, the microwave radars and the multiple single-line lidars start to work.
- the ARM+FPGA controller determines that no obstacles have entered the working area and then turns on the unmanned vehicle walking mode; it processes the image data of the dual CCD cameras and the microwave radar data, and communicates with the NUC at the same time.
- the NUC receives and decodes multiple single-line lidar feedback signals in real time, and then communicates with the ARM+FPGA controller and transmits control signals to the ARM+FPGA controller.
- the ARM+FPGA controller precisely controls the DC brushless servo motor by decoding the output control signal.
- the DC brushless servo motor drives the unmanned vehicle to drive after the mechanical device transforms the power, and feeds back signals such as displacement, speed and acceleration to the ARM controller in real time.
- the unmanned vehicle control system is divided into two parts: the upper computer system based on the on-board computer NUC and the ARM+FPGA lower computer system based on STM32F767.
- the NUC-based upper computer system completes path and map input, multi-sensor data fusion and online output; the ARM+FPGA-based lower computer control system handles the unmanned vehicle's servo control, CCD binocular vision data processing, microwave radar ranging, I/O control and other functions; among these, the multi-axis DC brushless servo control with the largest workload, the CCD-based binocular vision data processing and the microwave radar ranging are jointly processed by the ARM+FPGA, giving full play to the respective data processing advantages of ARM and FPGA, thus realizing the division of labor between the NUC and the ARM+FPGA; at the same time, the three can communicate with each other and exchange and call data in real time.
- before receiving a motion command, the unmanned vehicle generally waits in the waiting area for the start command issued by the control station; if its voltage is low, the unmanned vehicle automatically docks with the charging device for charging.
- the on-board computer NUC retrieves the driving path and navigation map information of the unmanned vehicle through the control station, and then the ARM+FPGA controller turns on the front blind spot ultrasonic sensors US1 ⁇ US5 to detect the front blind spot.
- if an obstacle enters the blind spot, the ARM+FPGA controller will issue an alarm and wait for the obstacle to be cleared; if no obstacle enters the blind spot, the unmanned vehicle starts to accelerate automatically.
- once the unmanned vehicle starts, it enters the working-condition selection mode according to the weather: if the weather is good, the microwave radars MR1~MR3, the single-line lidar sensors L1~L6 and the CCD cameras are turned on.
- the CCD cameras and MR1~MR3 begin to transmit long-distance obstacle information to the ARM+FPGA controller, while at the same time the multiple single-line lidars begin to transmit short-range obstacle information to the NUC.
- the ARM+FPGA controller and NUC begin to decode the obstacle information and convert it into the distance between the obstacle and the unmanned vehicle.
- the unmanned vehicle starts autonomous navigation with the help of these feedback distance signals and begins to drive along the specified route; if the weather is bad, the ARM+FPGA controller prohibits the binocular CCD cameras and the single-line lidar sensors L1~L6 from working and turns on only the microwave radars MR1~MR3, which begin to transmit medium- and long-distance obstacle information to the ARM+FPGA-based controller.
- the ARM+FPGA controller and the NUC communicate with each other, and the unmanned vehicle starts autonomous navigation at reduced speed with the help of the MR1~MR3 feedback distance signals, driving along the prescribed route.
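The working-condition selection above reduces to a simple rule: good weather enables all sensor groups, bad weather disables the lidars and cameras and falls back on the microwave radars alone. A sketch (the dictionary layout is an assumption for illustration, not the patent's data format):

```python
def select_active_sensors(weather_good: bool):
    """Mirror the working-condition selection described above:
    good weather -> all sensors; bad weather -> microwave radars only."""
    if weather_good:
        return {"microwave": ["MR1", "MR2", "MR3"],
                "lidar": ["L1", "L2", "L3", "L4", "L5", "L6"],
                "camera": ["CCD1", "CCD2"]}
    # Lidars and cameras are prohibited in bad weather; MR1~MR3 take over.
    return {"microwave": ["MR1", "MR2", "MR3"], "lidar": [], "camera": []}

print(select_active_sensors(False))
```

In the bad-weather branch the vehicle therefore navigates (at reduced speed) on microwave-radar distance feedback alone, exactly as the two bullets above describe.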
- based on the existing road map information, the first task of the CCD camera is to find the road marking points: a marking point may be the end of the road, a turning point, or a docking site;
- the CCD camera communicates with the FPGA after finding these marking points
- the FPGA decodes the image of the CCD camera and then communicates with the ARM
- the ARM continues to process the marking-point data according to its internal algorithm and converts it into the PWM control signal of the DC brushless servo motor, which drives the unmanned vehicle to perform positioning and attitude adjustment before normal driving; after the unmanned vehicle completes positioning and attitude adjustment with the aid of the CCD camera image acquisition, it drives normally according to the on-board map information; during driving, the CCD camera transmits real-time images to the FPGA through the FPGA synchronous acquisition system, and the FPGA decodes and processes the data and transmits it to the ARM.
- the ARM controller uses its internal algorithm to convert the real-time decoded image data into the distance between the unmanned vehicle and the obstacle, and the ARM+FPGA controller then fine-tunes the control of the brushless DC servo motor so that the unmanned vehicle navigates autonomously, approaching and avoiding obstacles along a forward path that keeps clear of them; if the weather is bad, the CCD camera is prohibited from working, the unmanned vehicle abandons the long-distance obstacle avoidance mode, and it can only rely on the microwave radars MR1~MR3 to first achieve medium- and long-distance obstacle avoidance.
- the medium- and long-distance detection microwave radars MR1 and MR2 cooperate with the front short-range detection single-line lidars L1, L2, L3 and L4 to detect the environment ahead, and the unmanned vehicle starts to implement obstacle avoidance.
- L1 and MR1 can work independently; thanks to their inclination angle, MR1 and L1 cooperate well to detect the ups and downs of the road ahead: first MR1 detects at medium and long distance, then L1 performs further accurate confirmation.
- MR1 and L1 can easily find the depth and width of the undulations; MR1, MR2, L1 and L3 cooperate mainly to detect obstacles directly ahead: MR2 first detects at medium and long distance, MR1 performs a secondary confirmation once a suspected obstacle is found, and after the suspected obstacle is roughly confirmed, L1 and L3 confirm its precise position;
- MR1, MR2, L3 and L2 cooperate to detect obstacles at the left front: first MR2 detects at medium and long distance; after a suspected obstacle is found, MR1 performs a secondary confirmation; once the suspected obstacle is roughly determined, L3 and L2 confirm its precise position; MR1, MR2, L3 and L4 detect obstacles at the right front in the same way: first MR2 detects at medium and long distance, MR1 confirms after a suspected obstacle is found, and once the suspected obstacle is roughly determined, L3 and L4 confirm its precise location.
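The coarse-to-fine confirmation chain described above (MR2 flags a suspect at long range, MR1 re-confirms at medium range, then a lidar pair fixes the precise position) can be sketched as a small state function. The status strings are illustrative, not from the patent:

```python
def staged_confirmation(mr2_hit: bool, mr1_hit: bool, lidar_hits):
    """Coarse-to-fine confirmation: MR2 -> MR1 -> a lidar pair
    (e.g. L1+L3 ahead, L3+L2 left-front, L3+L4 right-front)."""
    if not mr2_hit:
        return "no obstacle"
    if not mr1_hit:
        return "suspect only (MR2); awaiting MR1 confirmation"
    if not lidar_hits:
        return "roughly confirmed (MR2+MR1); awaiting precise lidar fix"
    return f"confirmed at precise position by {'+'.join(lidar_hits)}"

assert staged_confirmation(False, False, []) == "no obstacle"
print(staged_confirmation(True, True, ["L3", "L2"]))  # the left-front case
```

Each later stage only runs once the earlier, longer-range stage has fired, which is what lets the cheap microwave scan gate the expensive precise lidar positioning.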
- MR2 detects at medium and long distance in the driving direction at all times, and its detection signal is input to the ARM controller for decoding to obtain the approximate distance of the obstacle; when the obstacle enters the detection range of MR1, the detection signals of MR2 and MR1 are sent simultaneously to the ARM controller for decoding, and the obstacle distance information is further refined; if there is no obstacle in the direction of motion, the unmanned vehicle continues at its original speed; if there is indeed a suspected obstacle, the unmanned vehicle decelerates to a low speed according to the speed control mode in Figure 11 and enters the medium- and long-distance detection and obstacle avoidance mode; the ARM+FPGA controller communicates with the NUC at all times.
- the unmanned vehicle turns on the low-speed driving mode and continues to use MR2 and MR1 as forward navigation sensors, approaching obstacles and relying on the microwave radar signals to avoid them; if the weather is good, both the microwave signals and the multiple single-line lidar signals are available, so the ARM+FPGA controller continues to receive the MR2 and MR1 signals while the NUC receives the feedback signals of the multiple single-line lidars L1~L6, and the unmanned vehicle enters the hand-over region between the microwave radar signals and the single-line lidar signals; once the single-line lidar L3 detects obstacle information, the collected microwave radar signal becomes an auxiliary signal and the controller enters the single-line lidar precise positioning and obstacle avoidance mode.
- if L3 and L1 detect undulating pits of a certain height in the forward motion path whose height and width exceed what the unmanned vehicle can cross, they send an interrupt request to the STM32F767 in the ARM+FPGA controller and at the same time transmit the pit data to the NUC and the ARM+FPGA for processing; the STM32F767 gives priority to the interrupt and enters the front avoidance protection subroutine; if the height and width of the undulating pit are within the tolerance range of the unmanned vehicle, the vehicle drives on at the set normal speed;
- if L1 and L3 detect an obstacle in the forward motion path, they send an interrupt request to the STM32F767 in the ARM+FPGA controller and transmit the obstacle data to the NUC for processing.
- the STM32F767 prioritizes the interrupt and enters the emergency forward obstacle avoidance protection subroutine: according to the data communicated by the NUC, the STM32F767 performs an emergency avoidance yield to the left or right; if no obstacle enters the operating range, the unmanned vehicle accelerates to the set normal speed according to the speed control mode in Figure 11.
- if L2 and L3 detect an obstacle in the left-front motion path, they send an interrupt request to the STM32F767 in the ARM+FPGA controller and transmit the obstacle data to the NUC for processing.
- the STM32F767 prioritizes the interrupt and enters the emergency left-front obstacle avoidance protection subroutine: according to the data communicated by the NUC, the STM32F767 performs an emergency avoidance yield to the right; if no obstacle enters the operating range, the unmanned vehicle accelerates to the set normal speed according to the speed control mode in Figure 11;
- if L4 and L3 detect an obstacle in the right-front motion path, they send an interrupt request to the STM32F767 in the ARM+FPGA controller and transmit the obstacle data to the NUC for processing.
- the STM32F767 prioritizes the interrupt and enters the emergency right-front obstacle avoidance protection subroutine: according to the data communicated by the NUC, the STM32F767 performs an emergency avoidance yield to the left; if no obstacle enters the operating range, the unmanned vehicle accelerates to the set normal speed according to the speed control mode in Figure 11.
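The "speed control mode in Figure 11" that these bullets invoke is an acceleration/deceleration curve; a common realization is a trapezoidal profile, sketched below. Every numeric value here is an assumed illustration, since the patent text gives no numbers:

```python
def speed_at(t: float, v_max=5.0, accel=1.0, decel=2.0, cruise_end=20.0) -> float:
    """Trapezoidal speed profile of the kind Figure 11 suggests:
    ramp up at `accel`, cruise at `v_max`, ramp down at `decel`.
    All parameter values are illustrative assumptions."""
    t_up = v_max / accel
    if t < t_up:                      # acceleration phase
        return accel * t
    if t < cruise_end:                # constant-speed (cruise) phase
        return v_max
    return max(0.0, v_max - decel * (t - cruise_end))  # deceleration phase

assert speed_at(0.0) == 0.0
assert speed_at(10.0) == 5.0          # cruising at v_max
assert speed_at(22.5) == 0.0          # fully stopped after decelerating
```

Decelerating to low speed on a suspected obstacle and re-accelerating to the set normal speed once the range is clear are then just moves along this curve.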
- the single-line lidars L5 and L6 and the microwave radar MR3 monitor the environment behind the vehicle at all times.
- if MR3, L5 or L6 judges that an obstacle behind is approaching the unmanned vehicle, it sends an interrupt request to the STM32F767 in the ARM+FPGA controller and transmits the obstacle data to the NUC for processing;
- the STM32F767 gives priority to the interrupt, enters the rear obstacle avoidance protection subroutine and issues an alarm; if no obstacle behind enters the protection range, the unmanned vehicle drives at the set normal speed.
- the front blind-spot ultrasonic sensors US1~US5 and the rear blind-spot ultrasonic sensors US6~US10 monitor the blind-zone environment at all times; if US1~US5 or US6~US10 judges that a temporary obstacle is approaching the unmanned vehicle's blind zone, an interrupt request is sent to the STM32F767 and the obstacle data is transmitted to the NUC for processing; the STM32F767 gives priority to the interrupt, and the unmanned vehicle then enters the blind-spot obstacle avoidance protection subroutine and issues an alarm; if no obstacle in the blind zone enters the protection range, the unmanned vehicle drives at the set normal speed.
- under the condition that the unmanned vehicle has entered the track and its normal running speed meets the requirements, the single-line lidars L1~L6, ultrasonic sensors US1~US10, binocular CCD cameras and microwave radars MR1~MR3 collect peripheral signals in real time and feed them back to the NUC or the ARM+FPGA controller.
- the NUC performs the laser radar data fusion processing
- the FPGA performs the binocular CCD image data processing
- the ARM performs the microwave radar data processing and responds to the various interrupt protections, and then the three communicate with each other.
- the STM32F767 generates a control signal to the brushless DC servo motor based on the sensor fusion decoded signal.
- the controller adjusts the motion of the servo motor to change the motion speed and direction of the unmanned vehicle, so that the unmanned vehicle can easily follow the on-board input path.
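The step from a fused nearest-obstacle distance to a servo speed setpoint can be sketched as a simple piecewise-linear law: stop when very close, full speed when clear, interpolate in between. This is a plausible illustration of the idea, not the patent's actual control law, and all thresholds are assumed:

```python
def speed_command(obstacle_dist_m: float, v_max=5.0,
                  stop_dist=1.0, slow_dist=10.0) -> float:
    """Map the fused nearest-obstacle distance to a speed setpoint:
    0 inside `stop_dist`, `v_max` beyond `slow_dist`, linear in between."""
    if obstacle_dist_m <= stop_dist:
        return 0.0
    if obstacle_dist_m >= slow_dist:
        return v_max
    frac = (obstacle_dist_m - stop_dist) / (slow_dist - stop_dist)
    return v_max * frac

assert speed_command(0.5) == 0.0      # obstacle very close: stop
assert speed_command(50.0) == 5.0     # clear road: set normal speed
```

In the architecture above, this mapping would live in the STM32F767, which turns the fused, decoded distance into the PWM command for the brushless DC servo motor.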
- when the unmanned vehicle has entered the track and is running normally, the binocular CCD cameras read in real time the various navigation signs on the ground in key sign areas on both sides of the direction of motion, and directly transmit the collected images to the FPGA; the FPGA decodes and processes them and transmits the result to the ARM; the STM32F767 treats the decoded image data as one of the navigation cues for tasks such as fast forwarding, parking, starting and turning of the high-speed unmanned vehicle, and performs a secondary attitude adjustment.
- the present invention adds a ground sign at each site position; when the unmanned vehicle is about to arrive at a station, the ARM+FPGA controller reads the station sign through the binocular CCD camera and automatically increments a counter each time a station is read; to realize the automatic walking-cycle function of the unmanned vehicle, the counter automatically zeroes when the last station is reached and recounting starts from station 1.
- the ARM controller stores and generates the station-entry information record table and sends it to the control station through the wireless device, which facilitates the control station's tracking of the unmanned vehicle's position and its scheduling of the vehicle.
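The station counting with automatic zeroing and the entry-record table described in the two bullets above can be sketched in a few lines; the class and attribute names are illustrative assumptions:

```python
class StationCounter:
    """Count station signs read by the binocular camera; after the last
    station the counter zeroes and recounting restarts from station 1,
    giving the automatic walking-cycle behaviour described above."""
    def __init__(self, total_stations: int):
        self.total = total_stations
        self.current = 0   # 0 = not yet at any station
        self.log = []      # entry-information record, sent to the master station

    def station_sign_read(self) -> int:
        # Wrap to station 1 after the last station (automatic zeroing).
        self.current = 1 if self.current >= self.total else self.current + 1
        self.log.append(self.current)
        return self.current

counter = StationCounter(total_stations=3)
print([counter.station_sign_read() for _ in range(5)])  # -> [1, 2, 3, 1, 2]
```

The `log` list plays the role of the entry-information record table that the ARM sends wirelessly to the control station for position tracking and scheduling.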
- the present invention adds a stop-selection function: in the initial stage of operation, the control station can freely set the stops the unmanned vehicle needs to visit, and the unmanned vehicle then completes this schedule independently, relying on its own sensors; if an emergency is encountered during operation, the control station may need to change the running path or the stop sites.
- the control master station communicates with the unmanned vehicle's ARM+FPGA+NUC tri-core controller through a wireless device and wirelessly sends the changed route information to the on-board computer NUC;
- the unmanned vehicle's on-board computer NUC receives and automatically updates the route and stop-site information and communicates with the ARM+FPGA controller;
- the unmanned vehicle then drives according to the new requirements to complete the task.
Abstract
Provided are a multi-sensor integrated unmanned vehicle detection and obstacle avoidance system and obstacle avoidance method; the obstacle avoidance system comprises a plurality of single-line lidars, dual CCD cameras, microwave radar, front and rear blind-spot ultrasonic sensor sets, and a triple-core controller based on ARM+FPGA+NUC; the single-line lidar signals are processed by an NUC microcomputer; the binocular vision graphic data from the CCD cameras is processed by the ARM+FPGA controller together, and blind-spot detection and obstacle avoidance, human-computer interface, path planning, online output, and other functions are left to the STM32F767 to perform alone; the ARM+FPGA controller precisely controls a DC brushless servo motor by means of decoding the output control signal, driving the unmanned vehicle. Thus the unmanned vehicle is capable of detecting obstacles in complex environments from a greater distance in any weather and quickly performing effective obstacle avoidance, thereby improving the safety and stability of the unmanned vehicle when traveling at high speed.
Description
The invention belongs to the technical field of unmanned driving, and in particular relates to a multi-sensor fusion unmanned vehicle detection and obstacle avoidance system and an obstacle avoidance method.
With rapid economic development, automobiles have become an increasingly important part of people's lives. Driver negligence leads to many accidents, so car manufacturers focus on designing systems that ensure vehicle safety; safety is one of the main factors driving demand for driverless cars. In addition, the severe traffic congestion in China's large cities makes driving unpleasant, and artificial-intelligence-based unmanned vehicles could fully solve such congestion problems by replacing human drivers; poor air quality is a further "catalyst" for the promotion of driverless cars.
A driverless car is an intelligent vehicle that senses the road environment through an on-board sensing system, automatically plans its driving route, and controls the vehicle to reach a predetermined target. It uses on-board sensors to perceive the surroundings and, according to the road, vehicle-position and obstacle information obtained, controls the steering and speed of the vehicle so that it can travel on the road safely and reliably. Driverless cars integrate many technologies such as automatic control, system architecture, artificial intelligence and visual computing; they are the product of highly developed computer science, pattern recognition and intelligent control technology, an important indicator of a country's scientific research strength and industrial level, and have broad application prospects in national defense and the national economy.
At present, the development of unmanned vehicles is still in its infancy, and many countries have begun research on intelligent driverless cars. Whatever the level of intelligent driving, the first step is perception, that is, sensing the complex road conditions and environment around the vehicle; only on this basis can path planning and driving-behavior decisions be made, so the choice of perception sensors is a prerequisite for successful obstacle avoidance. Commonly used ranging sensors include ultrasonic ranging sensors, infrared ranging sensors, CCD vision systems, millimeter-wave radar, microwave radar and lidar.
Lidar is essentially a radar operating in the optical band. It is an active sensing technology that does not depend on ambient lighting or on the radiation characteristics of the target itself: it emits its own laser beam and obtains target information by detecting the echo of that beam. Because the laser wavelength is short, the beam can be emitted with a very small divergence angle, multipath effects are small, and low-altitude and ultra-low-altitude targets can be detected. Single-line lidar is one type of lidar. With only one transmitting channel and one receiving channel, its structure is relatively simple and it is convenient to use; it has a short scanning period, scans the environment ahead quickly, and offers high angular resolution. The device itself is small, relatively light, low in power consumption, highly reliable, and comparatively inexpensive. It also has a relatively wide detection range and provides distance information for a large number of environmental scan points, which greatly facilitates control decisions. These advantages make single-line lidar a preferred choice for unmanned vehicles perceiving unknown environments.
The structure of a typical simple driverless vehicle is shown in Figure 1, and the principle of its detection and obstacle avoidance system in Figure 2. A lidar sensor detection system senses the environment and sends the data to a PC; the PC encodes and processes the data and sends control commands to a microcontroller-based lower computer; after decoding the communication, the microcontroller control module sends commands to brushless DC motor controllers, which drive several brushless DC motors. The microcontroller control system adjusts motor speed according to changes in the surrounding environment, thereby controlling the vehicle's position so that it can travel and avoid obstacles in real working conditions. Existing simple unmanned-vehicle control systems all realize these functions with a single microcontroller controlling a single single-line or multi-line lidar sensor. In long-term operation, existing driverless vehicles exhibit many problems, mainly:
(1) Because the vehicle is subject to unstable disturbances from its surroundings and microcontroller-based controllers have poor immunity to interference, abnormalities frequently occur and the vehicle can lose control;
(2) Existing driverless vehicles use low-end DSP and ARM chips with maximum operating frequencies of only about 100 MHz, which cannot support the fast computation required for the vehicle's complex data;
(3) Limited by the performance of the on-board PC, the data collected by the vehicle's sensors cannot be computed and stored quickly;
(4) The data from a single-line lidar are 2D and cannot distinguish information such as target height; some small objects are ignored and eventually become obstacles, so navigation with a single single-line lidar sensor has become a bottleneck in the vehicle field;
(5) A single single-line lidar cannot acquire road-surface information and needs other sensors to read and interpret the ground;
(6) Although multi-line lidar can provide 2.5D or 3D data, judge obstacle height, and process ground information, it is relatively expensive, with a 64-beam lidar selling for as much as 700,000 RMB, and so cannot be deployed widely;
(7) A single single-line lidar cannot detect corners, road edges, and similar features; it must be combined with other sensors to read surrounding obstacle signals or positioning markers;
(8) Existing unmanned vehicles generally consider only forward detection and avoidance and ignore obstacle information behind them; an obstacle appearing from the rear can sometimes damage the vehicle body, and the vehicle cannot accelerate away from it;
(9) A vehicle based on a single single-line lidar has a detection blind zone at the moment of start-up; if an obstacle lies in the blind zone, a traffic accident can easily occur;
(10) Such a vehicle also has detection blind zones while actually driving; if an obstacle enters a blind zone during motion, a traffic accident can likewise occur;
(11) A vehicle based on a single-line lidar acquires images of the road ahead slowly, which limits how fast it can travel;
(12) Over long distances, such a vehicle recognizes its surroundings poorly and cannot achieve precise positioning;
(13) In regular traffic there are various traffic markings on the road surface along the vehicle's path, but a single-line lidar cannot recognize them, so the vehicle loses this navigation aid when travelling fast;
(14) In regular traffic there are traffic lights and other signals above the road, but a single-line lidar cannot recognize them, which weakens safety when travelling fast;
(15) Constrained by price and performance, reasonably cost-effective lidars generally have a detection range of less than 100 meters, which is unfavorable for judging obstacles when the vehicle is travelling fast.
The principle and structure of a vision sensor are similar to those of human sensory organs, and vision sensors are small, inexpensive, easy to install, well concealed, wide in detection range, and rich in information. Adding cameras to the unmanned vehicle's environment-detection system lets it sense its surroundings in real time, collect data, and identify, detect, and track static and dynamic objects; combined with navigation map data and systematic computation and analysis, the controller can anticipate possible dangers, effectively increasing driving comfort and safety. The drawbacks of lidar and vision sensors are:
(1) Limited by the inherent defects of monocular vision, the detection distance of such a sensor system is better than that of a single-line lidar but still unfavorable for high-speed driving;
(2) CCD-based monocular obstacle recognition requires a feature library with a large data capacity; if an object has no matching entry in the library, it cannot be identified and its distance cannot be estimated accurately, which is unfavorable for high-speed driving;
(3) Single-line lidar, multi-line lidar, and cameras are all very sensitive to rain, fog, dust, and smoke, which strongly absorb the laser and degrade video acquisition; in such weather the performance of lidar and video signals drops sharply, significantly affecting the safety of the vehicle;
(4) They are also very sensitive to strong light: intense sunlight can sharply degrade lidar and camera performance, sometimes to the point of no signal output, significantly affecting the safety of the vehicle.
Microwaves are radio waves with very short wavelengths. They are highly directional, travel at the speed of light, and are reflected immediately upon encountering an obstacle; the reflection can be received by the radar, which measures the obstacle's distance from the round-trip time of the electromagnetic wave. Compared with optical guidance such as infrared and laser, microwaves penetrate fog, smoke, and dust well and work day and night in all weather (except heavy rain).
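The round-trip ranging principle just described reduces to d = c · t / 2. The following sketch is illustrative only (the timing value is an example, not from the patent):

```python
# Microwave radar ranges by round-trip time of flight: d = c * t / 2.
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(t_seconds: float) -> float:
    """Distance to the reflecting obstacle from the echo round-trip time."""
    return C * t_seconds / 2.0

# An echo returning after 1 microsecond corresponds to roughly 150 m.
d = range_from_round_trip(1e-6)
print(round(d, 1))  # 149.9
```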
SUMMARY OF THE INVENTION
In view of the deficiencies of the prior art, the present invention provides a multi-sensor-fusion detection and obstacle avoidance system and obstacle avoidance method for unmanned vehicles, so that the unmanned vehicle can detect obstacles in complex environments from a distance in all weather and avoid them quickly and effectively, thereby improving safety during high-speed driving.
The present invention achieves the above technical purpose through the following technical means.
A multi-sensor-fusion detection and obstacle avoidance method for unmanned vehicles, specifically:
The on-board computer (NUC) retrieves the vehicle's driving path and navigation map information from the control station; once the ARM+FPGA controller has confirmed, via the front blind-zone ultrasonic sensor group, that the motion blind zone is free of obstacles, the vehicle begins to accelerate automatically.
At the moment of start-up, the vehicle enters a mode-selection step based on the weather. In good weather the microwave radars, single-line lidar sensors, and CCD cameras all operate: the CCD cameras and microwave radars transmit long-range obstacle information to the ARM+FPGA controller, and the single-line lidars transmit short-range obstacle information to the NUC; after processing, this obstacle information serves as the feedback distance signal for the vehicle's autonomous navigation. In bad weather only the microwave radars operate, and the vehicle navigates autonomously at reduced speed based on the feedback distance signal.
After the vehicle enters its route: in good weather, the ARM+FPGA adjusts the vehicle's pose for normal driving using the received road marking points, drives normally according to the on-board map, and approaches and avoids distant obstacles along its direction of travel; in bad weather, the vehicle performs medium- and long-range obstacle avoidance.
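The weather-dependent sensing-mode selection above can be summarized as a small decision. This is a minimal sketch with assumed names (the `SensorPlan` fields and the speed policy are illustrative, not from the patent):

```python
# Sketch of the start-up mode selection: good weather enables all sensors,
# bad weather leaves only the microwave radar active at reduced speed.
from dataclasses import dataclass

@dataclass
class SensorPlan:
    microwave_radar: bool
    single_line_lidar: bool
    ccd_camera: bool
    speed_limited: bool  # bad weather -> drive at reduced speed

def select_mode(weather_is_good: bool) -> SensorPlan:
    if weather_is_good:
        # Cameras and microwave radar feed long-range obstacle data to the
        # ARM+FPGA controller; lidars feed short-range data to the NUC.
        return SensorPlan(True, True, True, speed_limited=False)
    # Bad weather: only the microwave radar is usable.
    return SensorPlan(True, False, False, speed_limited=True)

print(select_mode(False))
```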
Further, as the vehicle approaches a distant obstacle, the microwave radars at the front and on top of the vehicle cooperate with the single-line lidars to avoid it, specifically: microwave radar MR1 and single-line lidar L1 together probe undulations in the road ahead, with MR1 performing medium/long-range detection first and L1 then precisely confirming the depth and width of the undulation. Microwave radars MR1 and MR2 cooperate with single-line lidars L1 and L3 to determine whether there is an obstacle directly ahead: MR2 performs medium/long-range detection first; when a suspected obstacle is found, MR1 performs a second confirmation; once the suspected obstacle is roughly located, L1 and L3 confirm its precise position. MR1 and MR2 cooperate with single-line lidars L3 and L2 to determine whether there is an obstacle ahead on the left: MR2 detects first, MR1 performs a second confirmation, and L3 and L2 then confirm the precise position. MR1 and MR2 cooperate with single-line lidars L3 and L4 to determine whether there is an obstacle ahead on the right: MR2 detects first, MR1 performs a second confirmation, and L3 and L4 then confirm the precise position.
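The coarse-to-fine confirmation cascade above (MR2, then MR1, then a lidar pair) can be sketched as follows. Sensor readings are modeled as simple callables returning a detection or `None`; the names and the `(range, bearing)` detection format are assumptions for illustration, not the patent's implementation:

```python
# Three-stage confirmation: medium/long-range scan -> second confirmation
# -> precise position from a lidar pair. Any stage reporting nothing
# short-circuits the cascade.
from typing import Callable, Optional, Tuple

Detection = Tuple[float, float]  # (range_m, bearing_deg), illustrative

def confirm_obstacle(
    mr2: Callable[[], Optional[Detection]],
    mr1: Callable[[], Optional[Detection]],
    lidar_pair: Callable[[], Optional[Detection]],
) -> Optional[Detection]:
    if mr2() is None:          # stage 1: MR2 medium/long-range detection
        return None
    if mr1() is None:          # stage 2: MR1 second confirmation
        return None
    return lidar_pair()        # stage 3: precise position from lidars

# Example: all three stages report an obstacle; the lidar pair's precise
# fix (42 m, dead ahead) is what the cascade returns.
hit = confirm_obstacle(lambda: (80.0, 0.0), lambda: (60.0, 0.0), lambda: (42.0, 0.0))
print(hit)  # (42.0, 0.0)
```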
Further still, when single-line lidar L3 detects obstacle information, the vehicle enters a precise lidar positioning and navigation mode:
If lidars L3 and L1 detect an undulation in the road ahead whose height and width exceed what the vehicle can cross, the vehicle performs forward avoidance protection; if the undulation is within what the vehicle can cross, it continues at the set normal speed;
If lidars L1 and L3 detect an obstacle in the path ahead, the vehicle performs an emergency avoidance maneuver to the left or right; if there is no obstacle, it accelerates to the set normal speed;
If lidars L2 and L3 detect an obstacle in the path ahead on the left, the vehicle performs an emergency avoidance maneuver to the right; if there is no obstacle, it accelerates to the set normal speed;
If lidars L4 and L3 detect an obstacle in the path ahead on the right, the vehicle performs an emergency avoidance maneuver to the left; if there is no obstacle, it accelerates to the set normal speed.
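The four rules above form a small decision table keyed on which lidar pair reports the obstacle. A minimal sketch; the sector labels and maneuver strings are assumed names, not from the patent:

```python
# Map the sector reported by a lidar pair to an avoidance maneuver.
# Assumed sector labels: 'ahead' (L1+L3), 'left' (L2+L3), 'right' (L4+L3),
# or 'clear' when no obstacle is detected.
def avoidance_action(obstacle_sector: str) -> str:
    table = {
        "ahead": "emergency avoid left or right",
        "left": "emergency avoid right",
        "right": "emergency avoid left",
        "clear": "accelerate to set normal speed",
    }
    return table[obstacle_sector]

print(avoidance_action("left"))  # emergency avoid right
```

Note the symmetry the patent describes: an obstacle on one side always triggers avoidance toward the opposite side.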
Further, after the vehicle enters its route, the single-line lidars and microwave radar at the rear continuously monitor the environment behind it; if it is judged that an obstacle behind is approaching the vehicle, rear obstacle avoidance protection is performed.
Further, after the vehicle enters its route, the front and rear blind-zone ultrasonic sensor groups continuously monitor the blind zones; if it is judged that a temporary obstacle is approaching a blind zone, blind-zone obstacle avoidance protection is performed.
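One plausible way to implement the "obstacle approaching" judgment in these two clauses is to compare successive range readings and trigger when the range shrinks faster than a threshold. A hedged sketch only; the threshold, sample period, and function name are assumptions, not specified by the patent:

```python
# True if the latest range readings show the obstacle closing in faster
# than v_thresh_mps (closing speed estimated from two successive samples).
def approaching(ranges_m: list, dt_s: float, v_thresh_mps: float = 0.5) -> bool:
    if len(ranges_m) < 2:
        return False
    closing_speed = (ranges_m[-2] - ranges_m[-1]) / dt_s  # positive = closing
    return closing_speed > v_thresh_mps

print(approaching([10.0, 9.0, 7.5], dt_s=0.1))  # True
```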
Further, while the vehicle is operating normally, the CCD cameras read the various navigation markers on both sides of the direction of travel; after processing by the ARM+FPGA controller, these serve as navigation marks for high-speed operation.
Further, while the vehicle is operating normally, the CCD cameras read the markers of the stations the vehicle reaches, enabling automatic travel, position tracking, and dispatching of the unmanned vehicle.
A multi-sensor-fusion detection and obstacle avoidance system for unmanned vehicles comprises multiple single-line lidars, dual CCD cameras, microwave radars, a front blind-zone ultrasonic sensor group, a rear blind-zone ultrasonic sensor group, and a triple-core controller based on ARM+FPGA+NUC. The single-line lidars communicate with the NUC; the dual CCD cameras, microwave radars, and front and rear blind-zone ultrasonic sensor groups all communicate with the ARM+FPGA controller; and the NUC communicates with the ARM+FPGA controller.
In the above technical solution, the multiple single-line lidars include a single-line lidar L1 mounted on the vehicle roof at an angle α to the horizontal plane, where α is 5 to 15°.
In the above technical solution, the single-line lidars further include a single-line lidar group mounted at the front of the vehicle and a single-line lidar group mounted at the rear, with microwave radars arranged between the single-line lidar groups.
The beneficial effects of the present invention are:
(1) Having the NUC process the data fusion of the vehicle's multiple single-line lidars keeps control relatively simple, greatly increases computing speed, removes the bottleneck of slow software execution on a single ARM, shortens the development cycle, and yields highly portable programs.
(2) The invention fully realizes single-board control of the unmanned vehicle, saving the space occupied by control boards, and also achieves effective detection and obstacle avoidance over multiple independent zones of the vehicle, benefiting the stability and dynamic performance of the system.
(3) Because the controller uses the NUC to process the data and algorithms of the multiple single-line lidar sensors and takes surrounding interference sources fully into account, the ARM is relieved of a heavy workload; this effectively prevents the main motion-control program from "running away", and the vehicle's immunity to interference is greatly enhanced.
(4) The synchronous acquisition system inside the FPGA ensures that the dual CCD cameras acquire data synchronously, guaranteeing the accuracy of the subsequent distance calculations.
(5) Because the controller uses the FPGA to process the large volume of binocular-vision image data and takes surrounding interference sources fully into account, the ARM is freed from heavy image-processing work; this both increases computing speed and effectively prevents the main motion-control program from "running away", greatly enhancing immunity to interference.
(6) The CCD cameras acquire image data at a greater distance than an economical single-line lidar can detect, widening the vehicle's obstacle-detection range; meanwhile the microwave radar fills the gap between the detection ranges of the cameras and the lidars, aiding obstacle tracking and distance determination, facilitating acceleration and deceleration, and improving the vehicle's dynamic performance.
(7) Binocular vision is not limited by a recognition rate, and it measures all obstacles directly using the parallax principle with higher accuracy than monocular vision, so the distance to an obstacle can be estimated more precisely and an avoidance warning issued earlier.
(8) The CCD cameras can effectively detect obstacles protruding from the ground around the direction of travel of the high-speed vehicle, which not only improves the accuracy of obstacle avoidance but also lets those obstacles serve as precise positioning references for navigation.
(9) The CCD cameras can effectively distinguish road markings in regular traffic, such as lane detection lines and straight-ahead or turn arrows; the vehicle can use these markings to correct its position and attitude, improving the stability and accuracy of autonomous navigation during free driving.
(10) The CCD cameras can effectively distinguish traffic signals such as green, yellow, and red lights; the vehicle can adjust its speed according to this information for fast driving, fast stopping, and similar needs, improving safety and stability during free driving.
(11) Single-line lidar L1 is set at an angle to the ground; this angle helps L1 locate more precisely the road-surface undulations found by the CCD cameras, preventing potholes caused by road damage from affecting normal driving.
(12) The same angle also helps L1 locate more precisely any small obstacles temporarily left on the road that the CCD cameras discover, giving the vehicle's control system a second notification to perform an emergency avoidance and safeguarding normal driving.
(13) The fusion of the multiple front single-line lidars with the ultrasonic sensors can precisely locate obstacles found by the CCD cameras and notify the vehicle's control system a second time to avoid them, benefiting the speed and safety of driving.
(14) Because the directions of the front single-line lidars and the microwave radars intersect, their fusion can precisely detect the columnar objects on both sides found by the CCD cameras, providing assistance for the vehicle's forward positioning.
(15) For the same reason, their fusion can precisely detect the free areas on both sides found by the CCD cameras, providing assistance for turning and obstacle avoidance.
(16) The fusion of the multiple rear single-line lidars with the rear microwave radar can effectively measure the distance between the vehicle and moving obstacles behind it; in an emergency the vehicle can, with the controller's help, accelerate out of the danger zone, protecting the vehicle body.
(17) The front blind-zone detection and avoidance system can effectively eliminate the close-range blind zone that appears when the vehicle begins accelerating forward, improving the safety and reliability of forward starts; it can also eliminate the close-range blind zones that arise in real time during normal driving, further improving safety and reliability.
(18) The rear blind-zone detection and avoidance system can effectively eliminate the close-range blind zone that appears when the vehicle begins reversing, improving safety and reliability while reversing; it can also eliminate the close-range blind zones that arise in real time during reversing, further improving safety and reliability.
(19) In rainy, foggy, smoky, or dusty weather, the microwave radars are activated to perform long- and medium-range detection of the environment ahead; navigating by microwave radar while the lidars are disturbed improves the vehicle's safety in harsh environments.
(20) To support large-scale multi-station operation, station sensors with a degree of redundancy are added, which benefits both the positioning of the vehicle and the control station's tracking of it.
(21) The binocular vision does not require the ARM controller to compare captured images against a huge sample feature library, directly eliminating recognition failures caused by missing feature-library data and improving the safety of the high-speed vehicle.
(22) When the vehicle encounters an emergency, the CCD cameras can transmit live images over a wireless link to the control station, which anticipates the situation and formulates a plan for emergency handling.
(23) During motion, the role of the battery in the system is fully considered: the ARM+FPGA+NUC triple-core controller monitors and computes the vehicle's operating state at all times, avoiding large current surges; this fundamentally eliminates the impact of large currents on the battery and prevents the premature battery aging caused by high-current discharge.
Figure 1 is a two-dimensional structural diagram of an ordinary simple driverless vehicle;
Figure 2 is a schematic diagram of an ordinary unmanned vehicle's detection and obstacle avoidance system;
Figure 3 is a schematic diagram of image processing with the ARM-FPGA connection of the present invention;
Figure 4 is a two-dimensional structural diagram of the multi-sensor-fusion driverless vehicle;
Figure 5 is a two-dimensional structural diagram of the arrangement of the front multi-element radar group;
Figure 6 is a two-dimensional structural diagram of the arrangement of the binocular-vision CCD monochrome cameras;
Figure 7 is a two-dimensional structural diagram of the arrangement of the front blind-zone ultrasonic sensor group;
Figure 8 is a two-dimensional structural diagram of the arrangement of the rear multi-element radar group and the rear blind-zone ultrasonic sensor group;
Figure 9 is a schematic diagram of the multi-sensor-fusion detection and obstacle avoidance system;
Figure 10 is a schematic diagram of the operation of the multi-sensor-fusion unmanned vehicle;
Figure 11 is a graph of the vehicle's acceleration and deceleration during operation.
The present invention is further described below with reference to the accompanying drawings and specific embodiments, but the protection scope of the present invention is not limited thereto.
SICK lidars use the mature laser time-of-flight principle together with multi-echo technology for non-contact detection; protection zones of various shapes can be configured according to on-site needs and simply modified at any time, and internal filtering combined with multi-echo processing gives the sensor reliable anti-interference performance. The LMS151 and LMS122 are SICK's new high-performance lidars for short-range detection: the LMS151 series reaches 50 m on objects with 10% reflectivity, while the LMS122 reaches up to 20 m. In view of these characteristics, the present invention uses a lidar group based on the LMS1xx series to form the unmanned vehicle's short-range front and rear obstacle detection and protection system. One LMS151-10100 single-line lidar L1 is mounted slightly above the roof at the center of its front edge, tilted downward at an angle α (5–15°) to the horizontal plane (Figs. 4 and 5). It works together with a front lidar group FLT of LMS151-10100 single-line lidars mounted about 40 cm above the ground and parallel to the horizontal plane (generally three units, L2, L3 and L4; see Figs. 4 and 5) to form an accurate forward short-range detection and obstacle avoidance system. L2 and L4 are located at the left front and right front of the vehicle, their center directions each turned approximately 30° away from the direction of motion, so that they effectively detect obstacles on the left and right sides of the vehicle respectively; L3 is located midway between L2 and L4, with its center direction aligned with the direction of motion. The present invention further adopts a rear lidar group BLT of LMS122-10100 single-line lidars mounted about 40–60 cm above the ground and parallel to the horizontal plane (generally two units, L5 and L6; see Figs. 4 and 8) to form the unmanned vehicle's rear detection and protection system.
The principle of the binocular CCD image acquisition system is similar to that of the human eye: binocular vision based on CCD cameras adds a second CCD camera to a monocular setup. Binocular vision can perceive the distance of an object; the farther the object, the smaller the parallax, and conversely, the nearer the object, the larger the parallax. Binocular vision measures all obstacles directly, with higher accuracy than monocular vision, and computes distance directly from the disparity without requiring a huge sample database. Based on these advantages, the present invention mounts two CCD cameras (CCD1 and CCD2) on the front windshield of the unmanned vehicle to form a binocular vision system (see Figs. 4 and 6) for long-range environment detection and obstacle avoidance.
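The disparity-to-distance relation underlying this paragraph can be illustrated with a minimal sketch. It assumes an ideal rectified stereo pair (the standard pinhole model Z = f·B/d); the focal length, baseline and disparity values below are hypothetical illustrations, not parameters from the patent:

```python
def disparity_to_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from the standard rectified-stereo relation Z = f * B / d.

    focal_px    - camera focal length in pixels
    baseline_m  - distance between CCD1 and CCD2 optical centers, in meters
    disparity_px - horizontal pixel offset of the object between the two images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (object at finite distance)")
    return focal_px * baseline_m / disparity_px

# Farther object -> smaller disparity, exactly as the text states:
# 12 px of disparity with a 1200 px focal length and 0.30 m baseline gives 30 m.
print(disparity_to_depth(1200.0, 0.30, 12.0))
```

Note that no trained sample database is involved: distance follows directly from camera geometry, which is the advantage the paragraph claims over learning-based monocular ranging.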
Infineon Technologies AG of Germany specializes in automotive microwave radar: the vehicle radar system emits radio waves, which are reflected back by vehicles or other objects ahead. Infineon's radar chips send and receive these high-frequency signals and pass them to the radar electronic control unit (ECU), which measures the distance between the vehicle and other moving objects as well as their speeds, providing a distance criterion for both manned and unmanned driving. Infineon microwave radars mainly operate at 77 GHz and 24 GHz. 77 GHz is the standard frequency range for radar applications such as adaptive cruise control and collision warning; even in low-visibility conditions, the 77 GHz radar chip lets the unmanned vehicle "recognize" obstacles and other road users within 250 m, while the 24 GHz radar chip can likewise "recognize" obstacles and other road users within 100 m. The microwave radar used here is a new product fabricated in a silicon-germanium process and operating in the 24 GHz ISM band (24.0 GHz to 24.25 GHz); it is equipped with an on-chip radar transceiver with the industry's highest level of integration and a receive-only auxiliary chip, allowing system designs to flexibly achieve low-cost, high-performance goals across many applications. The three devices of the new series are the BGT24MTR11 (single transmit, single receive channel), the BGT24MTR12 (single transmit, dual receive channels) and the BGT24MR2 (dual receiver). For reasons of cost-effectiveness, the present invention adopts the BGT24MTR11 for medium- and long-range detection. As shown in Fig. 5, microwave radar MR1 is mounted in the same position and manner as single-line lidar L1, at an angle β to the horizontal plane with β < α; microwave radar MR2 is mounted beside single-line lidar L3, and microwave radar MR3 is mounted between single-line lidars L5 and L6 (Fig. 8).
Because of the sensor arrangement, the unmanned vehicle generally has a blind zone in the forward motion area when it starts to drive forward. To prevent collisions at start-up, the present invention adds a front blind-zone detection and obstacle avoidance system composed of ultrasonic sensors US1–US5 at the bottom of the vehicle (see Fig. 7, i.e. the front blind-zone ultrasonic sensor group FBZT in Fig. 4). At the moment the vehicle starts to move forward, the front blind-zone detection system operates; if no obstacle is present in the safety zone while the vehicle accelerates forward, the vehicle switches to the multi-sensor fusion navigation state. Likewise, the vehicle generally has a blind zone in the rear motion area when it starts to reverse. To prevent collisions at start-up, the present invention adds a rear blind-zone detection and obstacle avoidance system composed of ultrasonic sensors US6–US10 at the bottom of the vehicle (see Fig. 8, i.e. the rear blind-zone ultrasonic sensor group BBZT in Fig. 4). At the moment the vehicle starts to reverse, the rear blind-zone detection system operates; if no obstacle is present in the safety zone while the vehicle accelerates in reverse, the vehicle switches to the multi-sensor fusion navigation state.
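The start-up gate described above can be sketched as a simple range check over the relevant ultrasonic group. This is a minimal illustration; the safety threshold and function names are hypothetical, and the patent does not specify numeric limits:

```python
from typing import Sequence

SAFE_DISTANCE_M = 0.5  # hypothetical blind-zone safety threshold, not from the patent

def blind_zone_clear(ultrasonic_ranges_m: Sequence[float]) -> bool:
    """True if every blind-zone sensor (US1-US5 forward, US6-US10 reverse)
    reports a range beyond the safety threshold."""
    return all(r > SAFE_DISTANCE_M for r in ultrasonic_ranges_m)

def try_start(direction: str, ranges_m: Sequence[float]) -> str:
    # The vehicle only leaves the waiting state when its start-up blind zone is clear;
    # otherwise it alarms and waits for the obstacle to be removed.
    if blind_zone_clear(ranges_m):
        return f"accelerate {direction}, switch to multi-sensor fusion navigation"
    return "alarm: obstacle in blind zone, wait for clearance"
```

The same check serves both directions of travel; only the sensor group polled (FBZT or BBZT) differs.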
The new STM32F7 MCU series from STMicroelectronics was the world's first mass-produced microcontroller family built on the 32-bit ARM Cortex-M7 processor. All parts feature a Cortex-M7 core with a floating-point unit and DSP extensions, running at up to 216 MHz; an AXI and multi-AHB bus matrix interconnecting core, peripherals and memory; a six-stage superscalar pipeline with a floating-point unit (FPU); two general-purpose DMA controllers plus a DMA dedicated to the graphics accelerator; and peripheral clocks independent of the CPU clock (dual clock support), so that system clock changes do not affect peripheral operation. Compared with earlier STM32 series, it offers a richer set of peripherals. Its excellent energy efficiency is due to ST's market-leading 90 nm process, a unique technique for reducing flash access time, and advanced clock and power optimization: in stop mode, with all register and SRAM contents retained, typical current consumption is 100 μA. The STM32F7 also offers excellent instruction and pin compatibility: the Cortex-M7 is backward compatible with the Cortex-M4 instruction set, and the STM32F7 series is pin-compatible with the STM32F4 series. The STM32F7 exploits to the fullest the performance advantage of the Cortex-M7 over earlier cores (such as the Cortex-M4), reaching nearly twice the performance of a DSP. These characteristics make the STM32F7 well suited to replace STM32F4-series chips for the multi-sensor fusion data processing of the unmanned vehicle.
Image processing can be roughly divided into low-level and high-level processing. Low-level processing involves large data volumes and simple algorithms with substantial parallelism; high-level processing involves complex algorithms and small data volumes. In the low-level stage, software processing is very time-consuming, whereas hardware processing can operate on large amounts of data in parallel and greatly increase processing speed. An FPGA by itself is only a standard cell array without the functions of an ordinary integrated circuit, but users can recombine and interconnect its internals with specific place-and-route tools according to their own design needs, producing their own application-specific circuit in the shortest time. Because an FPGA realizes hardware circuit design through a software-style design flow, FPGA-based systems are highly reusable and easy to modify; this design approach has gradually been applied to high-performance image processing and is developing rapidly.
Combining the advantages of ARM and FPGA, the present invention assigns synchronous binocular image acquisition and image data segmentation to the FPGA, while binocular distance calculation for objects is assigned to the ARM; the data-processing connection between the two is shown in Fig. 3.
To overcome the poor stability, poor responsiveness and poor cost-effectiveness of existing unmanned vehicles, the present invention abandons the single-line or multi-line lidar working mode adopted by domestic unmanned vehicles and proposes a new three-core control mode based on a seventh-generation NUC mini-computer + ARM (the latest embedded STM32F767) + FPGA. To reduce overall hardware cost and extend detection range, a fusion of multiple single-line lidars + dual CCD cameras + ultrasonic sensors is used for obstacle detection and avoidance. The control board takes the STM32F767 as its processing core; it receives in real time the multi-sensor digital fusion signals from the host system based on the NUC7 (seventh-generation NUC mini-computer) and the image signals acquired by the FPGA-based CCD cameras, responds to the various interrupts in real time, communicates with the control master station, and stores real-time signals.
To increase computing speed and guarantee the responsiveness, stability and reliability of the unmanned vehicle control system, the present invention introduces an FPGA and Intel's seventh-generation NUC mini-computer alongside the STM32F767-based controller, forming a three-core ARM+FPGA+NUC controller. This controller integrates the controllers of the multiple single-line lidar detection and obstacle avoidance systems into one design and fully considers the role of the battery in the system, realizing detection and obstacle avoidance in every region around the vehicle. The processing of the multiple single-line lidar signals, the heaviest workload in the control system, is assigned to the NUC mini-computer to exploit its fast data processing; the binocular image data from the CCD cameras is processed jointly by the ARM and the FPGA, each exploiting its strengths at different stages of image processing; and blind-zone detection and avoidance, the human-machine interface, path planning, online output and other functions are handled by the STM32F767 alone. This establishes a division of labor among the ARM, FPGA and NUC mini-computer, while the three communicate in real time for data exchange and invocation.
In the ARM+NUC+FPGA three-core controller designed by the present invention, when power is switched on, the ARM, FPGA and NUC first complete initialization; the on-board NUC computer then retrieves the vehicle's driving path and map information from the unmanned vehicle control master station, after which the blind-zone ultrasonic sensor groups, the CCD-based binocular vision, the microwave radars and the multiple single-line lidars begin to work. Once the ARM+FPGA controller has determined that no obstacle has entered the working area, it enables the driving mode and processes the binocular CCD image data and microwave radar data in real time while communicating with the NUC. The NUC receives and decodes the feedback signals of the multiple single-line lidars in real time, then communicates with the ARM+FPGA controller and transmits control signals to it. The ARM+FPGA controller decodes these into output control signals that precisely control the brushless DC servo motors; the motors, through the mechanical transmission, drive the vehicle, and feed back displacement, velocity, acceleration and other signals to the ARM controller in real time.
Referring to Fig. 9, the unmanned vehicle control system is divided into two parts: a host system based on the on-board NUC computer and an ARM+FPGA slave system based on the STM32F767. The NUC host system performs path and map input, multi-sensor data fusion, online output and related functions; the ARM+FPGA slave system performs servo control of the vehicle, binocular CCD data processing, microwave radar ranging, I/O control and related functions. The heaviest workloads, namely multi-axis brushless DC servo control, CCD-based binocular data processing and microwave radar ranging, are processed jointly by the ARM and FPGA, exploiting the data-processing strengths of each. This establishes the division of labor between the NUC and the ARM+FPGA, while the three cores communicate with one another and exchange and invoke data in real time.
The specific implementation process is as follows:
1) Before receiving a motion command, the unmanned vehicle generally waits in the waiting area for a departure command from the control master station; if its battery voltage is low, it automatically docks with the charging device to recharge.
2) Once the vehicle receives a departure task while waiting, the on-board NUC retrieves the driving path and navigation map information from the control master station, and the ARM+FPGA controller then activates the front blind-zone ultrasonic sensors US1–US5 to scan the front blind zone. If an obstacle enters the motion blind zone, the ARM+FPGA controller raises an alarm and waits for the obstacle to clear; if no obstacle enters the blind zone, the vehicle begins to accelerate automatically.
3) At the moment the vehicle starts, it enters an operating mode selected according to the weather. If the weather is good, microwave radars MR1–MR3, single-line lidars L1–L6 and the CCD cameras are all enabled: the CCD cameras and MR1–MR3 transmit long-range obstacle information to the ARM+FPGA controller while the multiple single-line lidars transmit short-range obstacle information to the NUC; the ARM+FPGA controller and the NUC decode this information into obstacle-to-vehicle distance signals and communicate with each other, and the vehicle begins to navigate autonomously along the prescribed route using these feedback distances. If the weather is bad, the ARM+FPGA controller disables the binocular CCD cameras and single-line lidars L1–L6 and enables only microwave radars MR1–MR3, which transmit medium- and long-range obstacle information to the ARM+FPGA controller; the ARM+FPGA controller and the NUC communicate with each other, and the vehicle begins to navigate autonomously at reduced speed along the prescribed route using the MR1–MR3 feedback distances.
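The weather-dependent sensor gating in step 3 amounts to a two-branch configuration choice. The sketch below is an illustration only; the dictionary keys and the "reduced" speed label are hypothetical names, not identifiers from the patent:

```python
def select_sensors(weather_ok: bool) -> dict:
    """Sketch of the start-up mode selection: good weather enables all three
    sensor groups; bad weather falls back to microwave radar only, at low speed."""
    if weather_ok:
        return {"microwave": ["MR1", "MR2", "MR3"],
                "lidar": ["L1", "L2", "L3", "L4", "L5", "L6"],
                "ccd": ["CCD1", "CCD2"],
                "speed": "normal"}
    # Bad weather: lidar and CCD are disabled, so only the 24 GHz radars
    # feed the controller, and the vehicle drives at reduced speed.
    return {"microwave": ["MR1", "MR2", "MR3"],
            "lidar": [],
            "ccd": [],
            "speed": "reduced"}
```

The microwave group is the one sensor modality kept alive in both branches, which is why it can serve as the fallback navigation source later in the text.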
4) After the vehicle enters its route, if the weather is good, the first task of the CCD cameras is to locate road landmark points using the existing road map information. A landmark may be the end of a road, a turning point, or a stop. When the CCD cameras find such landmarks they communicate with the FPGA, which decodes the camera images and communicates with the ARM; the ARM further processes the landmark data with its internal algorithms and converts it into PWM control signals for the brushless DC servo motors, driving the vehicle through positioning and attitude adjustment before normal driving. Once positioning and attitude adjustment based on CCD image acquisition are complete, the vehicle drives normally according to the on-board map information. While driving, the CCD cameras transmit real-time images to the FPGA through the FPGA synchronous acquisition system; the FPGA decodes them and passes the data to the ARM, whose internal algorithms convert the decoded image data into vehicle-to-obstacle distances. The ARM+FPGA controller then fine-tunes the control of the brushless DC servo motors so that the vehicle navigates autonomously, moving forward along a heading away from obstacles and performing avoidance. If the weather is bad, the CCD cameras are disabled; the vehicle abandons the long-range avoidance mode and must rely on microwave radars MR1–MR3 to achieve medium- and long-range obstacle avoidance first.
As the vehicle approaches a distant obstacle, the medium/long-range microwave radars MR1 and MR2, in cooperation with the forward short-range single-line lidars L1, L2, L3 and L4, continuously monitor the environment ahead, and the vehicle begins obstacle avoidance. L1 and MR1 can work independently; because they are tilted downward, MR1 and L1 together detect undulations in the road ahead well: MR1 first probes at medium/long range, then L1 performs further precise confirmation, so the depth and width of an undulation are easily determined. MR1, MR2, L1 and L3 cooperate mainly to detect obstacles directly ahead: MR2 probes at medium/long range first; when a suspected obstacle is found, MR1 performs a second confirmation; once the suspected obstacle is roughly localized, L1 and L3 confirm its precise position. MR1, MR2, L3 and L2 cooperate to detect obstacles ahead on the left: MR2 probes at medium/long range first; when a suspected obstacle is found, MR1 performs a second confirmation; once the suspected obstacle is roughly localized, L3 and L2 confirm its precise position. MR1, MR2, L3 and L4 detect obstacles ahead on the right: MR2 probes at medium/long range first; when a suspected obstacle is found, MR1 performs a second confirmation; once the suspected obstacle is roughly localized, L3 and L4 confirm its precise position.
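The detection chain above follows one repeated pattern: MR2 raises a suspicion, MR1 confirms it, and the relevant lidar pair localizes it. A minimal sketch of that staged escalation, with hypothetical stage names not taken from the patent:

```python
from enum import Enum, auto

class Stage(Enum):
    CLEAR = auto()       # nothing detected
    SUSPECTED = auto()   # MR2 medium/long-range hit only
    CONFIRMED = auto()   # MR1 second confirmation
    LOCALIZED = auto()   # lidar pair (L1/L3, L2/L3 or L3/L4) gives precise position

def confirmation_stage(mr2_hit: bool, mr1_hit: bool, lidar_hit: bool) -> Stage:
    """Each later sensor only refines a detection the earlier one already raised,
    mirroring the MR2 -> MR1 -> lidar-pair sequence in the text."""
    if not mr2_hit:
        return Stage.CLEAR
    if not mr1_hit:
        return Stage.SUSPECTED
    return Stage.LOCALIZED if lidar_hit else Stage.CONFIRMED
```

The same function covers all three sectors (center, left front, right front); only which lidar pair supplies `lidar_hit` changes.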
During normal driving, MR2 continuously probes the medium/long range along the direction of travel, and its detection signal is decoded by the ARM controller to obtain the approximate obstacle distance; once an obstacle enters the detection range of MR1, the signals of MR2 and MR1 are both fed to the ARM controller for decoding, yielding further distance information about the obstacle. If no obstacle enters the direction of motion, the vehicle continues at its original speed; if a suspected obstacle does exist, the vehicle decelerates to low speed according to the speed control pattern of Fig. 11 and enters the medium/long-range detection and avoidance mode, with the ARM+FPGA in constant communication with the NUC. If the sensors detect very bad weather, the multiple single-line lidars will suffer severe interference; the ARM+FPGA controller then continues to receive the MR2 and MR1 microwave radar signals but discards the L1–L6 lidar feedback transmitted by the NUC, and the vehicle switches to low-speed driving mode, continuing to use MR2 and MR1 as its forward navigation sensors to approach obstacles and avoiding them on microwave radar signals alone. If the weather is good, both the microwave signals and the multiple single-line lidar signals are sound; the ARM+FPGA controller continues to receive the MR2 and MR1 signals while the NUC receives the L1–L6 lidar feedback, and the vehicle enters the handover region between microwave radar and single-line lidar signals. Once single-line lidar L3 detects obstacle information, the microwave radar signals become auxiliary and the controller enters the single-line lidar precise positioning and navigation mode:
If L3 and L1 detect an undulating pit of a certain height in the forward path, and its height and width exceed what the vehicle can cross, an interrupt request is issued to the STM32F767 in the ARM+FPGA controller while the undulation data is transmitted to the NUC and the ARM+FPGA for processing; the STM32F767 services the interrupt with priority and enters the forward evasion protection subroutine. If the height and width of the undulation are within the vehicle's tolerance, the vehicle continues at the set normal speed.
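The tolerance test above can be sketched as a two-threshold check. The numeric limits below are hypothetical; the patent states only that the vehicle has a crossing tolerance, not its values:

```python
# Hypothetical crossing limits; the patent gives no numeric values.
MAX_CROSSABLE_HEIGHT_M = 0.10
MAX_CROSSABLE_WIDTH_M = 0.25

def pit_action(height_m: float, width_m: float) -> str:
    """Within tolerance -> keep the set normal speed; otherwise raise an
    interrupt to the STM32F767 and enter the forward evasion subroutine."""
    if height_m <= MAX_CROSSABLE_HEIGHT_M and width_m <= MAX_CROSSABLE_WIDTH_M:
        return "continue at normal speed"
    return "interrupt STM32F767: forward evasion protection subroutine"
```

Both dimensions must be within tolerance for the vehicle to proceed, since either an excessive depth or an excessive width prevents crossing.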
If L1 and L3 detect an obstacle in the forward path, an interrupt request is issued to the STM32F767 in the ARM+FPGA controller while the obstacle data is transmitted to the NUC for processing; the STM32F767 services the interrupt with priority and enters the emergency forward avoidance protection subroutine: based on the data communicated by the NUC, it performs an emergency avoidance maneuver yielding to the left or right. If no obstacle enters the operating range, the vehicle accelerates to the set normal speed according to the speed control pattern of Fig. 11.
If L2 and L3 detect an obstacle in the left-front path, an interrupt request is issued to the STM32F767 in the ARM+FPGA controller while the obstacle data is transmitted to the NUC for processing; the STM32F767 services the interrupt with priority and enters the emergency left-front avoidance protection subroutine: based on the data communicated by the NUC, it performs an emergency avoidance maneuver yielding to the right. If no obstacle enters the operating range, the vehicle accelerates to the set normal speed according to the speed control pattern of Fig. 11.
If L4 and L3 detect an obstacle in the right-front path, an interrupt request is issued to the STM32F767 in the ARM+FPGA controller while the obstacle data is transmitted to the NUC for processing; the STM32F767 services the interrupt with priority and enters the emergency right-front avoidance protection subroutine: based on the data communicated by the NUC, it performs an emergency avoidance maneuver yielding to the left. If no obstacle enters the operating range, the vehicle accelerates to the set normal speed according to the speed control pattern of Fig. 11.
5) After the vehicle enters its route, single-line lidars L5 and L6 and microwave radar MR3 continuously monitor the environment behind it. When MR3, L5 or L6 determines that an obstacle behind is approaching the vehicle, an interrupt request is issued to the STM32F767 in the ARM+FPGA controller while the obstacle data is transmitted to the NUC for processing; the STM32F767 services the interrupt with priority, then enters the rear avoidance protection subroutine and raises an alarm. If no rear obstacle enters the protection range, the vehicle continues at the set normal speed.
6) After the vehicle enters its route, the front blind-zone ultrasonic sensors US1–US5 and the rear blind-zone ultrasonic sensors US6–US10 continuously monitor the blind zones. If US1–US5 or US6–US10 determines that a temporary obstacle is approaching a blind zone of the vehicle, an interrupt request is issued to the STM32F767 while the obstacle data is transmitted to the NUC for processing; the STM32F767 services the interrupt with priority, and the vehicle then enters the blind-zone avoidance protection subroutine and raises an alarm. If no obstacle enters the blind-zone protection range, the vehicle continues at the set normal speed.
7) Once the unmanned vehicle is on its route and its normal running speed meets the requirement, the single-line lidars L1~L6, ultrasonic sensors US1~US10, binocular CCD camera, and microwave radars MR1~MR3 collect peripheral signals in real time and feed them back to the NUC or the ARM+FPGA controller. The NUC first performs the lidar data fusion, the FPGA processes the binocular CCD image data, and the ARM processes the microwave-radar data and services the various protection interrupts; the three then communicate with one another, and the STM32F767 generates control signals for the brushless DC servo motors from the fused, decoded sensor data. By regulating the motion of the servo motors, the controller changes the vehicle's speed and heading, so the unmanned vehicle can readily follow the on-board input path.
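The closed loop in this step maps a fused nearest-obstacle distance to a speed setpoint for the servo motors. One simple mapping consistent with the described behaviour (stop when close, normal speed when clear) is a linear ramp; all thresholds and the function itself are assumptions for illustration, not the disclosed control law:

```python
def speed_command(fused_min_distance, v_normal=5.0, d_stop=1.0, d_slow=5.0):
    """Map the fused nearest-obstacle distance (m) to a speed setpoint
    (m/s): full stop inside d_stop, normal speed beyond d_slow, and a
    linear ramp in between."""
    if fused_min_distance <= d_stop:
        return 0.0
    if fused_min_distance >= d_slow:
        return v_normal
    return v_normal * (fused_min_distance - d_stop) / (d_slow - d_stop)
```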
8) While the unmanned vehicle runs normally on its route, the binocular CCD camera reads, in real time, the various ground navigation signs in key marked areas on both sides of the direction of travel and transmits the captured images directly to the FPGA. The FPGA decodes and processes them and passes the results to the ARM; the STM32F767 then uses the decoded image data as one of the navigation cues for tasks such as fast forward travel, stopping, starting, and turning of the high-speed unmanned vehicle, and performs a secondary attitude adjustment.
9) Because the unmanned vehicle in most cases does not operate in a one-stop mode and serves many destinations, the present invention adds ground signs at the station positions to realize the station function. When the vehicle is about to reach a station, the ARM+FPGA controller reads the station sign through the binocular CCD camera, and each read automatically increments the station count. To realize the vehicle's automatic circulation function, the count is automatically cleared after the last station and restarted from station 1.
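The station tally described above is a counter that wraps after the final stop. A minimal sketch (the class and its interface are illustrative; the number of stations is a parameter, not a disclosed value):

```python
class StationCounter:
    """Increments on each station-sign read; wraps back to station 1
    after the last station, per the automatic circulation function."""

    def __init__(self, n_stations):
        self.n = n_stations
        self.current = 0  # 0 = not yet departed from the depot

    def sign_read(self):
        # Clear and restart from station 1 once the last station is passed.
        self.current = 1 if self.current >= self.n else self.current + 1
        return self.current
```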
10) When the unmanned vehicle enters a stop, the ARM controller generates and stores an arrival record and sends it to the control master station through the wireless device, which facilitates the master station's tracking of the vehicle's position and its scheduling of the vehicle.
11) To meet the practical functional needs of unmanned vehicles in special settings such as scenic areas, the present invention adds a stop-selection function. At the start of operation, the control master station can freely set the stops the vehicle must visit, and the vehicle then completes this assignment independently using its own sensors. If an emergency during operation requires the master station to change the running path or the stops, the master station communicates with the vehicle's ARM+FPGA+NUC tri-core controller through the wireless device and wirelessly sends the changed travel information to the on-board computer NUC; the NUC receives and automatically updates the path and stop information and communicates with the ARM+FPGA controller, whose drives then move the vehicle to complete the task according to the new requirements.
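The wireless stop-list update can be sketched as a message handler on the NUC side. The message format (`type`, `stops` fields) is an assumption made here for illustration; the patent does not specify a protocol:

```python
def apply_route_update(current_stops, update_msg):
    """Replace the on-board stop list when a route-update message
    arrives from the master station; otherwise keep the current list.
    update_msg: dict with hypothetical 'type' and 'stops' keys."""
    if update_msg.get('type') == 'route_update':
        return list(update_msg['stops'])
    return current_stops
```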
12) When the unmanned vehicle travels along its fixed path, the various audible and visual alarm systems on board operate, readily alerting nearby pedestrians to the vehicle's presence. If the vehicle loses communication with the master station, the ARM+FPGA controller issues an automatic stop signal and locks the motion servo motors in place, making collisions with other unmanned vehicles unlikely. Since the master station can then no longer receive the vehicle's transmissions, it quickly locates the vehicle from the information of its last recorded stop and resolves the fault.
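The communication-loss behaviour is a watchdog: if no master-station message arrives within a timeout, the motors are locked. A minimal sketch, with the 2-second timeout assumed for illustration:

```python
def watchdog_action(last_heartbeat_s, now_s, timeout_s=2.0):
    """Return 'lock_motors' when the master-station link has been
    silent longer than the assumed timeout, else 'run'."""
    return 'lock_motors' if (now_s - last_heartbeat_s) > timeout_s else 'run'
```

In practice this check would run periodically on the ARM+FPGA controller, fed by the timestamp of the most recent wireless message.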
The embodiments described are preferred embodiments of the present invention, but the present invention is not limited to them; any obvious improvement, substitution, or variation that a person skilled in the art can make without departing from the essence of the present invention falls within the protection scope of the present invention.
Claims (10)
- A multi-sensor fusion unmanned vehicle detection and obstacle avoidance method, characterized in that: the on-board computer NUC retrieves the vehicle's driving path and navigation map information through the control master station; when the ARM+FPGA controller determines from the front blind-spot ultrasonic sensor group that the motion blind spot is free of obstacles, the unmanned vehicle begins to accelerate automatically; at the instant the vehicle starts, it enters a working-condition selection mode according to the weather: in good weather, the microwave radars, single-line lidar sensors, and CCD cameras all operate; the CCD cameras and microwave radars transmit long-range obstacle information to the ARM+FPGA controller, and the single-line lidar sensors transmit short-range obstacle information to the NUC; after processing, this obstacle information serves as the feedback distance signal for the vehicle's autonomous navigation; in bad weather, only the microwave radars operate, and the vehicle navigates autonomously at reduced speed according to the feedback distance signal; after the vehicle enters its route, in good weather the ARM+FPGA adjusts the vehicle's pose from the received road marker points before normal driving, drives normally according to the on-board map information, approaches long-range obstacles along the direction of travel, and performs obstacle avoidance; in bad weather, the vehicle performs medium- and long-range obstacle avoidance.
- The multi-sensor fusion unmanned vehicle detection and obstacle avoidance method according to claim 1, characterized in that, when the unmanned vehicle approaches a long-range obstacle, the microwave radars located at the front and top of the vehicle cooperate with the single-line lidars to avoid obstacles, specifically: microwave radar MR1 and single-line lidar L1 cooperate to detect undulations in the road ahead; MR1 first performs medium- and long-range detection, and L1 then makes a precise confirmation, determining the depth and width of the undulation; microwave radars MR1, MR2 and single-line lidars L1, L3 cooperate to determine whether an obstacle lies directly ahead: MR2 first performs medium- and long-range detection; after a suspected obstacle is found, MR1 makes a secondary confirmation, and once the suspected obstacle is roughly determined, L1 and L3 confirm its precise position; microwave radars MR1, MR2 and single-line lidars L3, L2 cooperate to determine whether an obstacle lies to the front left: MR2 first performs medium- and long-range detection; after a suspected obstacle is found, MR1 makes a secondary confirmation, and once the suspected obstacle is roughly determined, L3 and L2 confirm its precise position; microwave radars MR1, MR2 and single-line lidars L3, L4 cooperate to determine whether an obstacle lies to the front right: MR2 first performs medium- and long-range detection; after a suspected obstacle is found, MR1 makes a secondary confirmation, and once the suspected obstacle is roughly determined, L3 and L4 confirm its precise position.
- The multi-sensor fusion unmanned vehicle detection and obstacle avoidance method according to claim 2, characterized in that, when single-line lidar L3 detects obstacle information, the system enters a single-line lidar precise positioning and navigation mode: if single-line lidars L3 and L1 detect undulations in the road ahead and the height and width of the undulations exceed what the vehicle can cross, the vehicle performs forward avoidance protection; if the height and width are within what the vehicle can cross, it continues at the set normal speed; if single-line lidars L1 and L3 detect an obstacle in the forward motion path, the vehicle performs an emergency avoidance manoeuvre to the left or right; if there is no obstacle, it accelerates to the set normal speed; if single-line lidars L2 and L3 detect an obstacle in the left-front motion path, the vehicle performs an emergency avoidance manoeuvre to the right; if there is no obstacle, it accelerates to the set normal speed; if single-line lidars L4 and L3 detect an obstacle in the right-front motion path, the vehicle performs an emergency avoidance manoeuvre to the left; if there is no obstacle, it accelerates to the set normal speed.
- The multi-sensor fusion unmanned vehicle detection and obstacle avoidance method according to claim 1, characterized in that, after the unmanned vehicle enters its route, the single-line lidars and microwave radar at the rear of the vehicle continuously monitor the environment behind it; if an obstacle behind is determined to be approaching the vehicle, rear obstacle-avoidance protection is performed.
- The multi-sensor fusion unmanned vehicle detection and obstacle avoidance method according to claim 1, characterized in that, after the unmanned vehicle enters its route, the front blind-spot ultrasonic sensor group and the rear blind-spot ultrasonic sensor group continuously monitor the blind-spot environment; if a temporary obstacle is determined to be approaching a blind spot of the vehicle, blind-spot obstacle-avoidance protection is performed.
- The multi-sensor fusion unmanned vehicle detection and obstacle avoidance method according to claim 1, characterized in that, while the unmanned vehicle runs normally, the CCD cameras read the various navigation signs on both sides of the direction of travel; after processing by the ARM+FPGA controller, these serve as navigation cues for high-speed operation of the unmanned vehicle.
- The multi-sensor fusion unmanned vehicle detection and obstacle avoidance method according to claim 1, characterized in that, while the unmanned vehicle runs normally, the CCD cameras read the signs of the stations the vehicle reaches, realizing automatic travel, position tracking, and scheduling of the unmanned vehicle.
- An obstacle avoidance system for the multi-sensor fusion unmanned vehicle detection and obstacle avoidance method according to any one of claims 1-7, characterized in that it comprises a plurality of single-line lidars, dual CCD cameras, microwave radars, a front blind-spot ultrasonic sensor group, a rear blind-spot ultrasonic sensor group, and a tri-core controller based on ARM+FPGA+NUC; the plurality of single-line lidars communicate with the NUC; the dual CCD cameras, microwave radars, front blind-spot ultrasonic sensor group, and rear blind-spot ultrasonic sensor group all communicate with the ARM+FPGA controller; and the NUC communicates with the ARM+FPGA controller.
- The multi-sensor fusion unmanned vehicle detection and obstacle avoidance system according to claim 8, characterized in that the plurality of single-line lidars include a single-line lidar L1 mounted on the roof of the unmanned vehicle at an angle α to the horizontal plane, where α is 5-15°.
- The multi-sensor fusion unmanned vehicle detection and obstacle avoidance system according to claim 9, characterized in that the single-line lidars further include a single-line lidar group arranged at the front of the unmanned vehicle and a single-line lidar group arranged at the rear of the unmanned vehicle, with a microwave radar arranged between the lidars of each single-line lidar group.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011118655.0A CN112180941A (en) | 2020-10-19 | 2020-10-19 | Multi-sensor fusion unmanned vehicle detection obstacle avoidance system and obstacle avoidance method |
CN202011118655.0 | 2020-10-19 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022082843A1 true WO2022082843A1 (en) | 2022-04-28 |
Family
ID=73950945
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/124842 WO2022082843A1 (en) | 2020-10-19 | 2020-10-29 | Multi-sensor integrated unmanned vehicle detection and obstacle avoidance system and obstacle avoidance method |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112180941A (en) |
WO (1) | WO2022082843A1 (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114755693A (en) * | 2022-06-15 | 2022-07-15 | 天津大学四川创新研究院 | Infrastructure facility measuring system and method based on multi-rotor unmanned aerial vehicle |
CN115060509A (en) * | 2022-05-30 | 2022-09-16 | 一汽奔腾轿车有限公司 | Emergency avoidance function test system and method based on laser radar in meeting scene |
CN115361421A (en) * | 2022-08-23 | 2022-11-18 | 河北汉光重工有限责任公司 | FPGA-based obstacle identification and target vehicle positioning system |
CN115384406A (en) * | 2022-08-15 | 2022-11-25 | 江苏上钺汽车部件有限公司 | Large vehicle blind area comprehensive detection alarm protection system and method |
CN115631656A (en) * | 2022-12-20 | 2023-01-20 | 北京卓翼智能科技有限公司 | Control system of unmanned vehicle and unmanned vehicle thereof |
CN115792911A (en) * | 2022-12-15 | 2023-03-14 | 淮阴师范学院 | Obstacle monitoring and identifying method based on millimeter wave radar |
CN116039620A (en) * | 2022-12-05 | 2023-05-02 | 北京斯年智驾科技有限公司 | Safe redundant processing system based on automatic driving perception |
CN116080423A (en) * | 2023-04-03 | 2023-05-09 | 电子科技大学 | Cluster unmanned vehicle energy supply system based on ROS and execution method thereof |
CN116587781A (en) * | 2023-05-16 | 2023-08-15 | 广州铁诚工程质量检测有限公司 | Unmanned car for tunnel detection |
CN117111058A (en) * | 2023-10-24 | 2023-11-24 | 青岛慧拓智能机器有限公司 | Unmanned perception system and method for mining truck |
CN117140536A (en) * | 2023-10-30 | 2023-12-01 | 北京航空航天大学 | Robot control method and device and robot |
CN117291090A (en) * | 2023-08-25 | 2023-12-26 | 江苏国芯科技有限公司 | Multi-sensor fusion design system for 32-bit singlechip |
CN117539268A (en) * | 2024-01-09 | 2024-02-09 | 吉林省吉邦自动化科技有限公司 | VGA autonomous obstacle avoidance system based on fusion of machine vision and laser radar |
CN118379881A (en) * | 2024-06-21 | 2024-07-23 | 华睿交通科技股份有限公司 | Highway traffic safety early warning system based on vehicle-road cooperation |
CN118549448A (en) * | 2024-07-19 | 2024-08-27 | 三峡金沙江川云水电开发有限公司 | Rapid detection device and method for gate pier side surface shallow disease structure |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112987014A (en) * | 2021-02-20 | 2021-06-18 | 纳瓦电子(上海)有限公司 | Vehicle and detection system and detection method thereof |
CN113190008B (en) * | 2021-05-08 | 2024-08-06 | 珠海一微半导体股份有限公司 | Method for prolonging service life of laser radar of mobile robot, chip and robot |
CN113341417B (en) * | 2021-06-09 | 2024-04-19 | 深圳市九洲电器有限公司 | Road surface obstacle detection method based on detection radar, vehicle and storage medium |
CN113341824A (en) * | 2021-06-17 | 2021-09-03 | 鄂尔多斯市普渡科技有限公司 | Open type automatic driving obstacle avoidance control system and control method |
CN113376642A (en) * | 2021-07-09 | 2021-09-10 | 王�华 | Laser-based airport runway foreign matter identification device and identification method |
EP4427569A1 (en) * | 2021-11-01 | 2024-09-11 | Positec Power Tools (Suzhou) Co., Ltd. | Automatic lawn mower |
CN114281075A (en) * | 2021-11-19 | 2022-04-05 | 岚图汽车科技有限公司 | Emergency obstacle avoidance system based on service-oriented, control method and equipment thereof |
CN114489075B (en) * | 2022-01-26 | 2024-07-23 | 苏州挚途科技有限公司 | Unmanned clearance vehicle control method and device and electronic equipment |
CN114782626B (en) * | 2022-04-14 | 2024-06-07 | 国网河南省电力公司电力科学研究院 | Transformer substation scene map building and positioning optimization method based on laser and vision fusion |
CN115248597B (en) * | 2022-08-22 | 2024-09-06 | 威海市润通橡胶有限公司 | AGV intelligent parking method serving laser cleaning technology |
CN116360466B (en) * | 2023-05-31 | 2023-09-15 | 天津博诺智创机器人技术有限公司 | Robot operation obstacle avoidance system based on depth camera |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070181810A1 (en) * | 2006-02-06 | 2007-08-09 | Tan Michael R T | Vertical cavity surface emitting laser (VCSEL) array laser scanner |
CN107957583A (en) * | 2017-11-29 | 2018-04-24 | 江苏若博机器人科技有限公司 | A kind of round-the-clock quick unmanned vehicle detection obstacle avoidance system of Multi-sensor Fusion |
CN107977004A (en) * | 2017-11-29 | 2018-05-01 | 江苏若博机器人科技有限公司 | A kind of round-the-clock high speed unmanned vehicle detection obstacle avoidance system of Multi-sensor Fusion |
CN108021133A (en) * | 2017-11-29 | 2018-05-11 | 江苏若博机器人科技有限公司 | A kind of Multi-sensor Fusion high speed unmanned vehicle detects obstacle avoidance system |
CN108037757A (en) * | 2017-11-29 | 2018-05-15 | 江苏若博机器人科技有限公司 | A kind of round-the-clock middling speed unmanned vehicle detection obstacle avoidance system of Multi-sensor Fusion |
CN108227720A (en) * | 2018-03-19 | 2018-06-29 | 徐州艾奇机器人科技有限公司 | A kind of round-the-clock unmanned cruiser system of four-wheel drive high speed |
CN108268045A (en) * | 2018-03-19 | 2018-07-10 | 徐州艾奇机器人科技有限公司 | A kind of six wheel drives quickly unmanned cruiser system and method for work |
US20180257560A1 (en) * | 2017-03-10 | 2018-09-13 | The Hi-Tech Robotic Systemz Ltd. | Method and system for vehicle status based advanced driver assistance |
- 2020-10-19: CN application CN202011118655.0A filed (published as CN112180941A; status: active, pending)
- 2020-10-29: WO application PCT/CN2020/124842 filed (published as WO2022082843A1; status: active, application filing)
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115060509A (en) * | 2022-05-30 | 2022-09-16 | 一汽奔腾轿车有限公司 | Emergency avoidance function test system and method based on laser radar in meeting scene |
CN114755693A (en) * | 2022-06-15 | 2022-07-15 | 天津大学四川创新研究院 | Infrastructure facility measuring system and method based on multi-rotor unmanned aerial vehicle |
CN114755693B (en) * | 2022-06-15 | 2022-09-16 | 天津大学四川创新研究院 | Infrastructure facility measuring system and method based on multi-rotor unmanned aerial vehicle |
CN115384406A (en) * | 2022-08-15 | 2022-11-25 | 江苏上钺汽车部件有限公司 | Large vehicle blind area comprehensive detection alarm protection system and method |
CN115384406B (en) * | 2022-08-15 | 2023-10-24 | 江苏上钺汽车部件有限公司 | Comprehensive detection alarm protection system and method for blind area of large-sized vehicle |
CN115361421A (en) * | 2022-08-23 | 2022-11-18 | 河北汉光重工有限责任公司 | FPGA-based obstacle identification and target vehicle positioning system |
CN116039620B (en) * | 2022-12-05 | 2024-04-19 | 北京斯年智驾科技有限公司 | Safe redundant processing system based on automatic driving perception |
CN116039620A (en) * | 2022-12-05 | 2023-05-02 | 北京斯年智驾科技有限公司 | Safe redundant processing system based on automatic driving perception |
CN115792911A (en) * | 2022-12-15 | 2023-03-14 | 淮阴师范学院 | Obstacle monitoring and identifying method based on millimeter wave radar |
CN115792911B (en) * | 2022-12-15 | 2024-03-08 | 淮阴师范学院 | Obstacle monitoring and identifying method based on millimeter wave radar |
CN115631656A (en) * | 2022-12-20 | 2023-01-20 | 北京卓翼智能科技有限公司 | Control system of unmanned vehicle and unmanned vehicle thereof |
CN116080423B (en) * | 2023-04-03 | 2023-06-27 | 电子科技大学 | Cluster unmanned vehicle energy supply system based on ROS and execution method thereof |
CN116080423A (en) * | 2023-04-03 | 2023-05-09 | 电子科技大学 | Cluster unmanned vehicle energy supply system based on ROS and execution method thereof |
CN116587781A (en) * | 2023-05-16 | 2023-08-15 | 广州铁诚工程质量检测有限公司 | Unmanned car for tunnel detection |
CN117291090A (en) * | 2023-08-25 | 2023-12-26 | 江苏国芯科技有限公司 | Multi-sensor fusion design system for 32-bit singlechip |
CN117291090B (en) * | 2023-08-25 | 2024-05-10 | 江苏国芯科技有限公司 | Multi-sensor fusion design system for 32-bit singlechip |
CN117111058A (en) * | 2023-10-24 | 2023-11-24 | 青岛慧拓智能机器有限公司 | Unmanned perception system and method for mining truck |
CN117140536B (en) * | 2023-10-30 | 2024-01-09 | 北京航空航天大学 | Robot control method and device and robot |
CN117140536A (en) * | 2023-10-30 | 2023-12-01 | 北京航空航天大学 | Robot control method and device and robot |
CN117539268A (en) * | 2024-01-09 | 2024-02-09 | 吉林省吉邦自动化科技有限公司 | VGA autonomous obstacle avoidance system based on fusion of machine vision and laser radar |
CN118379881A (en) * | 2024-06-21 | 2024-07-23 | 华睿交通科技股份有限公司 | Highway traffic safety early warning system based on vehicle-road cooperation |
CN118549448A (en) * | 2024-07-19 | 2024-08-27 | 三峡金沙江川云水电开发有限公司 | Rapid detection device and method for gate pier side surface shallow disease structure |
Also Published As
Publication number | Publication date |
---|---|
CN112180941A (en) | 2021-01-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022082843A1 (en) | Multi-sensor integrated unmanned vehicle detection and obstacle avoidance system and obstacle avoidance method | |
CN107957583A (en) | A kind of round-the-clock quick unmanned vehicle detection obstacle avoidance system of Multi-sensor Fusion | |
CN108021133A (en) | A kind of Multi-sensor Fusion high speed unmanned vehicle detects obstacle avoidance system | |
CN211765500U (en) | Intelligent driving environment sensing system used in closed scene and automobile | |
CN206532138U (en) | A kind of unmanned vehicle automatic Pilot intelligence system | |
CN108177651A (en) | A kind of quick unmanned vehicle detection obstacle avoidance system of Multi-sensor Fusion | |
CN107977004A (en) | A kind of round-the-clock high speed unmanned vehicle detection obstacle avoidance system of Multi-sensor Fusion | |
CN108037756A (en) | A kind of Multi-sensor Fusion middling speed unmanned vehicle detects obstacle avoidance system | |
CN111422196A (en) | Intelligent networking automatic driving system and method suitable for mini bus | |
WO2020228393A1 (en) | Deep learning type intelligent driving environment perception system based on internet of things | |
CN108928343A (en) | A kind of panorama fusion automated parking system and method | |
US20200209869A1 (en) | Information processing device, autonomous mobile device, method, and program | |
CN108189834A (en) | A kind of Multi-sensor Fusion low speed unmanned vehicle detects obstacle avoidance system | |
US11106219B2 (en) | Vehicle control device, vehicle control method, and storage medium | |
CN102490780A (en) | Electric power steering system, steering control method and automobile | |
US11299176B2 (en) | Vehicle control device | |
WO2021243696A1 (en) | Vehicle navigation positioning method and apparatus, and base station, system and readable storage medium | |
CN108061903A (en) | A kind of round-the-clock low speed unmanned vehicle detection obstacle avoidance system of Multi-sensor Fusion | |
WO2024146195A1 (en) | Automatic operation system for electronic guided rubber-tyred tram | |
CN212322114U (en) | Environment sensing and road environment crack detection system for automatic driving vehicle | |
CN110609558A (en) | Unmanned fleet control system and control method thereof | |
CN113282085A (en) | Robot following system and method based on UWB | |
WO2023155283A1 (en) | Automatic driving information auxiliary system based on intelligent lamp pole | |
CN110989618A (en) | Cooperative carrying control system and method for swarm type carrying vehicle | |
CN207657812U (en) | A kind of Multi-sensor Fusion low speed unmanned vehicle detection obstacle avoidance system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20958423 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20958423 Country of ref document: EP Kind code of ref document: A1 |