CN113408353A - Real-time obstacle avoidance system based on RGB-D
- Publication number: CN113408353A
- Application number: CN202110542757.3A
- Authority: CN (China)
- Prior art keywords: obstacle, module, depth, obstacle avoidance, RGB
- Legal status: Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Abstract
The invention discloses a real-time obstacle avoidance system based on RGB-D. In the system, an image acquisition module acquires the original depth information and RGB information from the camera; a calibration module calibrates the ground background and the camera installation angle to obtain ground information and camera installation angle information; a common obstacle detection module processes the information output by the image acquisition module to extract obstacles and outputs their position information; a verification module performs a secondary judgment on whether a detected obstacle is a false detection; a special obstacle detection module detects pedestrians and other intelligent mobile robots and outputs their specific position information; an obstacle avoidance decision module processes the information output by the common obstacle detection module and the special obstacle detection module to formulate a corresponding obstacle avoidance strategy; and a movement control module executes the obstacle avoidance strategy output by the obstacle avoidance decision module to control the movement of the robot. Through the cooperation of all modules, a reliable, fast, high-precision and low-false-detection real-time obstacle avoidance system is realized.
Description
Technical Field
The invention relates to the technical field of image recognition and mobile robots, in particular to a real-time obstacle avoidance system based on RGB-D.
Background
At present, intelligent unmanned transfer robots are developing rapidly, and mobile robot technology is closely tied to industrial production. When an intelligent transfer robot operates in a complex and changeable factory environment, accurately detecting obstacles and effectively avoiding them is one of the basic capabilities a mobile robot requires. Critical obstacles such as people demand particular attention in order to avoid accidents. In the prior art, obstacle avoidance usually depends on sensors such as lidar and ultrasonic radar. However, lidar is expensive, and the range of obstacle sizes an ultrasonic radar can detect is very limited.
Compared with sensors such as lidar and ultrasonic radar, a vision camera is inexpensive, can obtain RGB information and depth information for the whole view plane in real time, and offers a wide detection range and a large information capacity, so it is widely applied in visual obstacle avoidance. At present, mobile robots commonly adopt binocular cameras, RGB-D cameras and TOF cameras for visual obstacle avoidance. Compared with a binocular camera, RGB-D and TOF cameras are only slightly affected by object color and produce high-resolution depth maps, so the RGB-D camera is widely used for visual obstacle detection. However, visual image processing is known to demand substantial computing power, and current obstacle avoidance methods for mobile robots with RGB-D cameras suffer from insufficient processing speed and neglect obstacles in the space above the ground; visual obstacle detection is also easily affected by the external environment, has a high false detection rate, and cannot apply different obstacle avoidance strategies to different obstacles. Therefore, beyond the existing visual obstacle avoidance methods, a real-time obstacle avoidance system based on an RGB-D camera needs to be provided.
Disclosure of Invention
The invention provides a real-time obstacle avoidance system based on RGB-D, addressing the problems that existing mobile robots have a high false detection rate, neglect spatial obstacles, are easily influenced by the external environment, and lack real-time performance. The system can make different obstacle avoidance decisions for obstacles of different priorities, can set different ROI (region of interest) areas for different terrains to reduce the amount of calculation, and performs a secondary judgment on top of obstacle detection, which improves detection precision and reduces the false detection rate; through parallel processing with multithreading, all system modules achieve good real-time performance.
The purpose of the invention is realized by the following technical scheme: the invention provides a real-time obstacle avoidance system based on RGB-D, which comprises:
the system comprises an image acquisition module, a calibration module, a common obstacle detection module, a special obstacle detection module, a verification module, an obstacle avoidance decision module and a movement control module.
The image acquisition module is used for acquiring the original depth information and the RGB information of obstacles from the RGB-D camera and outputting the depth information and the RGB information respectively to the calibration module, the common obstacle detection module and the special obstacle detection module;
the calibration module is used for acquiring the depth information from the image acquisition module to calibrate the ground background and the camera installation angle so as to acquire the ground depth information and the camera installation angle information;
the common obstacle detection module is used for processing the depth information and the RGB information output by the image acquisition module, extracting obstacles, and performing a secondary judgment through the verification module to ensure that the obstacles are true obstacles rather than false detections, so that obstacles are finally detected accurately and their position information is output;
the special obstacle detection module is used for detecting pedestrians and other robots according to the depth information and the RGB information output by the image acquisition module and outputting obstacle position information;
the obstacle avoidance decision module is used for processing obstacle position information output by the common obstacle detection module and the special obstacle detection module to generate an obstacle avoidance strategy;
the movement control module is used for controlling the movement of the robot according to the obstacle avoidance strategy output by the obstacle avoidance decision module to realize the obstacle avoidance function.
Further, the RGB-D camera is installed at the top of the mobile transfer robot, and the camera's inclination angle ensures that no part of the robot body appears within the camera's field of view.
Furthermore, the real-time obstacle avoidance system adopts multithreading: the modules compute in parallel, and the detection frame rate is adjusted automatically according to the running speed of the mobile transfer robot.
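As a hedged illustration of this parallel arrangement (not code from the patent), the sketch below runs two detection modules in their own threads and throttles each pass from the robot's current speed; the speed-to-fps mapping, the 30 fps cap and all names are assumptions for the example.

```cpp
// Sketch only: detection modules in parallel threads, frame rate tied to speed.
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<double> g_robot_speed{0.5};  // m/s, written by the motion controller
std::atomic<bool>   g_running{true};

// Faster robot -> higher detection rate, clamped to [5, 30] fps (assumed).
int detection_fps() {
    double v = g_robot_speed.load();
    int fps = static_cast<int>(10 + 20 * v);
    return fps > 30 ? 30 : (fps < 5 ? 5 : fps);
}

// Generic module loop: run one detection pass, then sleep out the period.
void detection_loop(void (*detect_once)()) {
    while (g_running.load()) {
        auto t0 = std::chrono::steady_clock::now();
        detect_once();
        std::this_thread::sleep_until(
            t0 + std::chrono::milliseconds(1000 / detection_fps()));
    }
}

void common_obstacle_pass()  { /* depth-difference pipeline goes here */ }
void special_obstacle_pass() { /* YOLOv5 inference pipeline goes here */ }

int main() {
    std::thread common(detection_loop, common_obstacle_pass);
    std::thread special(detection_loop, special_obstacle_pass);
    std::this_thread::sleep_for(std::chrono::seconds(1));  // demo: run briefly
    g_running = false;
    common.join();
    special.join();
}
```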
Further, the calibration module acquires the ground depth information and the camera installation angle information through the following steps:
Step 1-1: the RGB-D camera faces a flat ground, and the camera depth information is acquired through the image acquisition module.
Step 1-2: start calibrating the camera installation angle: randomly select four pixel values in the middle of the depth map, and solve by coordinate conversion the Y values in the depth camera coordinate system corresponding to these pixel points.
Step 1-3: candidate camera mounting angles are tried starting from 0 degrees and incrementing by 1 degree each time, ending at 180 degrees.
Step 1-4: calculate the Y values of the four pixel points in the horizontal depth camera coordinate system using the candidate installation angle. If the absolute differences between the Y values of the four pixel points are within the threshold range, save the angle; otherwise return to step 1-3. The horizontal depth camera coordinate system is obtained by rotating the depth camera coordinate system around its origin until the Z axis is parallel to the ground.
Step 1-5: after the angle calibration is finished, start calibrating the ground background information.
Step 1-6: obtain a depth map and limit depth values exceeding 7000 by threshold filtering; by the hole filling method, pixels whose depth value is zero are filled with the upper depth limit.
Step 1-7: accumulate 600 processed depth frames and compute the mean depth value of each pixel point; for each pixel point store the corresponding Y value in the horizontal depth camera coordinate system, and for each row of pixels the maximum difference between its pixel points.
Further, the Y values of the four pixel points in the horizontal depth camera coordinate system in steps 1-2 and 1-4 are calculated as follows:

$$
\begin{bmatrix} X_h \\ Y_h \\ Z_h \end{bmatrix}
= R_x(\theta)\,K_d^{-1}\,Z_d\begin{bmatrix} u_d \\ v_d \\ 1 \end{bmatrix},
\qquad
K_d=\begin{bmatrix} f_x & 0 & C_x \\ 0 & f_y & C_y \\ 0 & 0 & 1 \end{bmatrix},
\qquad
R_x(\theta)=\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix}
$$

where $K_d$ is the depth camera intrinsic matrix that converts the camera coordinate system to the pixel coordinate system; $(f_x, f_y)$ represent the relationship between the camera coordinate system and the imaging plane, i.e. the scaling along the $u$ and $v$ axes; $(C_x, C_y)$ is the camera optical center; $\theta$ is the mounting tilt angle of the camera (right-hand coordinate system) and $R_x(\theta)$ the corresponding rotation about the camera X axis; $[X_h, Y_h, Z_h]^T$ are the three-dimensional coordinates of depth-image pixel $(u_d, v_d)$ in the horizontal depth camera coordinate system; and $Z_d$ is the depth value at pixel $(u_d, v_d)$.
Further, the common obstacle detection module comprises a preprocessing module, a contour extraction module and an obstacle coordinate output module;
the preprocessing module comprises the following steps:
Step 2-1: three consecutive depth maps are acquired from the image acquisition module.
Step 2-2: apply twofold downsampling and morphological dilation to the three acquired depth maps.
Step 2-3: within the set ROI area, difference each of the three depth maps against the calibrated ground background information; set points whose difference meets the threshold to 255 and points that do not to 0.
Step 2-4: take the binarized second depth map as binary map P1, and superpose the three binarized depth maps to obtain binary map P2.
Step 2-5: apply a morphological closing operation to the obtained binary maps P1 and P2.
The contour extraction module comprises the following steps:
Step 3-1: calculate the obstacle contours and convex hull information in binary maps P1 and P2 respectively.
Step 3-2: calculate the area and the center-pixel position of each convex hull in binary maps P1 and P2; different area thresholds are set according to the interval in which the convex hull's center pixel lies, and convex hulls below the threshold are filtered out.
Step 3-3: calculate the Y value of the convex hull's center pixel in the horizontal depth camera coordinate system and compare it with the Y value stored for the same pixel point in the ground information; keep the convex hull if its Y is smaller than the ground Y by at least a certain threshold, otherwise filter it out.
The obstacle coordinate output module includes the following steps:
Step 4-1: calculate the minimum bounding rectangle of each convex hull remaining after the filtering of step 3-2.
Step 4-2: remove overlapping rectangular frames and merge connected rectangular frames.
Step 4-3: when several rectangular frames remain, keep only the rectangular frame with the minimum mean depth.
Step 4-4: map the rectangular frames on the depth map into the RGB map.
The mapping formula is as follows:

$$
Z_d\begin{bmatrix} u_d \\ v_d \\ 1 \end{bmatrix}=K_d\begin{bmatrix} X_d \\ Y_d \\ Z_d \end{bmatrix},
\qquad
\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}=R\begin{bmatrix} X_d \\ Y_d \\ Z_d \end{bmatrix}+T,
\qquad
Z_c\begin{bmatrix} u_c \\ v_c \\ 1 \end{bmatrix}=K_c\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}
$$

where $Z_d$ is the depth value at pixel $(u_d, v_d)$; $[X_d, Y_d, Z_d]^T$ are the three-dimensional coordinates of depth-map pixel $(u_d, v_d)$ in the depth camera coordinate system; $[X_c, Y_c, Z_c]^T$ are the corresponding three-dimensional coordinates in the RGB camera coordinate system; $(u_c, v_c)$ are the resulting pixel coordinates in the RGB image; $R$ and $T$ are the rotation and translation matrices from the depth camera to the RGB camera; $K_d$ is the intrinsic matrix of the depth camera and $K_c$ that of the RGB camera.

Step 4-5: calculate the area of each rectangular frame, and send the RGB regions of the rectangular frames whose area meets the threshold condition to the verification module to verify whether they contain ground or an obstacle.
Step 4-6: if the verification module outputs 0, the object in the rectangular frame is judged to be a non-obstacle and processing returns to step 4-1; an output of 1 indicates that the object in the rectangular frame is an obstacle, and processing continues with step 4-7.
Step 4-7: calculate, for each pixel point in the obstacle's rectangular frame, its value on the X coordinate axis of the horizontal depth camera coordinate system; remove pixel points whose X values lie outside the threshold range; among the remaining pixel points find the nearest distance of the obstacle and output it to the obstacle avoidance decision module.
Further, the processing procedure of the verification module comprises:
Step 5-1: load the trained two-class Ghost classification model, build an inference engine with TensorRT, save the engine, and load and deploy the engine with C++.
Step 5-2: when image data is sent in, the engine performs inference: if the data is judged to be a non-obstacle the output is 0, otherwise it is judged to be an obstacle and the output is 1.
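A minimal C++ sketch of such a TensorRT deployment is given below, assuming TensorRT 8.x and CUDA are available; the engine file name `verify.engine`, the input shape 3x64x64 and the binding order are illustrative assumptions, since the patent does not specify how the classification model was exported.

```cpp
// Minimal sketch: load a serialized TensorRT engine and run the two-class
// verification (0 = non-obstacle, 1 = obstacle). File name, shapes and
// binding order are assumptions.
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <cstdio>
#include <fstream>
#include <vector>

class Logger : public nvinfer1::ILogger {
    void log(Severity s, const char* msg) noexcept override {
        if (s <= Severity::kWARNING) std::printf("[TRT] %s\n", msg);
    }
} gLogger;

int main() {
    // Step 5-1: read the engine serialized after the TensorRT build.
    std::ifstream f("verify.engine", std::ios::binary);          // assumed name
    std::vector<char> blob((std::istreambuf_iterator<char>(f)),
                            std::istreambuf_iterator<char>());
    auto* runtime = nvinfer1::createInferRuntime(gLogger);
    auto* engine  = runtime->deserializeCudaEngine(blob.data(), blob.size());
    auto* context = engine->createExecutionContext();

    // Step 5-2: copy one preprocessed ROI in, infer, read two class scores.
    const int inElems = 3 * 64 * 64, outElems = 2;               // assumed shapes
    std::vector<float> input(inElems, 0.f), scores(outElems);
    void* bindings[2];
    cudaMalloc(&bindings[0], inElems * sizeof(float));
    cudaMalloc(&bindings[1], outElems * sizeof(float));
    cudaMemcpy(bindings[0], input.data(), inElems * sizeof(float),
               cudaMemcpyHostToDevice);
    context->executeV2(bindings);
    cudaMemcpy(scores.data(), bindings[1], outElems * sizeof(float),
               cudaMemcpyDeviceToHost);
    int label = scores[1] > scores[0] ? 1 : 0;   // index 1 taken as "obstacle"
    std::printf("verification output: %d\n", label);
    cudaFree(bindings[0]);
    cudaFree(bindings[1]);
}
```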
Further, the special obstacle detection module detects obstacles through the following steps:
Step 6-1: load the trained YOLOv5 model (the recognition categories are pedestrians and mobile transfer robots), build an inference engine with TensorRT, and save the engine.
Step 6-2: load and deploy the engine, receive the RGB images from the image acquisition module, and run inference.
Step 6-3: obtain the top-left pixel coordinates and the length and width of each target rectangular frame in the RGB image produced by inference.
Step 6-4: convert the RGB pixels into depth-image pixels, and obtain the three-dimensional coordinates in the depth camera coordinate system of the pixel values inside the rectangular frame.
Step 6-5: remove pixel points whose X-axis values in the horizontal depth camera coordinate system do not meet the threshold; by filtering the remaining pixel points, obtain the three-dimensional coordinates of the obstacle point in the depth camera coordinate system and output them to the obstacle avoidance decision module.
Furthermore, the obstacle avoidance decision module sets different obstacle avoidance ranges for the common obstacle detection module and the special obstacle detection module, and divides each range into three obstacle-avoidance-level areas: in the first-level area the robot decelerates by 30%, in the second-level area it decelerates by 60%, and in the third-level area it stops directly.
Further, the obstacle avoidance decision module processes the obstacle information to make an obstacle avoidance decision, as follows:
Step 7-1: receive the obstacle coordinate information output by the common obstacle detection module and the special obstacle detection module.
Step 7-2: if an obstacle appears in the third-level obstacle avoidance area, the obstacle information output by both the common and the special obstacle detection modules is processed; if an obstacle appears in the first-level or second-level area, the obstacle information output by the special obstacle detection module is processed preferentially.
Step 7-3: apply median filtering and amplitude-limiting filtering to the obstacle distance information.
Step 7-4: judge which of the three obstacle avoidance areas the obstacle lies in from the finally processed distance information, select the corresponding obstacle avoidance strategy for that area, and output the strategy to the movement control module, which controls the mobile transfer robot to execute it, finally realizing the obstacle avoidance function.
The invention has the following beneficial effects: it provides a reliable, fast, high-precision and low-false-detection real-time obstacle avoidance system. The system can make different obstacle avoidance decisions for obstacles of different priorities, can set different ROI areas for different terrains to reduce the amount of calculation, and performs a secondary judgment on top of obstacle detection, improving detection precision and reducing the false detection rate; through multithreaded parallel processing, all system modules achieve good real-time performance.
Drawings
Fig. 1 is a general flow chart of the implementation of the present invention.
Fig. 2 is a schematic view of the intelligent mobile transfer robot, the camera mounting and the camera coordinate system.
Fig. 3 is a flow chart of the preprocessing of the common obstacle detection module.
Fig. 4 is a flow chart of the obstacle distance output of the common obstacle detection module.
Fig. 5 is a flow chart of the special obstacle detection module.
Fig. 6 is a flow chart of an obstacle avoidance decision module.
Detailed Description
The technical solutions of the present invention are further described in detail below with reference to the accompanying drawings, but the scope of the present invention is not limited to the following.
The invention provides a real-time obstacle avoidance system based on RGB-D. The system comprises an image acquisition module, a calibration module, a common obstacle detection module, a verification module, a special obstacle detection module, an obstacle avoidance decision module and a movement control module as shown in figure 1.
As shown in Fig. 2, in actual engineering the RGB-D camera sensor is mounted on top of the intelligent transfer robot with its mounting angle inclined downwards, such that no part of the vehicle body is visible within the camera's field of view.
The calibration module is used for acquiring the depth information obtained by the image acquisition module, calibrating the ground background and calibrating the camera installation angle so as to acquire the ground depth information and the camera installation angle information;
the common obstacle detection module processes the depth information and the RGB information output by the image acquisition module to extract obstacles, which then undergo a secondary judgment in the verification module, so that obstacles are finally detected accurately and their position information is output;
the verification module performs a secondary judgment on the obstacles detected by the common obstacle detection module to ensure that they are true obstacles rather than false detections;
the special obstacle detection module detects pedestrians and other transfer robots and outputs their specific position information;
the obstacle avoidance decision module processes the obstacle position information output by the common obstacle detection module and the special obstacle detection module and instructs the mobile transfer robot to execute the obstacle avoidance operation, realizing the obstacle avoidance function;
the movement control module processes the obstacle avoidance strategy output by the obstacle avoidance decision module to control the movement of the robot and realize the obstacle avoidance function.
The system first runs the calibration module to calibrate the ground background information and the installation angle and stores the data in the industrial personal computer of the intelligent transfer robot. In actual engineering applications, an optical filter is attached to the camera to filter visible light, ensuring that no light rays emitted by light sources in the operating site shine directly into the camera.
The calibration module acquires the ground depth information and the camera installation angle information through the following steps:
Step 1-1: the camera faces a flat ground, and the camera depth information is acquired through the image acquisition module.
Step 1-2: start calibrating the camera installation angle: randomly select four pixel values in the middle of the depth map, and solve by coordinate conversion the Y values of the spatial coordinates in the depth camera coordinate system corresponding to these pixel points.
Step 1-3: candidate camera mounting angles are tried starting from 0 degrees and incrementing by 1 degree each time, ending at 180 degrees.
Step 1-4: calculate the Y values of the four pixel points in the horizontal depth camera coordinate system using this angle.
Step 1-5: if the absolute differences between the Y values of the four pixel points are within the threshold range, save the angle; otherwise return to step 1-3. The horizontal depth camera coordinate system is obtained by rotating the depth camera coordinate system around its origin until the Z axis is parallel to the ground.
Step 1-6: after the angle calibration is finished, start calibrating the ground background information.
Step 1-7: obtain a depth map and limit depth values exceeding 7000 by threshold filtering; by the hole filling method, pixels whose depth value is zero are filled with the upper depth limit. In a practical implementation, depth values exceeding 7000 can simply be changed to 7000.
Step 1-8: accumulate 600 processed depth frames and compute the mean depth value of each pixel point; for each pixel point store the corresponding Y value in the horizontal depth camera coordinate system, and for each row of pixels the maximum difference between its pixel points.
Specifically, the Y value of a pixel point in the horizontal depth camera coordinate system in step 1-4 is calculated as

$$
\begin{bmatrix} X_h \\ Y_h \\ Z_h \end{bmatrix}
= R_x(\theta)\,K_d^{-1}\,Z_d\begin{bmatrix} u_d \\ v_d \\ 1 \end{bmatrix},
\qquad
K_d=\begin{bmatrix} f_x & 0 & C_x \\ 0 & f_y & C_y \\ 0 & 0 & 1 \end{bmatrix},
\qquad
R_x(\theta)=\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix}
$$

where $K_d$ is the depth camera intrinsic matrix that converts the camera coordinate system to the pixel coordinate system; $(f_x, f_y)$ represent the relationship between the camera coordinate system and the imaging plane, i.e. the scaling along the $u$ and $v$ axes; $(C_x, C_y)$ is the camera optical center; $\theta$ is the mounting tilt angle of the camera (right-hand coordinate system) and $R_x(\theta)$ the corresponding rotation about the camera X axis; $[X_h, Y_h, Z_h]^T$ are the three-dimensional coordinates of depth-image pixel $(u_d, v_d)$ in the horizontal depth camera coordinate system; and $Z_d$ is the depth value at pixel $(u_d, v_d)$.
After calibration is finished, the subsequent common obstacle detection, special obstacle detection and obstacle avoidance decision processing can be carried out. Common obstacle detection and special obstacle detection run simultaneously without interfering with each other.
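Before turning to those modules, a compact sketch of the angle calibration above (steps 1-2 to 1-5) is shown below; it synthesizes four ground pixels from a known 25-degree tilt so the sweep has something to recover, and the intrinsics, camera height and tolerance are assumed values.

```cpp
// Sketch of the 1-degree angle sweep; all constants are illustrative.
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdio>

struct Pixel { double v, z; };  // image row and measured depth Z_d (mm)

// Y of a depth pixel in the horizontal frame: second row of R_x(theta)
// applied to the back-projected point (X plays no role in Y here).
double horizontal_y(const Pixel& p, double theta, double fy, double cy) {
    double Yd = (p.v - cy) * p.z / fy;
    return std::cos(theta) * Yd - std::sin(theta) * p.z;
}

int main() {
    const double kPi = 3.14159265358979;
    double fy = 525, cy = 239.5;                      // example intrinsics
    double trueTheta = 25 * kPi / 180, camY = -1000;  // synthetic ground setup
    double rows[4] = {230, 245, 260, 275};

    std::array<Pixel, 4> px{};
    for (int i = 0; i < 4; ++i) {   // depth a flat-ground point would have
        double a = std::cos(trueTheta) * (rows[i] - cy) / fy - std::sin(trueTheta);
        px[i] = {rows[i], camY / a};
    }

    for (int deg = 0; deg <= 180; ++deg) {            // step 1-3: 1-degree sweep
        double th = deg * kPi / 180.0;
        double ymin = 1e18, ymax = -1e18;
        for (const auto& p : px) {
            double y = horizontal_y(p, th, fy, cy);
            ymin = std::min(ymin, y);
            ymax = std::max(ymax, y);
        }
        if (ymax - ymin < 5.0) {                      // steps 1-4/1-5: Y agree
            std::printf("calibrated tilt: %d degrees\n", deg);
            break;
        }
    }
}
```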
The common obstacle detection module segments obstacles from the RGB-D camera's depth information in the preprocessing module, extracts the obstacle contours and convex hulls in the contour extraction module, and finally outputs the three-dimensional position information of the obstacles through the obstacle coordinate output module.
The common obstacle detection module comprises a preprocessing module, a contour extraction module and an obstacle coordinate output module.
Fig. 3 shows the flow of the preprocessing module and the contour extraction module. The depth map sent by the image acquisition module is received and downsampled by a factor of two to reduce the amount of data to process; the original depth map is then split according to different thresholds into two processed depth maps, one responsible for detecting low obstacles and the other for detecting higher obstacles, which reduces the false detection rate and improves detection efficiency.
The specific processing steps are as follows:
Step 2-1: three consecutive depth maps are acquired from the image acquisition module.
Step 2-2: apply twofold downsampling and morphological dilation to the three acquired depth maps.
Step 2-3: within the set ROI area, difference each of the three depth maps against the calibrated ground background information; set points whose difference meets the threshold to 255 and points that do not to 0.
Step 2-4: take the binarized second depth map as binary map P1, and superpose the three binarized depth maps to obtain binary map P2.
The principle of superposing the three depth maps is as follows:
a pixel in the superposed map is set to 255 only when that pixel is 255 in all three binarized maps; if fewer than three of them are 255, its value is set to 0.
Step 2-5: apply a morphological closing operation to the obtained binary maps P1 and P2.
Step 2-6: calculate the obstacle contours and convex hull information in each binary map.
Step 2-7: calculate the area and the center-pixel position of each convex hull in the two binary maps; different area thresholds are set according to the interval in which the convex hull's center pixel lies, and convex hulls below the threshold are filtered out.
Step 2-8: calculate the Y value of the convex hull's center pixel in the horizontal depth camera coordinate system and compare it with the Y value of the same pixel point in the stored ground information; keep the convex hull if its Y is smaller than the ground Y by at least a certain threshold, otherwise filter it out.
Once obstacle convex hull information is available, calculation of the three-dimensional coordinate information of obstacles within the obstacle avoidance space begins.
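The sketch below strings the preprocessing and contour-extraction steps together with OpenCV on synthetic frames; the difference threshold, ROI, structuring-element size and area thresholds are illustrative assumptions, not the patent's tuned values.

```cpp
// Sketch of steps 2-1 to 2-8 with OpenCV; constants are illustrative.
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

// Steps 2-2/2-3: downsample, dilate, difference against the calibrated
// ground background inside the ROI, binarize (255 where diff > diffThresh).
cv::Mat binarize(const cv::Mat& depth, const cv::Mat& ground,
                 const cv::Rect& roi, double diffThresh) {
    cv::Mat d, diff, bin;
    cv::pyrDown(depth, d);                       // twofold downsampling
    cv::dilate(d, d, cv::Mat());                 // morphological dilation
    cv::absdiff(d(roi), ground(roi), diff);
    diff.convertTo(diff, CV_32F);
    cv::threshold(diff, bin, diffThresh, 255, cv::THRESH_BINARY);
    bin.convertTo(bin, CV_8U);
    return bin;
}

int main() {
    // Step 2-1: three consecutive frames (synthetic: flat ground + one box).
    std::vector<cv::Mat> frames(3), bins(3);
    for (auto& f : frames) {
        f = cv::Mat(480, 640, CV_16UC1, cv::Scalar(3000));
        f(cv::Rect(280, 180, 60, 80)).setTo(1800);           // an obstacle
    }
    cv::Mat ground(240, 320, CV_16UC1, cv::Scalar(3000));    // calibrated mean
    cv::Rect roi(0, 60, 320, 180);                           // assumed ROI
    for (int i = 0; i < 3; ++i) bins[i] = binarize(frames[i], ground, roi, 80);

    // Step 2-4: P1 = binarized second frame; P2 = 255 only where all three are.
    cv::Mat P1 = bins[1].clone(), P2;
    cv::bitwise_and(bins[0], bins[1], P2);
    cv::bitwise_and(P2, bins[2], P2);

    // Step 2-5: morphological closing.
    cv::Mat k = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
    cv::morphologyEx(P1, P1, cv::MORPH_CLOSE, k);
    cv::morphologyEx(P2, P2, cv::MORPH_CLOSE, k);

    // Steps 2-6 to 2-8 on P1 (P2 is handled the same way).
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(P1, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    for (const auto& c : contours) {
        std::vector<cv::Point> hull;
        cv::convexHull(c, hull);
        cv::Moments m = cv::moments(hull);
        cv::Point center(int(m.m10 / (m.m00 + 1e-9)), int(m.m01 / (m.m00 + 1e-9)));
        double areaThresh = center.y < 90 ? 60 : 120;  // interval-dependent
        if (cv::contourArea(hull) < areaThresh) continue;
        // ...compare hull-center Y with the stored ground Y (step 2-8)...
        std::printf("kept hull centered at (%d, %d)\n", center.x, center.y);
    }
}
```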
Fig. 4 is a flow chart of the obstacle coordinate output module, which includes the following steps:
Step 3-1: load the trained two-class Ghost classification model, build an inference engine with TensorRT, and save the engine.
Step 3-2: load and deploy the engine in C++.
Step 3-3: calculate the minimum bounding rectangle of each convex hull remaining after filtering.
Step 3-4: remove overlapping rectangular frames and merge connected rectangular frames.
Step 3-5: when several rectangular frames remain, keep only the rectangular frame with the minimum mean depth.
Step 3-6: map the rectangular frames on the depth map into the RGB map.
The mapping formula is as follows:

$$
Z_d\begin{bmatrix} u_d \\ v_d \\ 1 \end{bmatrix}=K_d\begin{bmatrix} X_d \\ Y_d \\ Z_d \end{bmatrix},
\qquad
\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}=R\begin{bmatrix} X_d \\ Y_d \\ Z_d \end{bmatrix}+T,
\qquad
Z_c\begin{bmatrix} u_c \\ v_c \\ 1 \end{bmatrix}=K_c\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}
$$

where $Z_d$ is the depth value at pixel $(u_d, v_d)$; $[X_d, Y_d, Z_d]^T$ are the three-dimensional coordinates of depth-map pixel $(u_d, v_d)$ in the depth camera coordinate system; $[X_c, Y_c, Z_c]^T$ are the corresponding three-dimensional coordinates in the RGB camera coordinate system; $(u_c, v_c)$ are the resulting pixel coordinates in the RGB image; $R$ and $T$ are the rotation and translation matrices from the depth camera to the RGB camera; $K_d$ is the intrinsic matrix of the depth camera and $K_c$ that of the RGB camera.

Step 3-7: calculate the area of each rectangular frame and send the RGB regions of the rectangular frames whose area meets the condition to the verification module.
Step 3-8: when the image data enters the verification module, the engine deployed in step 3-2 performs inference: if the data is judged to be a non-obstacle the output is 0, otherwise it is judged to be an obstacle and the output is 1.
Step 3-9: if the verification module outputs 0, the object in the rectangular frame is judged to be a non-obstacle and processing returns to step 3-3; an output of 1 indicates that the object is an obstacle, and processing continues with step 3-10.
Step 3-10: calculate, for each pixel point in the obstacle's rectangular frame, its value on the X coordinate axis of the horizontal depth camera coordinate system; remove pixel points whose X values lie outside the threshold range; among the remaining pixel points find the nearest distance of the obstacle and output it to the obstacle avoidance decision module.
Special obstacle detection is performed simultaneously.
Fig. 5 is a flowchart of the special obstacle detection module.
The special obstacle detection module detects obstacles through the following steps:
Step 4-1: load the trained YOLOv5 model (the recognition categories are pedestrian and transfer robot), build an inference engine with TensorRT, and save the engine.
Step 4-2: load and deploy the engine, receive the RGB pictures from the image acquisition module, and run inference.
Step 4-3: obtain the top-left pixel coordinates and the length and width of each target rectangular frame in the RGB image produced by inference.
Step 4-4: convert the RGB pixels into depth-image pixels, and obtain the three-dimensional coordinates in the horizontal depth camera coordinate system of the pixel values inside the rectangular frame.
Step 4-5: remove pixel points whose X-axis values in the horizontal depth camera coordinate system do not meet the threshold; by filtering the remaining pixel points, obtain the three-dimensional coordinates of the obstacle's closest point in the horizontal depth camera coordinate system and output them to the obstacle avoidance decision module.
The obstacle avoidance decision module sets different obstacle avoidance ranges for the common obstacle detection module and the special obstacle detection module, and divides each range into three obstacle-avoidance-level areas: in the first-level area the robot decelerates by 30%, in the second-level area it decelerates by 60%, and in the third-level area it stops directly.
Fig. 6 shows the flow of the obstacle avoidance decision module processing the obstacle information; the specific steps are as follows:
Step 5-1: receive the obstacle coordinate information output by the common obstacle detection module and the special obstacle detection module.
Step 5-2: if an obstacle appears in the third-level obstacle avoidance area, the obstacle information output by both the common and the special obstacle detection modules is processed; if obstacles appear in the first-level or second-level areas, the obstacle information output by the special obstacle detection module is processed preferentially.
Step 5-3: apply median filtering and amplitude-limiting filtering to the obstacle distance information.
Step 5-4: judge which of the three obstacle avoidance areas the obstacle lies in from the finally processed distance information, select the corresponding obstacle avoidance strategy for that area, and output the strategy to the movement control module, which controls the transfer robot to execute it, finally realizing the obstacle avoidance function.
The above description is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make several modifications and variations without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as falling within the protection scope of the present invention.
Claims (10)
1. An RGB-D based real-time obstacle avoidance system, the system comprising:
the system comprises an image acquisition module, a calibration module, a common obstacle detection module, a special obstacle detection module, a verification module, an obstacle avoidance decision module and a movement control module.
The image acquisition module is used for acquiring original depth information of the RGB-D camera and RGB information of the barrier and respectively outputting the depth information and the RGB information to the calibration module, the common barrier detection module and the special barrier detection module;
the calibration module is used for acquiring the depth information acquired by the image acquisition module to calibrate the ground background and calibrate the camera installation angle so as to acquire the ground depth information and the camera installation angle information;
the common obstacle detection module is used for processing the depth information and the RGB information output by the image acquisition module, extracting obstacles and performing secondary judgment through the verification module to ensure that the obstacles are true obstacles and not false detections, and finally accurately detecting the obstacles and outputting position information of the obstacles;
the special obstacle detection module is used for detecting travelers and other robots according to the depth information and the RGB information output by the image acquisition module and outputting obstacle position information;
the obstacle avoidance decision module is used for processing obstacle position information output by the common obstacle detection module and the special obstacle detection module to generate an obstacle avoidance strategy;
the mobile control module is used for controlling the movement of the robot according to the obstacle avoidance strategy output by the obstacle avoidance decision module to realize the obstacle avoidance function.
2. The RGB-D based real-time obstacle avoidance system according to claim 1, wherein: the RGB-D camera is installed at the top of the mobile transfer robot, and the camera's inclination angle ensures that no part of the robot body appears within the camera's field of view.
3. The RGB-D based real-time obstacle avoidance system according to claim 1, wherein: the real-time obstacle avoidance system adopts multithreading, the modules compute in parallel, and the detection frame rate is adjusted automatically according to the running speed of the mobile transfer robot.
4. The RGB-D based real-time obstacle avoidance system according to claim 1, wherein the calibration module acquires the ground depth information and the camera installation angle information through the following steps:
Step 1-1: the RGB-D camera faces a flat ground, and the camera depth information is acquired through the image acquisition module.
Step 1-2: start calibrating the camera installation angle: randomly select four pixel values in the middle of the depth map, and solve by coordinate conversion the Y values in the depth camera coordinate system corresponding to these pixel points.
Step 1-3: candidate camera mounting angles are tried starting from 0 degrees and incrementing by 1 degree each time, ending at 180 degrees.
Step 1-4: calculate the Y values of the four pixel points in the horizontal depth camera coordinate system using the candidate installation angle. If the absolute differences between the Y values of the four pixel points are within the threshold range, save the angle; otherwise return to step 1-3. The horizontal depth camera coordinate system is obtained by rotating the depth camera coordinate system around its origin until the Z axis is parallel to the ground.
Step 1-5: after the angle calibration is finished, start calibrating the ground background information.
Step 1-6: obtain a depth map and limit depth values exceeding 7000 by threshold filtering; by the hole filling method, pixels whose depth value is zero are filled with the upper depth limit.
Step 1-7: accumulate 600 processed depth frames and compute the mean depth value of each pixel point; for each pixel point store the corresponding Y value in the horizontal depth camera coordinate system, and for each row of pixels the maximum difference between its pixel points.
5. The RGB-D based real-time obstacle avoidance system according to claim 4, wherein the Y values of the four pixel points in the horizontal depth camera coordinate system in steps 1-2 and 1-4 are calculated as follows:

$$
\begin{bmatrix} X_h \\ Y_h \\ Z_h \end{bmatrix}
= R_x(\theta)\,K_d^{-1}\,Z_d\begin{bmatrix} u_d \\ v_d \\ 1 \end{bmatrix},
\qquad
K_d=\begin{bmatrix} f_x & 0 & C_x \\ 0 & f_y & C_y \\ 0 & 0 & 1 \end{bmatrix},
\qquad
R_x(\theta)=\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix}
$$

where $K_d$ is the depth camera intrinsic matrix that converts the camera coordinate system to the pixel coordinate system; $(f_x, f_y)$ represent the relationship between the camera coordinate system and the imaging plane, i.e. the scaling along the $u$ and $v$ axes; $(C_x, C_y)$ is the camera optical center; $\theta$ is the mounting tilt angle of the camera (right-hand coordinate system) and $R_x(\theta)$ the corresponding rotation about the camera X axis; $[X_h, Y_h, Z_h]^T$ are the three-dimensional coordinates of depth-image pixel $(u_d, v_d)$ in the horizontal depth camera coordinate system; and $Z_d$ is the depth value at pixel $(u_d, v_d)$.
6. The RGB-D based real-time obstacle avoidance system according to claim 1, wherein: the common obstacle detection module comprises a preprocessing module, a contour extraction module and an obstacle coordinate output module;
the preprocessing module comprises the following steps:
Step 2-1: three consecutive depth maps are acquired from the image acquisition module.
Step 2-2: apply twofold downsampling and morphological dilation to the three acquired depth maps.
Step 2-3: within the set ROI area, difference each of the three depth maps against the calibrated ground background information; set points whose difference meets the threshold to 255 and points that do not to 0.
Step 2-4: take the binarized second depth map as binary map P1, and superpose the three binarized depth maps to obtain binary map P2.
Step 2-5: apply a morphological closing operation to the obtained binary maps P1 and P2.
The contour extraction module comprises the following steps:
Step 3-1: calculate the obstacle contours and convex hull information in binary maps P1 and P2 respectively.
Step 3-2: calculate the area and the center-pixel position of each convex hull in binary maps P1 and P2; different area thresholds are set according to the interval in which the convex hull's center pixel lies, and convex hulls below the threshold are filtered out.
Step 3-3: calculate the Y value of the convex hull's center pixel in the horizontal depth camera coordinate system and compare it with the Y value stored for the same pixel point in the ground information; keep the convex hull if its Y is smaller than the ground Y by at least a certain threshold, otherwise filter it out.
The obstacle coordinate output module includes the following steps:
Step 4-1: calculate the minimum bounding rectangle of each convex hull remaining after the filtering of step 3-2.
Step 4-2: remove overlapping rectangular frames and merge connected rectangular frames.
Step 4-3: when several rectangular frames remain, keep only the rectangular frame with the minimum mean depth.
Step 4-4: map the rectangular frames on the depth map into the RGB map.
The mapping formula is as follows:

$$
Z_d\begin{bmatrix} u_d \\ v_d \\ 1 \end{bmatrix}=K_d\begin{bmatrix} X_d \\ Y_d \\ Z_d \end{bmatrix},
\qquad
\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}=R\begin{bmatrix} X_d \\ Y_d \\ Z_d \end{bmatrix}+T,
\qquad
Z_c\begin{bmatrix} u_c \\ v_c \\ 1 \end{bmatrix}=K_c\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}
$$

where $Z_d$ is the depth value at pixel $(u_d, v_d)$; $[X_d, Y_d, Z_d]^T$ are the three-dimensional coordinates of depth-map pixel $(u_d, v_d)$ in the depth camera coordinate system; $[X_c, Y_c, Z_c]^T$ are the corresponding three-dimensional coordinates in the RGB camera coordinate system; $(u_c, v_c)$ are the resulting pixel coordinates in the RGB image; $R$ and $T$ are the rotation and translation matrices from the depth camera to the RGB camera; $K_d$ is the intrinsic matrix of the depth camera and $K_c$ that of the RGB camera.

Step 4-5: calculate the area of each rectangular frame, and send the RGB regions of the rectangular frames whose area meets the threshold condition to the verification module to verify whether they contain ground or an obstacle.
Step 4-6: if the verification module outputs 0, the object in the rectangular frame is judged to be a non-obstacle and processing returns to step 4-1; an output of 1 indicates that the object in the rectangular frame is an obstacle, and processing continues with step 4-7.
Step 4-7: calculate, for each pixel point in the obstacle's rectangular frame, its value on the X coordinate axis of the horizontal depth camera coordinate system; remove pixel points whose X values lie outside the threshold range; among the remaining pixel points find the nearest distance of the obstacle and output it to the obstacle avoidance decision module.
7. The RGB-D based real-time obstacle avoidance system according to claim 1, wherein the processing procedure of the verification module comprises:
Step 5-1: load the trained two-class Ghost classification model, build an inference engine with TensorRT, save the engine, and load and deploy the engine with C++.
Step 5-2: when image data is sent in, the engine performs inference: if the data is judged to be a non-obstacle the output is 0, otherwise it is judged to be an obstacle and the output is 1.
8. The RGB-D based real-time obstacle avoidance system according to claim 1, wherein: the special obstacle detection module for detecting obstacles comprises the following steps:
step 6-1: loading a trained yolov5 model (the model identification categories are pedestrians and mobile transfer robots), constructing an inference engine through TensorRT inference, and storing the engine.
Step 6-2: and the loading and deployment engine receives the RGB image of the image acquisition module and carries out reasoning.
Step 6-3: and acquiring pixel coordinates and length and width of the upper left corner of the target rectangular frame in the RGB image obtained after reasoning.
Step 6-4: and converting the RGB pixels into depth image pixels, and acquiring three-dimensional coordinates of pixel values in the rectangular frame in a depth camera coordinate system.
Step 6-5: and removing pixel points of which the numerical values on the X axis do not meet the threshold value under the horizontal depth camera coordinate system, and obtaining the three-dimensional coordinates of the obstacle points under the depth camera coordinate system through filtering waves in the remaining pixel points and outputting the three-dimensional coordinates to the obstacle avoidance decision module.
9. The RGB-D based real-time obstacle avoidance system according to claim 1, wherein: the obstacle avoidance decision module is provided with different obstacle avoidance ranges for the common obstacle and the special obstacle detection module respectively, and three obstacle avoidance grade areas are divided in the obstacle avoidance ranges. The first-stage obstacle avoidance is decelerated by 30 percent, the second-stage obstacle avoidance is decelerated by 60 percent, and the third-stage obstacle avoidance is stopped directly.
10. The RGB-D based real-time obstacle avoidance system of claim 9, wherein the obstacle avoidance decision module processes the obstacle information to make an obstacle avoidance decision as follows:
Step 7-1: receive the obstacle coordinate information output by the common obstacle detection module and the special obstacle detection module.
Step 7-2: if an obstacle appears in the third-level obstacle avoidance area, the obstacle information output by both the common and the special obstacle detection modules is processed; if an obstacle appears in the first-level or second-level area, the obstacle information output by the special obstacle detection module is processed preferentially.
Step 7-3: apply median filtering and amplitude-limiting filtering to the obstacle distance information.
Step 7-4: judge which of the three obstacle avoidance areas the obstacle lies in from the finally processed distance information, select the corresponding obstacle avoidance strategy for that area, and output the strategy to the movement control module, which controls the mobile transfer robot to execute it, finally realizing the obstacle avoidance function.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110542757.3A (granted as CN113408353B) | 2021-05-18 | 2021-05-18 | Real-time obstacle avoidance system based on RGB-D |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110542757.3A (granted as CN113408353B) | 2021-05-18 | 2021-05-18 | Real-time obstacle avoidance system based on RGB-D |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN113408353A | 2021-09-17 |
| CN113408353B | 2023-04-07 |
Family
ID=77678779

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202110542757.3A (CN113408353B, active) | Real-time obstacle avoidance system based on RGB-D | 2021-05-18 | 2021-05-18 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN113408353B (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103335635A (en) * | 2013-07-17 | 2013-10-02 | 中测新图(北京)遥感技术有限责任公司 | Method for adjusting tilt angles of auxiliary cameras of aerial camera |
CN104899869A (en) * | 2015-05-14 | 2015-09-09 | 浙江大学 | Plane and barrier detection method based on RGB-D camera and attitude sensor |
CN105843223A (en) * | 2016-03-23 | 2016-08-10 | 东南大学 | Mobile robot three-dimensional mapping and obstacle avoidance method based on space bag of words model |
US20180133895A1 (en) * | 2016-11-17 | 2018-05-17 | Samsung Electronics Co., Ltd. | Mobile robot system, mobile robot, and method of controlling the mobile robot system |
CN107767456A (en) * | 2017-09-22 | 2018-03-06 | 福州大学 | A kind of object dimensional method for reconstructing based on RGB D cameras |
US20210103299A1 (en) * | 2017-12-29 | 2021-04-08 | SZ DJI Technology Co., Ltd. | Obstacle avoidance method and device and movable platform |
CN110442120A (en) * | 2018-05-02 | 2019-11-12 | 深圳市优必选科技有限公司 | Method for controlling robot to move in different scenes, robot and terminal equipment |
CN110766761A (en) * | 2019-10-21 | 2020-02-07 | 北京百度网讯科技有限公司 | Method, device, equipment and storage medium for camera calibration |
Non-Patent Citations (4)
Title |
---|
MINJIE HUA, YIBING NAN, SHIGUO LIAN: "Small Obstacle Avoidance Based on RGB-D Semantic Segmentation", Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) |
ZHANG_CHEN_: "[Image processing] Pixel coordinate system, image plane coordinate system, camera coordinate system, world coordinate system, intrinsic matrix, extrinsic matrix", CSDN blog, HTTPS://BLOG.CSDN.NET/ZHANG_CHEN_/ARTICLE/DETAILS/103724048?SPM=1001.2101.3001.6661.1&UTM_MEDIUM=DISTRIBUTE.PC_RELEVANT_T0.NONE-TASK-BLOG-2~DEFAULT~CTRLIST~DEFAULT-1-103724048-BLOG-119849286 |
王增喜; 张庆余; 贾通; 张苏林: "Research on SLAM navigation algorithm based on RGB-D camera" (基于RGB-D摄像头的SLAM导航算法研究), Automation & Instrumentation (自动化与仪表) |
王超: "Research on mobile robot navigation based on RGB-D camera" (基于RGB-D相机的移动机器人导航研究), China Masters' Theses Full-text Database, Information Science and Technology (中国优秀硕士学位论文全文数据库信息科技辑) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115154074A (en) * | 2022-05-30 | 2022-10-11 | 上海炬佑智能科技有限公司 | Intelligent wheelchair with obstacle avoidance function |
Also Published As
Publication number | Publication date |
---|---|
CN113408353B (en) | 2023-04-07 |
Similar Documents

| Publication | Title |
|---|---|
| CN109948661B | 3D vehicle detection method based on multi-sensor fusion |
| CN113111887B | Semantic segmentation method and system based on information fusion of camera and laser radar |
| CN109283538B | Marine target size detection method based on vision and laser sensor data fusion |
| CN106951879B | Multi-feature fusion vehicle detection method based on camera and millimeter wave radar |
| Yan et al. | A method of lane edge detection based on Canny algorithm |
| CN105460009B | Automobile control method and device |
| CN108960183A | A kind of bend target identification system and method based on Multi-sensor Fusion |
| CN108303096B | Vision-assisted laser positioning system and method |
| CN107578012B | Driving assistance system for selecting sensitive area based on clustering algorithm |
| Youjin et al. | A robust lane detection method based on vanishing point estimation |
| CN112287860A | Training method and device of object recognition model, and object recognition method and system |
| CN104915642B | Front vehicles distance measuring method and device |
| TWI673190B | Vehicle detection method based on optical radar |
| CN115372990A | High-precision semantic map building method and device and unmanned vehicle |
| TWI745204B | High-efficiency LiDAR object detection method based on deep learning |
| Huang et al. | Robust lane marking detection under different road conditions |
| CN110733039A | Automatic robot driving method based on VFH+ and vision auxiliary decision |
| CN112990049A | AEB emergency braking method and device for automatic driving of vehicle |
| CN113688738A | Target identification system and method based on laser radar point cloud data |
| CN113408353B | Real-time obstacle avoidance system based on RGB-D |
| CN115457358A | Image and point cloud fusion processing method and device and unmanned vehicle |
| Zhang et al. | Vessel detection and classification fusing radar and vision data |
| US20230009925A1 | Object detection method and object detection device |
| CN114445793A | Intelligent driving auxiliary system based on artificial intelligence and computer vision |
| CN113139986A | Integrated environment perception and multi-target tracking system |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |