CN113408353B - Real-time obstacle avoidance system based on RGB-D
- Publication number: CN113408353B (application CN202110542757.3A)
- Filed: 2021-05-18; published as CN113408353A: 2021-09-17; granted as CN113408353B: 2023-04-07
- Authority: CN (China)
- Legal status: Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The invention discloses a real-time obstacle avoidance system based on RGB-D. In the system, an image acquisition module acquires the original depth information and RGB information of the camera equipment. A calibration module calibrates the ground background and the camera installation angle so as to acquire ground information and camera installation angle information. A common obstacle detection module processes the information output by the image acquisition module to extract obstacles and outputs their position information. A verification module makes a secondary judgment on whether a detected obstacle is a false detection. A special obstacle detection module detects pedestrians and other intelligent mobile robots and outputs their specific position information. An obstacle avoidance decision module processes the information output by the common and special obstacle detection modules to formulate a corresponding obstacle avoidance strategy, and a movement control module executes the strategy output by the obstacle avoidance decision module to control the movement of the robot and realize the obstacle avoidance function. Through the cooperation of these modules, a reliable, fast, high-precision and low-false-detection real-time obstacle avoidance system is realized.
Description
Technical Field
The invention relates to the technical field of image recognition and mobile robots, in particular to a real-time obstacle avoidance system based on RGB-D.
Background
At present, intelligent unmanned transfer robots are developing rapidly, and mobile robot technology is closely tied to industrial production. When an intelligent transfer robot operates in a real factory environment, which is complex and changeable, accurately detecting obstacles and effectively avoiding them is one of the basic capabilities a mobile robot must have. Critical obstacles such as people require particular attention in order to avoid accidents. In the prior art, obstacle avoidance usually depends on sensors such as laser radar and ultrasonic radar. However, laser radar is expensive, and ultrasonic radar can only detect obstacles within a very limited range of sizes.
Compared with sensors such as laser radar and ultrasonic radar, a vision camera is inexpensive, can obtain RGB information and depth information over the whole view plane in real time, and offers advantages such as a wide detection range and a large information capacity, so it is widely used in visual obstacle avoidance technology. At present, mobile robots commonly adopt binocular cameras, RGB-D cameras and TOF cameras for visual obstacle avoidance. Compared with a binocular camera, RGB-D and TOF cameras are only slightly affected by object colors and can obtain a high-resolution depth map, so the RGB-D camera is widely applied to visual obstacle detection. However, visual image processing is known to demand fast computing power, and current obstacle avoidance methods for mobile robots that employ an RGB-D camera suffer from insufficient processing speed and neglect spatial obstacles; meanwhile, visual obstacle detection is easily affected by the external environment, has a high false detection rate, and cannot apply different obstacle avoidance strategies to different obstacles. Therefore, beyond the existing visual obstacle avoidance methods, a real-time obstacle avoidance system based on an RGB-D camera needs to be provided.
Disclosure of Invention
The invention provides a real-time obstacle avoidance system based on RGB-D, aimed at the problems that existing mobile robots have a high false detection rate, are easily influenced by the external environment, ignore spatial obstacles, and lack real-time performance. The system can make different obstacle avoidance decisions for obstacles with different priorities, can set different ROI (region of interest) areas according to different terrains to reduce the amount of calculation, and performs a secondary judgment on top of obstacle detection, which improves the detection precision of the camera and reduces the false detection rate of obstacle detection; the whole system achieves good real-time performance through multi-threaded parallel processing.
The purpose of the invention is realized by the following technical scheme: the invention provides a real-time obstacle avoidance system based on RGB-D, which comprises:
the system comprises an image acquisition module, a calibration module, a common obstacle detection module, a special obstacle detection module, a verification module, an obstacle avoidance decision module and a movement control module.
The image acquisition module is used for acquiring the original depth information and RGB information from the RGB-D camera and outputting the depth information and RGB information to the calibration module, the common obstacle detection module and the special obstacle detection module respectively;
the calibration module is used for acquiring the depth information acquired by the image acquisition module to calibrate the ground background and calibrate the camera installation angle so as to acquire the ground depth information and the camera installation angle information;
the common obstacle detection module is used for processing the depth information and the RGB information output by the image acquisition module, extracting obstacles and performing secondary judgment through the verification module to ensure that the obstacles are true obstacles and not false detections, and finally accurately detecting the obstacles and outputting position information of the obstacles;
the special obstacle detection module is used for detecting travelers and other robots according to the depth information and the RGB information output by the image acquisition module and outputting obstacle position information;
the obstacle avoidance decision module is used for processing obstacle position information output by the common obstacle detection module and the special obstacle detection module to generate an obstacle avoidance strategy;
the mobile control module is used for controlling the movement of the robot according to the obstacle avoidance strategy output by the obstacle avoidance decision module to realize the obstacle avoidance function.
Further, the RGB-D camera is installed at the top of the mobile transfer robot, and its inclination angle ensures that no part of the robot body is visible within the camera's field of view.
Furthermore, the real-time obstacle avoidance system adopts multi-threading: the modules compute in parallel, and the detection frame rate is adjusted automatically according to the travel speed of the mobile transfer robot.
Further, the calibration module acquires ground depth information and camera installation angle information, and comprises the following steps:
step 1-1: the RGB-D camera faces a flat ground, and the camera depth information is acquired through the image acquisition module.
Step 1-2: and starting to calibrate the installation angle of the camera, randomly selecting four pixel values in the middle of the depth map, and converting and solving the Y value under the depth camera coordinate system corresponding to the pixel points.
Step 1-3: the camera mounting angle is selected starting at 0 and incrementing 1 degree each time until 180 degrees ends.
Step 1-4: and calculating Y values of the four pixel points under a horizontal depth camera coordinate system through the camera installation angle. If the absolute value of the difference value between the Y of the four pixel points is within the threshold range, the angle is saved, otherwise, the step 1-2 is returned, wherein the horizontal depth camera coordinate system is obtained by rotating the depth camera coordinate system around the origin of the coordinate system until the Z axis is parallel to the ground.
Step 1-5: and starting to calibrate the ground background information after the calibration angle is finished.
Step 1-6: and obtaining a depth map, limiting the depth value with the numerical value exceeding 7000 through a threshold filtering and hole filling method, and setting the depth value with the filling numerical value being zero as the upper limit of the depth numerical value.
Step 1-7: and counting 600 frames of processed depth maps, calculating the mean value of the depth value of each pixel point, and storing the Y value of each pixel point under the coordinate system of the horizontal depth camera and the maximum difference value of the pixel points in each row of pixels respectively.
Further, the Y values of the four pixel points in the horizontal depth camera coordinate system in step 1-2 and step 1-4 are calculated as follows:

$$\begin{bmatrix} X_h \\ Y_h \\ Z_h \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix} K_d^{-1} \begin{bmatrix} u_d \\ v_d \\ 1 \end{bmatrix} Z_d, \qquad K_d = \begin{bmatrix} f_x & 0 & C_x \\ 0 & f_y & C_y \\ 0 & 0 & 1 \end{bmatrix}$$

where $K_d$ is the depth camera intrinsic matrix that converts the camera coordinate system to the pixel coordinate system; $(f_x, f_y)$ express the relationship between the camera coordinate system and the imaging plane, i.e. the scaling along the u-axis and the v-axis; $(C_x, C_y)$ is the optical center of the camera; $\theta$ is the installation tilt angle of the camera (right-hand coordinate system); $(X_h, Y_h, Z_h)$ are the three-dimensional coordinates of depth-image pixel $(u_d, v_d)$ in the horizontal depth camera coordinate system; and $Z_d$ is the depth value at pixel $(u_d, v_d)$.
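Read as a computation, the formula back-projects each pixel through the intrinsics and rotates the result by the installation angle. The C++ sketch below illustrates this together with the angle sweep of steps 1-3 and 1-4; it is a minimal illustration assuming the rotation is about the camera's X-axis with the sign convention above, and the function names and tolerance are ours rather than the patent's.

```cpp
#include <cmath>

// Y value of depth pixel (u_d, v_d) with depth Z_d in the horizontal depth
// camera frame: back-project through the intrinsics, then rotate about the
// X-axis by the installation angle theta (radians).
double pixelToHorizontalY(double ud, double vd, double Zd,
                          double fy, double cy, double theta) {
    double Yc = (vd - cy) * Zd / fy;              // Y in the tilted camera frame
    return Yc * std::cos(theta) - Zd * std::sin(theta);
}

// Angle sweep (steps 1-3 and 1-4): try 0..180 degrees in 1-degree steps and
// keep the angle at which the four sampled ground pixels agree in Y within tol.
struct Sample { double u, v, z; };                // pixel and its depth value

double calibrateAngle(const Sample s[4], double fy, double cy, double tol) {
    const double kPi = 3.14159265358979323846;
    for (int deg = 0; deg <= 180; ++deg) {
        double th = deg * kPi / 180.0;
        double y0 = pixelToHorizontalY(s[0].u, s[0].v, s[0].z, fy, cy, th);
        bool consistent = true;
        for (int i = 1; i < 4 && consistent; ++i)
            consistent = std::fabs(pixelToHorizontalY(s[i].u, s[i].v, s[i].z,
                                                      fy, cy, th) - y0) <= tol;
        if (consistent) return th;                // all four Y values agree
    }
    return -1.0;                                  // no angle found: recalibrate
}
```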
Further, the common obstacle detection module comprises a preprocessing module, a contour extraction module and an obstacle coordinate output module;
the preprocessing module comprises the following steps:
step 2-1: three consecutive depth maps are acquired from the image acquisition module.
Step 2-2: and performing double down sampling and morphological dilation processing on the three acquired depth maps.
Step 2-3: and the three depth maps are respectively differed with the calibrated ground background information in the set ROI area, the numerical value of the point meeting the threshold value of the difference value is set to be 255, and the numerical value of the point not meeting the threshold value is set to be 0.
Step 2-4: and extracting a second depth map named as a binary map P1, and superposing the three depth maps to obtain a binary map P2.
Step 2-5: and performing morphological closing operation processing on the obtained binary image P1 and the binary image P2.
The contour extraction module comprises the following steps:
step 3-1: and respectively calculating the outline of the obstacle and convex hull information in the binary image P1 and the binary image P2.
Step 3-2: and respectively calculating the areas of the convex hulls in the binary image P1 and the binary image P2 and the pixel positions of the centers of the convex hulls, and setting the convex hulls below different area threshold filtering thresholds according to different intervals of the pixels at the centers of the convex hulls.
Step 3-3: and calculating the Y value of the central pixel of the convex hull in the horizontal depth camera coordinate system and the Y value of the same pixel point in the stored ground information, and reserving the convex hull which is smaller than the ground Y and reaches a certain threshold value, otherwise, filtering. (ii) a
The obstacle coordinate output module includes the steps of:
step 4-1: and (3) calculating the minimum circumscribed frame of the residual convex hull after filtering in the step (3-2).
And 4-2, removing the overlapped rectangular frames and combining the connected rectangular frames.
And 4-3, when a plurality of rectangular frames exist, only keeping the rectangular frame with the minimum depth average value in the rectangular frame.
And 4-4, mapping the rectangular box on the depth map into the RGB map.
The mapping formula is as follows:

$$\begin{bmatrix} X_d \\ Y_d \\ Z_d \end{bmatrix} = K_d^{-1} \begin{bmatrix} u_d \\ v_d \\ 1 \end{bmatrix} Z_d, \qquad \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_d \\ Y_d \\ Z_d \end{bmatrix} + T, \qquad Z_c \begin{bmatrix} u_c \\ v_c \\ 1 \end{bmatrix} = K_c \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}$$

where $Z_d$ is the depth value at pixel $(u_d, v_d)$; $(X_d, Y_d, Z_d)$ are the three-dimensional coordinates of depth-map pixel $(u_d, v_d)$ in the depth camera coordinate system; $(X_c, Y_c, Z_c)$ are the three-dimensional coordinates of the pixel point in the RGB camera coordinate system; $(u_c, v_c)$ are its pixel coordinates on the RGB image; $R$ and $T$ are respectively the rotation matrix and translation matrix from the depth camera to the RGB camera; $K_d$ is the depth camera intrinsic matrix and $K_c$ is the intrinsic matrix of the RGB camera. A code sketch of this mapping follows the step list below.
Step 4-5: compute the area of each rectangular frame, and send the RGB patches of frames whose area satisfies the threshold condition to the verification module to verify whether they contain ground or an obstacle.
Step 4-6: if the verification module outputs 0, the object in the rectangular frame is judged a non-obstacle and the process returns to step 4-1; if it outputs 1, the object in the rectangular frame is an obstacle and the next step 4-7 is executed.
Step 4-7: compute, for each pixel point inside the obstacle's rectangular frame, its value on the X coordinate axis in the horizontal depth camera coordinate system; remove pixel points whose X values fall outside the threshold range, find the nearest obstacle distance among the remaining pixel points, and output it to the obstacle avoidance decision module.
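A minimal sketch of the depth-to-RGB mapping above, using OpenCV's small fixed-size matrices; a rectangle is mapped by transforming its corner pixels. The function name and the use of cv::Matx are our choices, not the patent's.

```cpp
#include <opencv2/opencv.hpp>

// Project depth pixel (u_d, v_d) with depth Z_d into the RGB image via the
// extrinsics (R, T) between the two cameras. Kd and Kc are the 3x3 intrinsic
// matrices of the depth and RGB cameras; T is in the same units as Z_d.
cv::Point2d depthPixelToRgb(double ud, double vd, double Zd,
                            const cv::Matx33d& Kd, const cv::Matx33d& Kc,
                            const cv::Matx33d& R, const cv::Vec3d& T) {
    cv::Vec3d pd = Kd.inv() * cv::Vec3d(ud, vd, 1.0) * Zd;  // depth camera frame
    cv::Vec3d pc = R * pd + T;                              // RGB camera frame
    cv::Vec3d uvw = Kc * pc;                                // projective coords
    return { uvw[0] / uvw[2], uvw[1] / uvw[2] };            // (u_c, v_c)
}
```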
Further, the processing procedure of the verification module comprises:
step 5-1: and loading the trained second gshost classification model, constructing an inference engine by using TensorRT, storing the engine, and loading and deploying the engine by using c + +.
Step 5-2: when image data is sent in, the engine is used for reasoning and judging whether the image data is a non-obstacle output 0 or not, otherwise, the image data is judged to be an obstacle output 1.
Further, the special obstacle detection module detects an obstacle, including the steps of:
Step 6-1: load the trained yolov5 model (the model's identification categories are pedestrian and mobile transfer robot), construct an inference engine with TensorRT, and save the engine.
Step 6-2: load and deploy the engine, and receive the RGB images from the image acquisition module for inference.
Step 6-3: obtain the pixel coordinates of the upper-left corner, and the length and width, of each target rectangular frame in the RGB image produced by inference.
Step 6-4: convert the RGB pixels into depth-image pixels, and acquire the three-dimensional coordinates, in the depth camera coordinate system, of the pixels inside the rectangular frame.
Step 6-5: remove pixel points whose X-axis values in the horizontal depth camera coordinate system do not satisfy the threshold, obtain by filtering among the remaining pixel points the three-dimensional coordinates of the obstacle point in the depth camera coordinate system, and output them to the obstacle avoidance decision module.
Furthermore, the obstacle avoidance decision module sets different obstacle avoidance ranges for the common and special obstacle detection modules respectively, and divides each range into three obstacle avoidance grade areas: level-1 obstacle avoidance decelerates by 30%, level-2 decelerates by 60%, and level-3 stops immediately.
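The grading reduces to a distance-to-speed-factor mapping, as in this small sketch; the band boundaries are configuration values the patent does not specify, so the structure and numbers here are illustrative.

```cpp
// Three-level avoidance decision: map the obstacle's nearest distance to a
// speed factor. Boundaries satisfy d1 > d2 > d3 and are kept separately for
// common and special obstacles.
struct AvoidanceBands { double d1, d2, d3; };  // level 1/2/3 boundaries (metres)

double speedFactor(double distance, const AvoidanceBands& b) {
    if (distance <= b.d3) return 0.0;   // level 3: stop immediately
    if (distance <= b.d2) return 0.4;   // level 2: decelerate by 60%
    if (distance <= b.d1) return 0.7;   // level 1: decelerate by 30%
    return 1.0;                         // outside the avoidance range
}
```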
Further, the obstacle avoidance decision module processes the obstacle avoidance information to make an obstacle avoidance decision, and the specific process is as follows:
Step 7-1: receive the obstacle coordinate information output by the common obstacle detection module and the special obstacle detection module.
Step 7-2: if the obstacle appears in the level-3 obstacle avoidance area, process the obstacle information output by both the common and special obstacle detection modules; if the obstacle appears in the level-1 or level-2 area, preferentially process the obstacle information output by the special obstacle detection module.
Step 7-3: apply median filtering and amplitude-limiting filtering to the obstacle distance information, as sketched below.
Step 7-4: judge, from the finally processed obstacle distance information, which of the three obstacle avoidance grade areas the obstacle is in, select the corresponding obstacle avoidance strategy according to that area, and output the strategy to the movement control module, which drives the mobile transfer robot to execute it and finally realize the obstacle avoidance function.
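Step 7-3 can be realized with a short sliding-window filter such as the following sketch; the window size and the maximum accepted jump are tuning parameters we have assumed, not values from the patent.

```cpp
#include <algorithm>
#include <cmath>
#include <deque>
#include <vector>

// Median filter over a sliding window, followed by amplitude limiting that
// rejects jumps larger than maxStep between consecutive accepted distances.
class DistanceFilter {
public:
    DistanceFilter(std::size_t window, double maxStep)
        : window_(window), maxStep_(maxStep) {}

    double update(double distance) {
        buf_.push_back(distance);
        if (buf_.size() > window_) buf_.pop_front();
        std::vector<double> s(buf_.begin(), buf_.end());
        std::nth_element(s.begin(), s.begin() + s.size() / 2, s.end());
        double median = s[s.size() / 2];
        if (ready_ && std::fabs(median - last_) > maxStep_)
            return last_;                 // amplitude limiting: hold last value
        last_ = median;
        ready_ = true;
        return last_;
    }

private:
    std::deque<double> buf_;
    std::size_t window_;
    double maxStep_;
    double last_ = 0.0;
    bool ready_ = false;
};
```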
The invention has the following beneficial effects: it provides a reliable, fast, high-precision and low-false-detection real-time obstacle avoidance system. The system can make different obstacle avoidance decisions for obstacles with different priorities, can set different ROI areas according to different terrains to reduce the amount of calculation, and performs a secondary judgment on top of obstacle detection, which improves the detection precision of the camera and reduces the false detection rate of obstacle detection; good real-time performance is achieved through multi-threaded parallel processing.
Drawings
Fig. 1 is a general flow chart of the implementation of the present invention.
Fig. 2 is a schematic view of an intelligent mobile handling machine and camera mounting and camera coordinate system.
Fig. 3 is a flow chart of the preprocessing of the common obstacle detection module.
Fig. 4 is a flow chart of the obstacle distance output of the common obstacle detection module.
Fig. 5 is a flow chart of the special obstacle detection module.
Fig. 6 is a flow chart of an obstacle avoidance decision module.
Detailed Description
The technical solutions of the present invention are further described in detail below with reference to the accompanying drawings, but the scope of the present invention is not limited to the following descriptions.
The invention provides a real-time obstacle avoidance system based on RGB-D. As shown in figure 1, the system comprises an image acquisition module, a calibration module, a common obstacle detection module, a verification module, a special obstacle detection module, an obstacle avoidance decision module and a movement control module.
As shown in FIG. 2, in actual engineering the RGB-D camera sensor is mounted on the top of the intelligent transfer robot with its mounting angle inclined downwards, such that no part of the vehicle body is visible within the camera's field of view.
The calibration module is used for acquiring the depth information obtained by the image acquisition module, calibrating the ground background and calibrating the camera installation angle so as to acquire the ground depth information and the camera installation angle information;
the common barrier detection module is used for processing the depth information and RGB information output by the image acquisition module, extracting a barrier through the common barrier detection module, performing secondary judgment through the verification module, and finally accurately detecting the barrier and outputting position information of the barrier;
the verification module is used for carrying out secondary judgment on the obstacles detected by the common obstacle detection module to ensure that the obstacles are true obstacles and not false detections;
the special obstacle detection module is used for pedestrians and other carrying robots to output specific position information;
the obstacle avoidance decision module is used for processing obstacle position information output by the common obstacle detection module and the special obstacle detection module and indicating the mobile carrying robot to execute obstacle avoidance operation so as to realize an obstacle avoidance function;
the mobile control module is used for processing the obstacle avoidance strategy output by the obstacle avoidance decision module to control the movement of the robot to realize an obstacle avoidance function;
the system firstly operates a calibration module to calibrate ground background information and an installation angle and stores data in an industrial personal computer of the intelligent transfer robot, meanwhile, a light filter is attached to a camera to filter visible light in actual engineering application, and light rays emitted by no light source in an operation field can be guaranteed to be directly emitted to the camera.
The calibration module for acquiring the ground depth information and the camera installation angle information comprises the following steps:
step 1-1: the camera faces a flat ground, and the camera depth information is acquired through the image acquisition module.
Step 1-2: and starting to calibrate the installation angle of the camera, randomly selecting four pixel values in the middle of the depth map, and converting and solving the Y value of the space coordinate under the depth camera coordinate system corresponding to the pixel point.
Step 1-3: the camera mounting angle is selected starting at 0 and incrementing 1 degree each time until 180 degrees ends.
Step 1-4: and calculating the Y values of the four pixel points in the coordinate system of the horizontal depth camera through the angle.
Step 1-5: and if the absolute value of the difference value between the Y of the four pixel points is within the threshold range, storing the angle, otherwise, returning to the step 1-3, wherein the horizontal depth camera coordinate system is obtained by rotating the depth camera coordinate system around the origin of the coordinate system until the Z axis is parallel to the ground.
Step 1-6: and after the calibration angle is finished, the ground background information is calibrated.
Step 1-7: obtaining a depth map, limiting the depth value with the numerical value exceeding 7000 through threshold filtering and a hole filling method, wherein the depth value with the filling numerical value of zero is the upper limit of the depth value, and the depth value exceeding 7000 can be changed into 7000 in practical implementation.
Step 1-8: and counting 600 frames of processed depth maps, calculating the mean value of the depth values of each pixel point, wherein each pixel point corresponds to the Y value under the coordinate system of the horizontal depth camera and the maximum difference value of the pixel points in each row of pixels and respectively storing the Y value and the maximum difference value.
Specifically, the Y values of the pixel points in step 1-4 in the horizontal depth camera coordinate system are calculated as:

$$\begin{bmatrix} X_h \\ Y_h \\ Z_h \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix} K_d^{-1} \begin{bmatrix} u_d \\ v_d \\ 1 \end{bmatrix} Z_d, \qquad K_d = \begin{bmatrix} f_x & 0 & C_x \\ 0 & f_y & C_y \\ 0 & 0 & 1 \end{bmatrix}$$

where $K_d$ is the depth camera intrinsic matrix that converts the camera coordinate system to the pixel coordinate system; $(f_x, f_y)$ express the relationship between the camera coordinate system and the imaging plane, i.e. the scaling along the u-axis and the v-axis; $(C_x, C_y)$ is the optical center of the camera; $\theta$ is the installation tilt angle of the camera (right-hand coordinate system); $(X_h, Y_h, Z_h)$ are the three-dimensional coordinates of depth-image pixel $(u_d, v_d)$ in the horizontal depth camera coordinate system; and $Z_d$ is the depth value at pixel $(u_d, v_d)$.
After the calibration is finished, subsequent common obstacle and special obstacle detection and obstacle avoidance decision processing can be carried out. The common obstacle detection and the special obstacle detection are carried out simultaneously and do not interfere with each other.
The common obstacle detection module segments obstacles through the preprocessing module according to the depth information of the RGB-D camera, extracts obstacle contours and convex hulls through the contour extraction module, and finally outputs the three-dimensional position information of obstacles through the obstacle coordinate output module.
The common obstacle detection module comprises a preprocessing module, a contour extraction module and an obstacle coordinate output module.
Fig. 3 shows the flow of the preprocessing module and the contour extraction module. The depth map sent by the image acquisition module is received and down-sampled two-fold to reduce the amount of data to process; the original depth map is then divided into two working depth maps according to different thresholds, one responsible for detecting low, short obstacles and the other for taller obstacles, which reduces the false detection rate and improves detection efficiency. A sketch of this split is given below.
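A minimal sketch of the two-band split, assuming the division is made by depth-value bands on the 16-bit depth map; the band limits (in millimetres) are illustrative assumptions, as the patent does not give them.

```cpp
#include <opencv2/opencv.hpp>

// Split the down-sampled depth map into two working maps so that low/short
// obstacles and taller obstacles are segmented with thresholds suited to each.
void splitByBand(const cv::Mat& depth,          // CV_16U, down-sampled
                 cv::Mat& lowBand, cv::Mat& highBand) {
    cv::inRange(depth, cv::Scalar(300),  cv::Scalar(2500), lowBand);   // near band
    cv::inRange(depth, cv::Scalar(2500), cv::Scalar(7000), highBand);  // far band
}
```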
The specific treatment steps are as follows:
step 2-1: three consecutive depth maps are acquired from the image acquisition module.
Step 2-2: and performing double down sampling and morphological dilation processing on the three acquired depth maps.
Step 2-3: the three depth maps are respectively differed with the calibrated ground background information in the set ROI area, the numerical value of the point meeting the threshold value is set to be 255, and the numerical value of the point not meeting the threshold value is set to be 0
Step 2-4: and extracting a second depth map named as a binary map P1, and superposing the three depth maps to obtain a binary map P2.
The principle of superposing the three depth maps is as follows: a pixel in P2 is set to 255 only if the same pixel is 255 in all three maps; if fewer than three of the maps have the value 255 at that pixel, it is set to 0.
Step 2-5: and performing morphological closing operation processing on the obtained binary images P1 and P2.
Step 2-6: and respectively calculating the outline of the obstacle and convex hull information in the binary image.
Step 2-7: and respectively calculating the area of the convex hull and the pixel position of the center of the convex hull in the two binary images, and setting the convex hull below different area threshold filtering thresholds according to different intervals of the pixel at the center of the convex hull.
Step 2-8: and calculating Y of the convex hull central pixel under the horizontal depth camera coordinate system, comparing the Y with the Y value of the same pixel point in the stored ground information, reserving the convex hull which is smaller than the ground Y and reaches a certain threshold value, and filtering the convex hull which is smaller than the ground Y and reaches the certain threshold value.
Once obstacle convex hull information is available, the three-dimensional coordinate information of obstacles within the obstacle avoidance space range is calculated.
Fig. 4 is a flowchart of the obstacle coordinate output module, which includes the following steps:
step 3-1: and loading the trained second gshost classification model, constructing an inference engine by using TensorRT, and storing the engine.
Step 3-2: the engine is loaded and deployed in c + +.
Step 3-3: and calculating the minimum outline frame of the residual convex hull after filtering.
Step 3-4: and removing the overlapped rectangular frames and combining the connected rectangular frames.
Step 3-5: when a plurality of rectangular frames exist, only the rectangular frame with the minimum depth average value in the rectangular frame is reserved.
Step 3-6: and mapping the rectangular box on the depth map into the RGB map.
The mapping formula is as follows:

$$\begin{bmatrix} X_d \\ Y_d \\ Z_d \end{bmatrix} = K_d^{-1} \begin{bmatrix} u_d \\ v_d \\ 1 \end{bmatrix} Z_d, \qquad \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_d \\ Y_d \\ Z_d \end{bmatrix} + T, \qquad Z_c \begin{bmatrix} u_c \\ v_c \\ 1 \end{bmatrix} = K_c \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}$$

where $Z_d$ is the depth value at pixel $(u_d, v_d)$; $(X_d, Y_d, Z_d)$ are the three-dimensional coordinates of depth-map pixel $(u_d, v_d)$ in the depth camera coordinate system; $(X_c, Y_c, Z_c)$ are the three-dimensional coordinates of the pixel point in the RGB camera coordinate system; $(u_c, v_c)$ are its pixel coordinates on the RGB image; $R$ and $T$ are respectively the rotation matrix and translation matrix from the depth camera to the RGB camera; $K_d$ is the depth camera intrinsic matrix and $K_c$ is the intrinsic matrix of the RGB camera.
Step 3-7: and calculating the area of the rectangular frame, and sending the rectangular frame in the RGB with the area size meeting the condition into a verification module.
Step 3-8: when the image data enters the verification module, the engine deployed in the step 3-2 is used for reasoning and judging whether the image data is a non-obstacle output 0 or not, and otherwise, judging that the image data is an obstacle output 1.
Step 3-9: if the output of the verification module is 0, the step 3-3 is returned to if the object in the rectangular frame is judged to be a non-obstacle, and the next step 3-10 is executed if the output 1 indicates that the object in the rectangular frame is an obstacle.
Step 3-10: and calculating the value of each pixel point in the rectangular frame of the obstacle on the X coordinate axis under the coordinate system of the depth camera, removing the pixel points of which the numerical values on the X coordinate axis are out of the threshold range, finding the nearest distance of the obstacle in the rest pixel points, and outputting the nearest distance to the obstacle avoidance decision module.
Special obstacle detection is performed simultaneously.
Fig. 5 is a flowchart of the special obstacle detection module.
The special obstacle detection module for detecting the obstacle comprises the following steps:
Step 4-1: load the trained yolov5 model (the model's identification categories are pedestrian and transfer robot), construct an inference engine through TensorRT, and save the engine.
Step 4-2: load and deploy the engine, and receive the RGB pictures from the image acquisition module for inference.
Step 4-3: obtain the pixel coordinates of the upper-left corner, and the length and width, of each target rectangular frame in the RGB image obtained after inference.
Step 4-4: convert the RGB pixels into depth-image pixels, and acquire the three-dimensional coordinates, in the horizontal depth camera coordinate system, of the pixels inside the rectangular frame.
Step 4-5: remove pixel points whose X-axis values in the horizontal depth camera coordinate system do not satisfy the threshold, obtain by filtering among the remaining pixel points the three-dimensional coordinates of the obstacle's closest point in the horizontal depth camera coordinate system, and output them to the obstacle avoidance decision module. A sketch of steps 4-4 and 4-5 follows.
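The following sketch walks the depth pixels inside a detection box, back-projects them, rotates them into the horizontal frame, applies the X-corridor filter, and returns the closest surviving point. The intrinsics and the angle come from the calibration module; the corridor bounds xMin/xMax and taking the minimum Z directly (rather than a statistical filter) are our simplifications.

```cpp
#include <opencv2/opencv.hpp>
#include <cfloat>
#include <cmath>

// Steps 4-4 / 4-5: nearest 3D obstacle point inside a detection box, in the
// horizontal depth camera frame, after filtering points outside [xMin, xMax].
bool nearestPointInBox(const cv::Mat& depth,         // CV_16U depth map
                       const cv::Rect& box,          // detection box in depth pixels
                       double fx, double fy, double cx, double cy,
                       double theta,                 // mounting angle (radians)
                       double xMin, double xMax,
                       cv::Point3d& nearest) {
    double bestZ = DBL_MAX;
    for (int v = box.y; v < box.y + box.height; ++v) {
        for (int u = box.x; u < box.x + box.width; ++u) {
            double z = depth.at<unsigned short>(v, u);
            if (z == 0) continue;                    // invalid depth (hole)
            double X = (u - cx) * z / fx;            // X is unchanged by a
            if (X < xMin || X > xMax) continue;      //   rotation about the X-axis
            double Y = (v - cy) * z / fy;
            double Yh = Y * std::cos(theta) - z * std::sin(theta);
            double Zh = Y * std::sin(theta) + z * std::cos(theta);
            if (Zh < bestZ) { bestZ = Zh; nearest = cv::Point3d(X, Yh, Zh); }
        }
    }
    return bestZ != DBL_MAX;                         // false if no valid point
}
```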
The obstacle avoidance decision module sets different obstacle avoidance ranges for the common and special obstacle detection modules respectively, and divides each range into three obstacle avoidance grade areas: level-1 obstacle avoidance decelerates by 30%, level-2 decelerates by 60%, and level-3 stops immediately.
As shown in fig. 6, the flow of the obstacle avoidance decision module processing obstacle avoidance information is as follows:
Step 5-1: receive the obstacle coordinate information output by the common obstacle detection module and the special obstacle detection module.
Step 5-2: if the obstacle appears in the level-3 obstacle avoidance area, process the obstacle information output by both the common and special obstacle detection modules; if obstacles appear in the level-1 or level-2 obstacle avoidance areas, preferentially process the obstacle information output by the special obstacle detection module.
Step 5-3: apply median filtering and amplitude-limiting filtering to the obstacle distance information.
Step 5-4: judge, from the finally processed obstacle distance information, which of the three obstacle avoidance grade areas the obstacle is in, select the corresponding obstacle avoidance strategy according to that area, and output the strategy to the movement control module, which drives the transfer robot to execute it and finally realize the obstacle avoidance function.
The above description is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make several modifications and variations without departing from the technical principle of the present invention, and such modifications and variations should also be regarded as falling within the protection scope of the present invention.
Claims (9)
1. An RGB-D based real-time obstacle avoidance system, the system comprising:
the system comprises an image acquisition module, a calibration module, a common obstacle detection module, a special obstacle detection module, a verification module, an obstacle avoidance decision module and a movement control module;
the image acquisition module is used for acquiring original depth information of the RGB-D camera and RGB information of the obstacle and respectively outputting the depth information and the RGB information to the calibration module, the common obstacle detection module and the special obstacle detection module;
the calibration module is used for acquiring the depth information acquired by the image acquisition module to calibrate the ground background and calibrate the camera installation angle so as to acquire the ground depth information and the camera installation angle information;
the common obstacle detection module is used for processing the depth information and RGB information output by the image acquisition module to extract obstacles; a secondary judgment by the verification module ensures that each detected obstacle is a true obstacle rather than a false detection, so that obstacles are finally detected accurately and their position information is output; the common obstacle detection module comprises a preprocessing module, a contour extraction module and an obstacle coordinate output module;
the preprocessing module comprises the following steps:
step 2-1: acquiring three continuous depth maps from an image acquisition module;
step 2-2: carrying out two-fold down-sampling and morphological dilation processing on the three acquired depth maps;
step 2-3: differencing each of the three depth maps against the calibrated ground background information in the set ROI area, setting the value of points meeting the difference threshold to 255 and the value of points not meeting the threshold to 0;
step 2-4: keeping the binarized second depth map as binary map P1, and superposing the three binarized depth maps together to obtain binary map P2;
step 2-5: performing morphological closing operation processing on the obtained binary image P1 and the binary image P2;
the contour extraction module comprises the following steps:
step 3-1: respectively calculating the outline and convex hull information of the obstacles in the binary image P1 and the binary image P2;
step 3-2: respectively calculating, in binary map P1 and binary map P2, the area of each convex hull and the pixel position of its center, applying area filtering thresholds that differ according to the interval in which the hull's center pixel lies, and filtering out hulls below the threshold;
step 3-3: calculating the Y value of each hull's center pixel in the horizontal depth camera coordinate system and comparing it with the stored ground Y value of the same pixel point; hulls whose Y value is smaller than the ground Y by at least a certain threshold are retained, and the others are filtered out;
the obstacle coordinate output module includes the steps of:
step 4-1: calculating the minimum circumscribed frame of the residual convex hull after filtering in the step 3-2;
step 4-2: removing overlapping rectangular frames and merging connected rectangular frames;
step 4-3: when a plurality of rectangular frames exist, keeping only the rectangular frame with the smallest mean depth value;
step 4-4: mapping the rectangular frame on the depth map into the RGB map;
the mapping formula is as follows:
$$\begin{bmatrix} X_d \\ Y_d \\ Z_d \end{bmatrix} = K_d^{-1} \begin{bmatrix} u_d \\ v_d \\ 1 \end{bmatrix} Z_d, \qquad \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_d \\ Y_d \\ Z_d \end{bmatrix} + T, \qquad Z_c \begin{bmatrix} u_c \\ v_c \\ 1 \end{bmatrix} = K_c \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}$$

wherein $Z_d$ is the depth value at pixel $(u_d, v_d)$; $(X_d, Y_d, Z_d)$ are the three-dimensional coordinates of depth-map pixel $(u_d, v_d)$ in the depth camera coordinate system; $(X_c, Y_c, Z_c)$ are the three-dimensional coordinates of the pixel point in the RGB camera coordinate system; $(u_c, v_c)$ are its pixel coordinates on the RGB image; $R$ and $T$ are respectively the rotation matrix and translation matrix from the depth camera to the RGB camera; $K_d$ is the depth camera intrinsic matrix and $K_c$ is the intrinsic matrix of the RGB camera;
step 4-5: calculating the area of each rectangular frame, sending the RGB patches of frames whose area satisfies the threshold condition to the verification module, and verifying whether they contain ground or an obstacle;
step 4-6: if the verification module outputs 0, the object in the rectangular frame is judged a non-obstacle and the process returns to step 4-1; if it outputs 1, the object in the rectangular frame is an obstacle and the next step 4-7 is executed;
step 4-7: calculating, for each pixel point inside the obstacle's rectangular frame, its value on the X coordinate axis in the horizontal depth camera coordinate system, removing pixel points whose X values fall outside the threshold range, finding the nearest obstacle distance among the remaining pixel points, and outputting it to the obstacle avoidance decision module;
the special obstacle detection module is used for detecting pedestrians and other robots from the depth information and RGB information output by the image acquisition module and outputting the obstacle position information;
the obstacle avoidance decision module is used for processing obstacle position information output by the common obstacle detection module and the special obstacle detection module to generate an obstacle avoidance strategy;
and the movement control module is used for controlling the movement of the robot according to the obstacle avoidance strategy output by the obstacle avoidance decision module to realize the obstacle avoidance function.
2. The RGB-D based real-time obstacle avoidance system according to claim 1, wherein: the RGB-D camera is installed at the top of the mobile transfer robot, and its inclination angle ensures that no part of the robot body is visible within the camera's field of view.
3. The RGB-D based real-time obstacle avoidance system according to claim 1, wherein: the real-time obstacle avoidance system adopts multi-threading, the modules compute in parallel, and the detection frame rate is adjusted automatically according to the travel speed of the mobile transfer robot.
4. The real-time obstacle avoidance system based on RGB-D as claimed in claim 1, wherein: the calibration module acquires ground depth information and camera installation angle information, and comprises the following steps:
step 1-1: enabling the RGB-D camera to face a flat ground, and acquiring camera depth information through an image acquisition module;
step 1-2: starting to calibrate the camera installation angle: randomly selecting four pixels in the middle of the depth map and solving, by coordinate conversion, the Y values in the depth camera coordinate system corresponding to the pixel points;
step 1-3: sweeping candidate camera installation angles starting from 0 degrees, incrementing by 1 degree each time, until 180 degrees;
step 1-4: calculating, through the candidate camera installation angle, the Y values of the four pixel points in the horizontal depth camera coordinate system; if the absolute differences between the Y values of the four pixel points are all within the threshold range, saving the angle, otherwise returning to step 1-2, wherein the horizontal depth camera coordinate system is obtained by rotating the depth camera coordinate system around its origin until the Z axis is parallel to the ground;
step 1-5: after the calibration angle is finished, the ground background information is calibrated;
step 1-6: obtaining a depth map and applying threshold filtering and hole filling: depth values exceeding 7000 are limited to the upper bound, and zero-valued (hole) pixels are filled with the upper limit of the depth value;
step 1-7: accumulating 600 processed depth maps and computing the mean depth value of each pixel point; storing, for each pixel point, its Y value in the horizontal depth camera coordinate system, and storing the maximum difference between pixel points within each row of pixels.
5. The RGB-D based real-time obstacle avoidance system according to claim 4, wherein the specific calculation process of calculating Y values of four pixel points in a horizontal depth camera coordinate system in steps 1-2 and 1-4 is as follows:
$$\begin{bmatrix} X_h \\ Y_h \\ Z_h \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix} K_d^{-1} \begin{bmatrix} u_d \\ v_d \\ 1 \end{bmatrix} Z_d, \qquad K_d = \begin{bmatrix} f_x & 0 & C_x \\ 0 & f_y & C_y \\ 0 & 0 & 1 \end{bmatrix}$$

wherein $K_d$ is the depth camera intrinsic matrix that converts the camera coordinate system to the pixel coordinate system; $(f_x, f_y)$ express the relationship between the camera coordinate system and the imaging plane, i.e. the scaling along the u-axis and the v-axis; $(C_x, C_y)$ is the optical center of the camera; $\theta$ is the mounting tilt angle of the camera; $(X_h, Y_h, Z_h)$ are the three-dimensional coordinates of depth-image pixel $(u_d, v_d)$ in the horizontal depth camera coordinate system; and $Z_d$ is the depth value at pixel $(u_d, v_d)$.
6. The RGB-D based real-time obstacle avoidance system according to claim 1, wherein: the processing procedure of the verification module comprises the following steps:
step 5-1: loading the trained Ghost binary classification model, constructing an inference engine with TensorRT, saving the engine, and loading and deploying the engine in C++;
step 5-2: when image data is sent in, using the engine to infer whether it is a non-obstacle, in which case 0 is output, otherwise judging it an obstacle and outputting 1.
7. The RGB-D based real-time obstacle avoidance system according to claim 1, wherein: the special obstacle detection module for detecting obstacles comprises the following steps:
step 6-1: loading a trained yolov5 model, constructing an inference engine through TensorRT inference, and storing the engine;
step 6-2: loading and deploying an engine, receiving the RGB image of the image acquisition module and carrying out reasoning;
step 6-3: acquiring the pixel coordinates of the upper-left corner, and the length and width, of each target rectangular frame in the RGB image obtained after inference;
step 6-4: converting the RGB pixels into depth-image pixels, and acquiring the three-dimensional coordinates, in the depth camera coordinate system, of the pixels inside the rectangular frame;
step 6-5: removing pixel points whose X-axis values in the horizontal depth camera coordinate system do not satisfy the threshold, obtaining by filtering among the remaining pixel points the three-dimensional coordinates of the obstacle point in the depth camera coordinate system, and outputting them to the obstacle avoidance decision module.
8. The RGB-D based real-time obstacle avoidance system according to claim 1, wherein: the obstacle avoidance decision module sets different obstacle avoidance ranges for the common and special obstacle detection modules respectively, and divides each range into three obstacle avoidance grade areas; level-1 obstacle avoidance decelerates by 30%, level-2 decelerates by 60%, and level-3 stops immediately.
9. The RGB-D based real-time obstacle avoidance system of claim 8, wherein: the obstacle avoidance decision module processes the obstacle avoidance information to make an obstacle avoidance decision, and the specific process is as follows:
step 7-1: receiving obstacle coordinate information output by a common obstacle detection module and a special obstacle detection module;
step 7-2: if the obstacle appears in the level-3 obstacle avoidance area, processing the obstacle information output by both the common and special obstacle detection modules; if the obstacle appears in the level-1 or level-2 obstacle avoidance area, preferentially processing the obstacle information output by the special obstacle detection module;
step 7-3: applying median filtering and amplitude-limiting filtering to the obstacle distance information;
step 7-4: judging, from the finally processed obstacle distance information, which of the three obstacle avoidance grade areas the obstacle is in, selecting the corresponding obstacle avoidance strategy according to that area, and outputting the strategy to the movement control module to control the mobile transfer robot to execute the obstacle avoidance strategy, finally realizing the obstacle avoidance function.