CN117169872B - Robot autonomous navigation system based on stereo camera and millimeter wave radar information fusion - Google Patents

Robot autonomous navigation system based on stereo camera and millimeter wave radar information fusion

Info

Publication number
CN117169872B
CN117169872B (Application CN202311081136.5A)
Authority
CN
China
Prior art keywords
image
robot
millimeter wave
wave radar
fusion
Prior art date
Legal status
Active
Application number
CN202311081136.5A
Other languages
Chinese (zh)
Other versions
CN117169872A (en)
Inventor
Li Jun (李军)
Qiu Shilin (丘仕林)
Zhu Zhihua (朱志华)
Ge Jianhua (葛建华)
Current Assignee
Guangzhou Zhuguan Technology Co ltd
Original Assignee
Guangzhou Zhuguan Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Zhuguan Technology Co ltd filed Critical Guangzhou Zhuguan Technology Co ltd
Priority to CN202311081136.5A priority Critical patent/CN117169872B/en
Publication of CN117169872A publication Critical patent/CN117169872A/en
Application granted granted Critical
Publication of CN117169872B publication Critical patent/CN117169872B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Radar Systems Or Details Thereof (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a robot autonomous navigation system and computer equipment based on stereo camera and millimeter wave radar information fusion. The system comprises: a visual perception module, which acquires an image of a preset object and calculates distance data between the preset object and the robot; a millimeter wave radar perception module, which transmits millimeter wave signals and receives the signals reflected by obstacles to obtain obstacle data; a vision and millimeter wave radar fusion module, which processes and fuses the acquired data to generate environment perception information; and a robot module, which performs autonomous navigation according to the environment perception information. The robot autonomous navigation system and computer equipment based on stereo camera and millimeter wave radar information fusion can improve the accuracy and efficiency of sewer inspection and reduce the occurrence of human error.

Description

Robot autonomous navigation system based on stereo camera and millimeter wave radar information fusion
Technical Field
The invention relates to the field of autonomous navigation, in particular to a robot autonomous navigation system based on stereo camera and millimeter wave radar information fusion and computer equipment.
Background
Urban sewer systems are complex pipe networks that carry rainwater, industrial wastewater, domestic sewage and other effluents; regional municipal reports indicate that such sewage accounts for about 30% of the total flow. Because of pipe-wall ageing, vehicle loading, chemical reactions and other factors, pipes in a sewer system are easily damaged, allowing sewage to leak into the surrounding environment and seriously threatening the urban environment and public health. Maintenance and management of the sewer system should therefore be strengthened so that damaged pipes are repaired in time and the impact of sewage on the surroundings is reduced.
Currently, many teams use a cable-tethered robot equipped with an onboard video camera system for sewer inspection. Such a robot cannot autonomously navigate or assess the condition of the sewer, so an operator remotely drives it, visually inspects the video feed, and records any obvious damage or abnormality on video. The reliability of such systems depends on the experience of the operator and is prone to human error and inefficiency.
Disclosure of Invention
The invention provides a robot autonomous navigation system and computer equipment based on stereo camera and millimeter wave radar information fusion.
A robot autonomous navigation system based on stereo camera and millimeter wave radar information fusion, comprising:
visual perception module: the method comprises the steps of acquiring an image of a preset object, and calculating distance data between the preset object and a robot according to the acquired image;
millimeter wave radar perception module: the device is used for transmitting millimeter wave signals, receiving signals reflected by the obstacle and obtaining obstacle data according to the received signals;
vision and millimeter wave radar fusion module: performing data processing and fusion according to the distance data acquired by the visual perception module and the obstacle data acquired by the millimeter wave radar perception module to generate environment perception information;
and a robot module: and performing autonomous navigation according to the environment sensing information.
Preferably, the obstacle data includes a distance of the obstacle from the robot, a speed of the obstacle itself, and an angle at which the robot looks at the obstacle.
Preferably, in the vision and millimeter wave radar fusion module, the data processing and fusion includes data denoising, calibration and coordinate system conversion, then data alignment and data association are performed according to the processed data, fusion is performed according to the aligned and associated data, and finally environment perception information is generated.
Preferably, in the visual perception module, acquiring the image of the preset object specifically comprises: processing the characteristic pixels of the preset object according to a deep learning network method, then realizing automatic imaging of the preset object, and finally acquiring the image of the preset object.
Preferably, after an image of the preset object is acquired, edge detection is performed on the image using the Canny edge detection algorithm and the number of edge points in the image is evaluated. If the number of edge points N_edge in the image is greater than T, the robot is judged to be very close to the preset object, and the LC matching metric is then used to calculate the distance data between the preset object and the robot; if N_edge is not greater than T, processing of the image is stopped and the next image is processed.
Preferably, the LC matching metric calculation specifically includes the following steps:
f1, calculating the difference between two horizontally adjacent pixels in the left-eye image of the robot according to the following formula:
D_L(x, y) = I_L(x, y) - I_L(x-1, y),
wherein D_L is the adjacent-pixel difference of the left-eye image, I_L is the left-eye image, x is the position of the pixel on the x-axis (horizontal direction) of the image, and y is the position of the pixel on the y-axis (vertical direction) of the image;
f2, calculating the difference between two horizontally adjacent pixels in the right-eye image of the robot according to the following formula:
D_R(x, y) = I_R(x, y) - I_R(x-1, y),
wherein D_R is the adjacent-pixel difference of the right-eye image and I_R is the right-eye image;
f3, calculating the sum of the adjacent-pixel differences of the left-eye and right-eye images of the robot, which represents distinctiveness, according to the following formula:
C_d(x, y) = |D_L(x, y)| + |D_R(x+d, y)|,
wherein d is the horizontal displacement difference and C_d is the sum of the adjacent-pixel differences of the left-eye and right-eye images;
f4, calculating the absolute value of the difference between the adjacent-pixel differences of the left-eye and right-eye images of the robot, which represents the similarity at displacement d, according to the following formula:
G_d(x, y) = |D_L(x, y) - D_R(x+d, y)|,
wherein G_d is the absolute value of the difference between the adjacent-pixel differences of the left-eye and right-eye images;
f5, calculating the matching metric E_d for a given displacement d according to the following formula:
E_d(x, y) = |D_L(x, y)| + |D_R(x+d, y)| - |D_L(x, y) - D_R(x+d, y)| (1)
wherein E_d is the correlation of the two pixels at horizontal displacement difference d, i.e. E_d = C_d - G_d;
if the matched pixel is a non-feature pixel, the brightness difference between horizontally adjacent pixels is small, D_L(x, y) and D_R(x+d, y) both have small values, and the value of E_d is therefore also small; conversely, if the matched pixel is a feature pixel, it has a large brightness difference in the horizontal direction, which results in a large E_d.
Preferably, in the millimeter wave radar sensing module, the signal reflected from an obstacle is received and subjected to fast Fourier transforms (FFTs) to obtain an RF image, and radar points are extracted from the RF image according to a peak detection algorithm.
Preferably, the specific steps of receiving the signal reflected from the obstacle and performing fast Fourier transforms (FFTs) to obtain the RF image include: processing the reflected signal, performing a fast Fourier transform (FFT) on the samples to estimate the range of the reflection, then processing the reflected signal again with a low-pass filter (LPF), and finally performing a second FFT on the samples from the different receiver antennas to estimate the azimuth angle of the reflection and obtain the final RF image.
Preferably, three dimensions of angle, distance and chirp are obtained from the final RF image, a three-dimensional (n, 1) convolution operation is then performed on this three-dimensional data, the chirp dimension is compressed to one by max pooling to obtain an angle-distance feature map, the same operation is performed on every frame of the RF image, and finally the feature radar points extracted from all frames in the RF segment are connected in time order.
Preferably, the peak values of the feature radar points are obtained from the temporally connected feature radar points, and the peaks are marked for data fusion.
The robot autonomous navigation system based on stereo camera and millimeter wave radar information fusion provided by the invention requires no manual control: the current position is calculated from the characteristic pixels of the preset object, so that accurate navigation is achieved, the accuracy and efficiency of sewer inspection can be improved, and the occurrence of human error is reduced, thereby better safeguarding the urban environment and public health.
Drawings
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular description of preferred embodiments of the invention, as illustrated in the accompanying drawings.
Fig. 1 is a schematic view of a joint image captured by the robot in a sewer.
Fig. 2 is a schematic view of a manhole image captured by the robot in a sewer.
Fig. 3 is a schematic view of a robot provided by the present invention.
FIG. 4 is a flow chart of the fusion of data provided by the present invention.
Fig. 5 is a schematic diagram of automatic imaging (joint) in a stereo camera according to the present invention.
Fig. 6 is a flow chart for generating the distance data required for robot navigation according to the present invention.
Fig. 7 is a schematic diagram of the flow by which radar signals generate an RF image according to the present invention.
Detailed Description
The following detailed description of the present invention is provided in connection with the accompanying drawings and specific embodiments so that those skilled in the art may better understand the present invention and practice it, but the examples are not to be construed as limiting the present invention.
The invention provides a robot autonomous navigation system based on stereo camera and millimeter wave radar information fusion, which comprises:
visual perception module: acquires an image of a preset object and calculates distance data between the preset object and the robot from the acquired image. The preset objects are the manholes (shown in fig. 1) and joints (shown in fig. 2) that the robot sees after entering the sewer; they serve as landmarks for the robot's autonomous navigation and thus facilitate it. The device carrying the visual perception module is a stereo camera.
Millimeter wave radar perception module: the device is used for transmitting millimeter wave signals, receiving signals reflected by the obstacle and obtaining obstacle data according to the received signals; the device with the millimeter wave radar sensing module is referred to as a millimeter wave radar.
Vision and millimeter wave radar fusion module: performing data processing and fusion according to the distance data acquired by the visual perception module and the obstacle data acquired by the millimeter wave radar perception module to generate environment perception information;
and a robot module: autonomous navigation is performed according to the environment perception information; the device carrying the robot module is referred to as the robot (as shown in fig. 3).
In a preferred embodiment, the obstacle data includes the distance of the obstacle from the robot, the speed of the obstacle itself and the angle at which the robot looks at the obstacle.
In a preferred embodiment, in the vision and millimeter wave radar fusion module, data processing and fusion include data denoising, calibration and coordinate system conversion, then data alignment and data association are performed according to the processed data, fusion is performed according to the aligned and associated data, and finally environment perception information is generated.
Referring to fig. 4, specifically, the data obtained by the vision and millimeter wave radar fusion module refers to preset object position information and information such as obstacle distance, speed and angle, wherein the specific steps of fusing the data are as follows:
h1, data alignment: the data obtained by the visual perception module and the millimeter wave radar perception module are aligned to ensure that the information from the two modules lies in the same coordinate system.
h2, data association: the data obtained by the visual perception module and the millimeter wave radar perception module are associated, matching the objects detected by the binocular stereo camera with the obstacles detected by the millimeter wave radar.
h3, data fusion: the information such as obstacle distance, speed and angle obtained by the visual perception module and the millimeter wave radar perception module is fused to generate more accurate and complete environment perception information. The specific fusion method can adopt Kalman filtering, extended Kalman filtering and other methods.
h4, environment perception output: the fused environment perception information is output for decision-making and control of the sewer robot; the output information includes the preset object position, obstacle distance, speed, angle and so on.
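As a concrete illustration of steps h1-h4, the following Python sketch aligns a radar (range, azimuth) detection into the camera frame, associates it with a camera detection by nearest neighbour, and fuses the two distance estimates with a static inverse-variance (Kalman-style) update. The sensor offset, noise variances and gating threshold are assumed example values, not parameters disclosed in the patent.

```python
# Illustrative sketch of the h1-h4 fusion pipeline (not the patented implementation).
import numpy as np

RADAR_OFFSET = np.array([0.0, 0.10])   # assumed radar position in the camera frame (m)

def align_radar_point(r, azimuth_rad):
    """h1: convert a radar (range, azimuth) detection into camera-frame (x, z) coordinates."""
    p = np.array([r * np.sin(azimuth_rad), r * np.cos(azimuth_rad)])
    return p + RADAR_OFFSET

def associate(camera_xz, radar_xz_list, gate=0.5):
    """h2: nearest-neighbour association inside a fixed distance gate (m)."""
    if not radar_xz_list:
        return None
    d = [np.linalg.norm(camera_xz - r) for r in radar_xz_list]
    i = int(np.argmin(d))
    return i if d[i] < gate else None

def fuse_distance(d_cam, var_cam, d_radar, var_radar):
    """h3: inverse-variance (static Kalman) fusion of the two distance estimates."""
    k = var_cam / (var_cam + var_radar)
    fused = d_cam + k * (d_radar - d_cam)
    fused_var = (1.0 - k) * var_cam
    return fused, fused_var

if __name__ == "__main__":
    cam_obj = np.array([0.4, 3.0])                                  # stereo-camera estimate of a joint (x, z)
    radar_pts = [align_radar_point(3.1, 0.13), align_radar_point(8.0, -0.5)]
    idx = associate(cam_obj, radar_pts)
    if idx is not None:
        d_cam, d_radar = np.linalg.norm(cam_obj), np.linalg.norm(radar_pts[idx])
        fused, var = fuse_distance(d_cam, 0.04, d_radar, 0.01)
        print(f"h4: fused obstacle distance = {fused:.2f} m (variance {var:.3f})")  # perception output
```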
Referring to fig. 6, in the present invention, the specific steps of calculating the distance data between a preset object and the robot are:
s1: starting;
s2: acquiring characteristic pixels of a preset object;
s3: according to the obtained characteristic pixels of the preset object, automatic imaging of the preset object is realized;
s4: performing edge detection on an image of a preset object by using a canny edge detection algorithm;
s5: judging whether the number of edge points N_edge in the image is greater than T;
s6: if N_edge is not greater than T, the visual perception module stops processing the image and starts to process the next image;
s7: if N_edge is greater than T, matching against the characteristic pixels of the preset object;
s8: and finally, calculating distance data between the preset object and the robot by using the LC matching metric.
Stereo matching (matching the characteristic pixels of the preset object) simulates the visual disparity that occurs when human eyes observe an object, which is what allows depth and shape to be perceived. In stereo matching, images taken from two or more different viewpoints or times are usually referred to as stereo images. There is a slight parallax between these images, i.e. a difference between the pixel positions of the same object in the different images. By analyzing these disparities, the depth of different points on the object surface can be deduced.
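For intuition, the standard rectified-stereo relation Z = f * B / d converts such a pixel disparity into depth; the focal length and baseline in the sketch below are assumed example values and are not parameters disclosed in the patent.

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d.
# f (focal length in pixels) and B (baseline in metres) are assumed values.
def depth_from_disparity(disparity_px: float, focal_px: float = 700.0, baseline_m: float = 0.12) -> float:
    """Return the depth (m) of a point whose left/right pixel positions differ by disparity_px."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_px * baseline_m / disparity_px

# Example: a joint seen with a 28-pixel disparity lies at 700 * 0.12 / 28 = 3.0 m.
print(depth_from_disparity(28.0))
```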
Referring to fig. 6, in the preferred embodiment, in the visual perception module, acquiring the image of the preset object specifically comprises: processing the characteristic pixels of the preset object according to a deep learning network method, then realizing automatic imaging of the preset object, and finally acquiring the image of the preset object. The deep learning network method refers to a method that learns the image features of the preset object; the automatically imaged images of the preset object are divided into manhole automatic imaging images (not shown) and joint automatic imaging images (as shown in fig. 5).
Referring to fig. 6, in a preferred embodiment, after an image of the preset object is acquired, edge detection is performed on the image using the Canny edge detection algorithm and the number of edge points in the image is evaluated. If the number of edge points N_edge in the image is greater than T, the robot is judged to be very close to the preset object, and the LC matching metric is then used to calculate the distance data between the preset object and the robot; if N_edge is not greater than T, processing of the image is stopped and the next image is processed.
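A minimal sketch of this edge-count check, assuming OpenCV is available; the Canny thresholds, the edge-point threshold T and the file name are illustrative values only, not values disclosed in the patent.

```python
# Hedged sketch of steps S4-S6: Canny edge detection plus an edge-count gate.
import cv2
import numpy as np

T = 5000  # assumed edge-point threshold

def image_is_close_enough(gray_image: np.ndarray) -> bool:
    """S4/S5: run Canny edge detection and compare the edge-point count N_edge with T."""
    edges = cv2.Canny(gray_image, 100, 200)
    n_edge = int(np.count_nonzero(edges))
    return n_edge > T

if __name__ == "__main__":
    frame = cv2.imread("joint.png", cv2.IMREAD_GRAYSCALE)  # hypothetical sewer-joint frame
    if frame is not None and image_is_close_enough(frame):
        print("N_edge > T: robot is close to the preset object, run LC matching (S7/S8)")
    else:
        print("N_edge <= T: skip this frame and process the next image (S6)")
```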
Referring to fig. 6, in a preferred embodiment, the LC matching metric calculation specifically includes the steps of:
f1, calculating the difference between two horizontally adjacent pixels in the left-eye image of the robot according to the following formula:
D_L(x, y) = I_L(x, y) - I_L(x-1, y),
wherein D_L is the adjacent-pixel difference of the left-eye image, I_L is the left-eye image, x is the position of the pixel on the x-axis (horizontal direction) of the image, and y is the position of the pixel on the y-axis (vertical direction) of the image;
f2, calculating the difference between two horizontally adjacent pixels in the right-eye image of the robot according to the following formula:
D_R(x, y) = I_R(x, y) - I_R(x-1, y),
wherein D_R is the adjacent-pixel difference of the right-eye image and I_R is the right-eye image;
f3, calculating the sum of the adjacent-pixel differences of the left-eye and right-eye images of the robot, which represents distinctiveness, according to the following formula:
C_d(x, y) = |D_L(x, y)| + |D_R(x+d, y)|,
wherein d is the horizontal displacement difference and C_d is the sum of the adjacent-pixel differences of the left-eye and right-eye images;
f4, calculating the absolute value of the difference between the adjacent-pixel differences of the left-eye and right-eye images of the robot, which represents the similarity at displacement d, according to the following formula:
G_d(x, y) = |D_L(x, y) - D_R(x+d, y)|,
wherein G_d is the absolute value of the difference between the adjacent-pixel differences of the left-eye and right-eye images;
f5, calculating the matching metric E_d for a given displacement d according to the following formula:
E_d(x, y) = |D_L(x, y)| + |D_R(x+d, y)| - |D_L(x, y) - D_R(x+d, y)| (1)
wherein E_d is the correlation of the two pixels at horizontal displacement difference d, i.e. E_d = C_d - G_d;
the metric is maximized with respect to the fixed displacement d; specifically: if the matched pixel is not a characteristic pixel of the preset object, the brightness difference between horizontally adjacent pixels is small, D_L(x, y) and D_R(x+d, y) both have small values, and the value of E_d is therefore also small. Conversely, if the matched pixel is a characteristic pixel of the preset object, it has a large brightness difference in the horizontal direction, which results in a large E_d, so that isolated pixels matching the characteristic pixels of the preset object are found.
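The following sketch evaluates the metric of equation (1) along one image row, using the reconstructed form E_d = C_d - G_d; the array contents and the displacement d are made-up example data, not values from the patent.

```python
# Hedged sketch of the LC matching metric (steps F1-F5) on a single scanline.
import numpy as np

def lc_matching_metric(left_row: np.ndarray, right_row: np.ndarray, d: int) -> np.ndarray:
    """Return E_d(x) for every pixel x of one scanline at horizontal displacement d."""
    # F1/F2: horizontal adjacent-pixel differences (D_L, D_R), first element padded to 0
    D_L = np.diff(left_row.astype(np.float64), prepend=left_row[0])
    D_R = np.diff(right_row.astype(np.float64), prepend=right_row[0])
    D_R_shift = np.roll(D_R, -d)            # D_R(x + d, y); note np.roll wraps at the border
    C_d = np.abs(D_L) + np.abs(D_R_shift)   # F3: distinctiveness
    G_d = np.abs(D_L - D_R_shift)           # F4: similarity at displacement d
    return C_d - G_d                        # F5: matching metric E_d

if __name__ == "__main__":
    left = np.array([10, 10, 80, 10, 10, 10], dtype=float)
    right = np.array([10, 10, 10, 80, 10, 10], dtype=float)   # the same edge shifted by one pixel
    print(lc_matching_metric(left, right, d=1))                # large E_d at the feature pixel
```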
Referring to fig. 7, in the preferred embodiment, in the millimeter wave radar sensing module, the signal reflected from an obstacle is received and subjected to fast Fourier transforms (FFTs) to obtain an RF image, and radar points are extracted from the RF image according to a peak detection algorithm.
Referring to fig. 7, in a preferred embodiment, the specific steps of receiving the signal reflected from the obstacle and performing fast Fourier transforms (FFTs) to obtain the RF image include: processing the reflected signal, performing a fast Fourier transform (FFT) on the samples to estimate the range of the reflection, then processing all reflected signals again with a low-pass filter (LPF), and finally performing a second FFT on the samples from the different receiver antennas to estimate the azimuth angle of the reflection and obtain the final RF image. Processing the reflected signal includes denoising and filtering. Processing all reflected signals again with the low-pass filter (LPF) specifically means that the LPF removes high-frequency noise from all reflected signals at a rate of 30 FPS.
Referring to fig. 7, A, B and C denote different objects in the field of view of the millimeter wave radar, t-f denotes the time-frequency diagram, Ts denotes the duration of a chirp (frequency-modulated signal), B in the time-frequency diagram denotes the signal bandwidth, and τ denotes the delay between the moment the transmitted signal is emitted and the moment the signal reflected by the object reaches the receiving antenna. The samples of the receiver antennas refer to the array antenna arranged in the millimeter wave sensor: the azimuth angle of an object can be calculated from the echo signals (samples) of the same object received by the different antennas.
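A hedged numpy sketch of this range-FFT, low-pass, angle-FFT chain; the chirp count, sample count, antenna count and the simple moving-average filter are assumptions for illustration, and a real FMCW pipeline would use the radar's actual ADC data and calibration.

```python
# Minimal sketch of turning raw radar samples into a range-azimuth RF image.
import numpy as np

N_SAMPLES, N_CHIRPS, N_RX = 256, 64, 8          # ADC samples per chirp, chirps per frame, RX antennas

def rf_image(adc_cube: np.ndarray) -> np.ndarray:
    """adc_cube: complex array (N_RX, N_CHIRPS, N_SAMPLES) -> range-azimuth magnitude image."""
    range_fft = np.fft.fft(adc_cube, axis=2)                    # first FFT: range bins
    # crude low-pass: moving average along slow time to suppress high-frequency noise
    kernel = np.ones(4) / 4.0
    filtered = np.apply_along_axis(lambda s: np.convolve(s, kernel, mode="same"), 1, range_fft)
    angle_fft = np.fft.fftshift(np.fft.fft(filtered, n=64, axis=0), axes=0)  # second FFT: azimuth
    return np.abs(angle_fft).mean(axis=1)                        # average over chirps -> (azimuth, range)

if __name__ == "__main__":
    cube = (np.random.randn(N_RX, N_CHIRPS, N_SAMPLES)
            + 1j * np.random.randn(N_RX, N_CHIRPS, N_SAMPLES))   # stand-in for real ADC data
    print(rf_image(cube).shape)                                  # (64, 256) range-azimuth image
```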
In a preferred embodiment, three dimensions of angle, distance and chirp are obtained from the final RF image, a three-dimensional (n, 1) convolution operation is then performed on this three-dimensional data, the chirp dimension is compressed to one by max pooling to obtain an angle-distance feature map, the same operation is performed on every frame of the RF image, and finally the features extracted from all frames in the input RF segment are connected in time order. The three-dimensional (n, 1) convolution operation in the present invention employs a deformable convolution network (DCN), which handles deformed objects in image-based object detection.
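The PyTorch sketch below shows one plausible reading of the convolution-plus-max-pooling step that collapses the chirp dimension into an angle-distance feature map; the channel count and kernel size are assumptions, and a deformable convolution (DCN) layer would stand in for the plain Conv3d in the patented variant.

```python
# Hedged sketch: compress the chirp dimension of an RF frame into a 2-D feature map.
import torch
import torch.nn as nn

class ChirpCompressor(nn.Module):
    def __init__(self, n=3, features=16):
        super().__init__()
        # input layout: (batch, 1, chirps, angle, distance)
        self.conv = nn.Conv3d(1, features, kernel_size=(n, 3, 3), padding=(n // 2, 1, 1))
        self.act = nn.ReLU()

    def forward(self, rf):                         # rf: (batch, 1, chirps, angle, distance)
        x = self.act(self.conv(rf))
        x = torch.amax(x, dim=2)                   # max-pool the chirp dimension down to one
        return x                                   # (batch, features, angle, distance) feature map

if __name__ == "__main__":
    frame = torch.randn(1, 1, 64, 64, 256)         # one RF frame: 64 chirps, 64 angles, 256 range bins
    fmap = ChirpCompressor()(frame)
    print(fmap.shape)                              # torch.Size([1, 16, 64, 256])
    # Per-frame feature maps like this are then concatenated in time order across the RF segment.
```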
In a preferred example, the peak values of the feature radar points are obtained from the temporally connected feature radar points, and the peaks are marked for data fusion. The data fusion uses the vision and millimeter wave radar fusion (CRF) method: the visual perception module detects the category and 3D position of the preset object, and the detection is first projected from visual perception module coordinates into the range-azimuth coordinates of the millimeter wave radar sensing module to obtain a projection position in the range-azimuth coordinates; (x_c, z_c) represents the object position in visual perception module coordinates, and (x_or, z_or) represents the position of the millimeter wave radar sensing module in visual perception module coordinates, obtained from calibration alignment of the sensor system. The peaks detected by the CFAR algorithm lie in the same millimeter wave radar range-azimuth coordinates. Finally, a fusion algorithm is applied to the input RF image for estimation.
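A small sketch of the coordinate projection described above, under the assumption that (x_or, z_or) is the radar origin expressed in camera coordinates and that azimuth is measured from the radar boresight (z-axis); the numeric values are illustrative only.

```python
# Hedged sketch: project a camera-frame detection into radar range-azimuth coordinates.
import math

def camera_to_range_azimuth(x_c, z_c, x_or, z_or):
    """Project a camera-frame detection (x_c, z_c) into (range, azimuth) seen from the radar."""
    dx, dz = x_c - x_or, z_c - z_or
    rng = math.hypot(dx, dz)            # radial distance from the radar
    azimuth = math.atan2(dx, dz)        # 0 rad along the radar boresight (z-axis)
    return rng, azimuth

# Usage: a manhole detected by the stereo camera at (0.4 m, 3.0 m), with the radar mounted
# 0.1 m behind the camera, maps to roughly (3.13 m, 0.13 rad); CFAR peaks extracted from the
# RF image live in the same range-azimuth frame, so the two detections can be matched and fused.
print(camera_to_range_azimuth(0.4, 3.0, 0.0, -0.1))
```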
The foregoing description covers only the preferred embodiments of the present invention and is not intended to limit its scope; any equivalent structures or equivalent process transformations derived from this description, whether used directly or indirectly in other related technical fields, are likewise included within the scope of protection of the present invention.

Claims (7)

1. A robot autonomous navigation system based on stereo camera and millimeter wave radar information fusion is characterized by comprising:
visual perception module: the method comprises the steps of acquiring an image of a preset object, and calculating distance data between the preset object and a robot according to the acquired image;
millimeter wave radar perception module: the device is used for transmitting millimeter wave signals, receiving signals reflected by the obstacle and obtaining obstacle data according to the received signals;
vision and millimeter wave radar fusion module: performing data processing and fusion according to the distance data acquired by the visual perception module and the obstacle data acquired by the millimeter wave radar perception module to generate environment perception information;
and a robot module: autonomous navigation is carried out according to the environment sensing information;
in the visual perception module, the image obtaining of the preset object specifically comprises the following steps: processing characteristic pixels of a preset object according to a deep learning network method, then realizing automatic imaging of the preset object, and finally obtaining an image of the preset object;
after an image of the preset object is acquired, edge detection is performed on the image using the Canny edge detection algorithm and the number of edge points in the image is evaluated; if the number of edge points N_edge in the image is greater than T, the robot is judged to be very close to the preset object, and finally the LC matching metric is used to calculate the distance data between the preset object and the robot; if N_edge is not greater than T, processing of the image is stopped and the next image is processed;
the LC matching metric calculation specifically comprises the following steps:
f1, calculating the difference between two horizontally adjacent pixels in the left-eye image of the robot according to the following formula:
D_L(x, y) = I_L(x, y) - I_L(x-1, y),
wherein D_L is the adjacent-pixel difference of the left-eye image, I_L is the left-eye image, x is the position of the pixel on the x-axis (horizontal direction) of the image, and y is the position of the pixel on the y-axis (vertical direction) of the image;
f2, calculating the difference between two horizontally adjacent pixels in the right-eye image of the robot according to the following formula:
D_R(x, y) = I_R(x, y) - I_R(x-1, y),
wherein D_R is the adjacent-pixel difference of the right-eye image and I_R is the right-eye image;
f3, calculating the sum of the adjacent-pixel differences of the left-eye and right-eye images of the robot, which represents distinctiveness, according to the following formula:
C_d(x, y) = |D_L(x, y)| + |D_R(x+d, y)|,
wherein d is the horizontal displacement difference and C_d is the sum of the adjacent-pixel differences of the left-eye and right-eye images;
f4, calculating the absolute value of the difference between the adjacent-pixel differences of the left-eye and right-eye images of the robot, which represents the similarity at displacement d, according to the following formula:
G_d(x, y) = |D_L(x, y) - D_R(x+d, y)|,
wherein G_d is the absolute value of the difference between the adjacent-pixel differences of the left-eye and right-eye images;
f5, calculating the matching metric E_d for a given displacement d according to the following formula:
E_d(x, y) = |D_L(x, y)| + |D_R(x+d, y)| - |D_L(x, y) - D_R(x+d, y)| (1)
wherein E_d is the correlation of the two pixels at horizontal displacement difference d;
if the matched pixel is a non-feature pixel, the brightness difference between horizontally adjacent pixels is small, D_L(x, y) and D_R(x+d, y) both have small values, and the value of E_d is therefore also small; conversely, if the matched pixel is a feature pixel, it has a large brightness difference in the horizontal direction, which results in a large E_d.
2. The autonomous navigation system of a robot based on stereo camera and millimeter wave radar information fusion of claim 1, wherein the obstacle data includes a distance of the obstacle from the robot, a speed of the obstacle itself, and an angle at which the robot looks at the obstacle.
3. The robot autonomous navigation system based on the stereo camera and millimeter wave radar information fusion according to claim 1, wherein in the vision and millimeter wave radar fusion module, data processing and fusion comprise data denoising, calibration and coordinate system conversion, then data alignment and data association are performed according to the processed data, fusion is performed according to the aligned and associated data, and finally environment perception information is generated.
4. The autonomous robot navigation system based on the fusion of stereo camera and millimeter wave radar information according to claim 1, wherein in the millimeter wave radar sensing module, a signal reflected from an obstacle is received and subjected to Fast Fourier Transform (FFT) to obtain an RF image, and radar points are extracted from the RF image according to a peak detection algorithm.
5. The autonomous robot navigation system based on the fusion of stereo camera and millimeter wave radar information according to claim 4, wherein the specific step of receiving the signal reflected from the obstacle and performing Fast Fourier Transform (FFT) to obtain the RF image comprises: the reflected signal is processed, a Fast Fourier Transform (FFT) is performed on the samples to estimate the reflection range of the reflected signal, then a Low Pass Filter (LPF) is used to process the reflected signal again, and finally a second FFT is performed on the samples from different receiver antennas to estimate the azimuth angle of the reflection and obtain the final RF image.
6. The robot autonomous navigation system based on stereo camera and millimeter wave radar information fusion according to claim 5, wherein three dimensions of angle, distance and chirp are obtained from the final RF image, a three-dimensional (n, 1) convolution operation is then performed on this three-dimensional data, the chirp dimension is compressed to one by max pooling to obtain an angle-distance feature map, the same operation is performed on every frame of the RF image, and finally the feature radar points extracted from all frames in the RF segment are connected in time order.
7. The autonomous robot navigation system based on the fusion of stereo camera and millimeter wave radar information according to claim 6, wherein the peak values of the feature radar points are obtained by connecting the extracted feature radar points in time sequence, and the peak values are marked for data fusion.
CN202311081136.5A 2023-08-25 2023-08-25 Robot autonomous navigation system based on stereo camera and millimeter wave radar information fusion Active CN117169872B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311081136.5A CN117169872B (en) 2023-08-25 2023-08-25 Robot autonomous navigation system based on stereo camera and millimeter wave radar information fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311081136.5A CN117169872B (en) 2023-08-25 2023-08-25 Robot autonomous navigation system based on stereo camera and millimeter wave radar information fusion

Publications (2)

Publication Number Publication Date
CN117169872A CN117169872A (en) 2023-12-05
CN117169872B (en) 2024-03-26

Family

ID=88934982

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311081136.5A Active CN117169872B (en) 2023-08-25 2023-08-25 Robot autonomous navigation system based on stereo camera and millimeter wave radar information fusion

Country Status (1)

Country Link
CN (1) CN117169872B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015024407A1 (en) * 2013-08-19 2015-02-26 国家电网公司 Power robot based binocular vision navigation system and method
CN104463890A (en) * 2014-12-19 2015-03-25 北京工业大学 Stereoscopic image significance region detection method
CN106030614A (en) * 2014-04-22 2016-10-12 史內普艾德有限公司 System and method for controlling a camera based on processing an image captured by other camera
CN107817488A (en) * 2017-09-28 2018-03-20 西安电子科技大学昆山创新研究院 The unmanned plane obstacle avoidance apparatus and barrier-avoiding method merged based on millimetre-wave radar with vision
CN108898575A (en) * 2018-05-15 2018-11-27 华南理工大学 A kind of NEW ADAPTIVE weight solid matching method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015024407A1 (en) * 2013-08-19 2015-02-26 国家电网公司 Power robot based binocular vision navigation system and method
CN106030614A (en) * 2014-04-22 2016-10-12 史內普艾德有限公司 System and method for controlling a camera based on processing an image captured by other camera
CN104463890A (en) * 2014-12-19 2015-03-25 北京工业大学 Stereoscopic image significance region detection method
CN107817488A (en) * 2017-09-28 2018-03-20 西安电子科技大学昆山创新研究院 The unmanned plane obstacle avoidance apparatus and barrier-avoiding method merged based on millimetre-wave radar with vision
CN108898575A (en) * 2018-05-15 2018-11-27 华南理工大学 A kind of NEW ADAPTIVE weight solid matching method

Also Published As

Publication number Publication date
CN117169872A (en) 2023-12-05

Similar Documents

Publication Publication Date Title
US11099275B1 (en) LiDAR point cloud reflection intensity complementation method and system
CN108352056B (en) System and method for correcting erroneous depth information
EP3517997B1 (en) Method and system for detecting obstacles by autonomous vehicles in real-time
US10591594B2 (en) Information processing apparatus, information processing method, and program
US8120644B2 (en) Method and system for the dynamic calibration of stereovision cameras
Zhao et al. Global correlation based ground plane estimation using v-disparity image
Oh et al. A comparative study on camera-radar calibration methods
EP3007099A1 (en) Image recognition system for a vehicle and corresponding method
JP2009014445A (en) Range finder
US9055284B2 (en) Stereo image processing apparatus and stereo image processing method
KR101076406B1 (en) Apparatus and Method for Extracting Location and velocity of Obstacle
US8675047B2 (en) Detection device of planar area and stereo camera system
CN105809706A (en) Global calibration method of distributed multi-camera system
CN112507774A (en) Method and system for obstacle detection using resolution adaptive fusion of point clouds
US8103056B2 (en) Method for target geo-referencing using video analytics
CN111856445B (en) Target detection method, device, equipment and system
CN117169872B (en) Robot autonomous navigation system based on stereo camera and millimeter wave radar information fusion
Drews Jr et al. Tracking system for underwater inspection using computer vision
KR102195040B1 (en) Method for collecting road signs information using MMS and mono camera
Mai et al. 3D reconstruction of line features using multi-view acoustic images in underwater environment
KR20180097004A (en) Method of position calculation between radar target lists and vision image ROI
EP3575829B1 (en) A method of determining a transformation matrix
CN112991372A (en) 2D-3D camera external parameter calibration method based on polygon matching
CN114754732B (en) Distance measurement method based on multi-eye vision
CN113792755B (en) Wavelet depth image fusion environment sensing and target recognition method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant