CN111813114A - Intelligent car visual navigation method - Google Patents
Intelligent car visual navigation method
- Publication number
- CN111813114A CN202010645328.4A
- Authority
- CN
- China
- Prior art keywords
- point
- image
- value
- pixel
- window
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- 238000000034 method Methods 0.000 title claims abstract description 30
- 230000000007 visual effect Effects 0.000 title claims abstract description 26
- 230000009466 transformation Effects 0.000 claims abstract description 31
- 238000006243 chemical reaction Methods 0.000 claims description 19
- 239000011159 matrix material Substances 0.000 claims description 18
- 230000002776 aggregation Effects 0.000 claims description 13
- 238000004220 aggregation Methods 0.000 claims description 13
- 238000004364 calculation method Methods 0.000 claims description 11
- 238000012887 quadratic function Methods 0.000 claims description 8
- 230000008569 process Effects 0.000 claims description 7
- 238000005457 optimization Methods 0.000 claims description 5
- WHXSMMKQMYFTQS-UHFFFAOYSA-N Lithium Chemical compound [Li] WHXSMMKQMYFTQS-UHFFFAOYSA-N 0.000 claims description 3
- 238000013459 approach Methods 0.000 claims description 3
- 230000001174 ascending effect Effects 0.000 claims description 3
- 229910052744 lithium Inorganic materials 0.000 claims description 3
- 238000010606 normalization Methods 0.000 claims description 3
- 238000012163 sequencing technique Methods 0.000 claims description 3
- 238000011524 similarity measure Methods 0.000 claims description 3
- 230000003068 static effect Effects 0.000 claims description 3
- 238000013519 translation Methods 0.000 claims description 3
- 230000007613 environmental effect Effects 0.000 abstract description 3
- 238000005286 illumination Methods 0.000 abstract description 3
- 230000000694 effects Effects 0.000 abstract description 2
- 238000005516 engineering process Methods 0.000 description 4
- 238000009825 accumulation Methods 0.000 description 1
- 230000004888 barrier function Effects 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000012937 correction Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 238000009434 installation Methods 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000007781 pre-processing Methods 0.000 description 1
- 238000012545 processing Methods 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 238000000844 transformation Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0214—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0223—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0238—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
- G05D1/024—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0255—Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultra-sonic singals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Aviation & Aerospace Engineering (AREA)
- Automation & Control Theory (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Electromagnetism (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Optics & Photonics (AREA)
- Geometry (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses an intelligent vehicle visual navigation method. A stereo matching algorithm based on a minimum spanning tree structure addresses weak image texture, low matching precision and low matching speed, and supplies the environmental information required for path planning, thereby achieving accurate three-dimensional reconstruction of the path and providing the prior knowledge needed for subsequent path planning and obstacle avoidance. Under dim lighting the foreground image is largely a weak-texture region, where traditional matching algorithms perform poorly and suffer a high mismatching rate. The CENSUS transform is robust to illumination changes and interference and preserves the positional characteristics of the pixels within a window; combining it with the tree structure resolves the high matching error rate of traditional stereo matching algorithms in weak-texture regions.
Description
Technical Field
The invention relates to intelligent trolleys, and in particular to an intelligent trolley visual navigation method; it belongs to the technical field of robots.
Background
Factories often use trolleys to transport workpieces and goods. Existing transfer trolleys are usually operated wirelessly through infrared, Bluetooth or similar remote controllers to automate part of the work. Wireless remote control, however, does not fully free the operator, who must still issue instructions to the trolley in real time while it is running. To make trolley motion more intelligent, the trolley should identify and assess the current road conditions and its own motion state by means of various sensing devices and make its own decisions, so that it can operate completely unmanned, either following a specified route during work or automatically planning a reasonable route towards a task goal after recognizing the environment. With the development of mobile robots (intelligent trolleys), their fields of application keep widening; the main navigation modes at present are magnetic stripe navigation, inertial navigation, laser navigation and the like. Magnetic stripe navigation requires laying a magnetic stripe along the robot's path; its precision is low and the stripe protruding from the ground is easily damaged. The accumulated error of inertial navigation grows over time, other equipment is needed for correction, and high-precision inertial devices are expensive. Laser navigation requires reflectors on both sides of the path, imposes strict requirements on the installation accuracy of the reflectors, is sensitive to other light sources, is difficult to use outdoors, and is costly.
Visual navigation is a current hot topic in the mobile robot field, and a visual navigation method based on foreground image texture can solve the technical problems described above. Such a method needs no extra treatment of the ground, has a wide range of application and is convenient to use; it calibrates itself with images of the ground and intermittently corrects the accumulated error to achieve high-precision positioning.
Several key technologies affect the running state of the intelligent trolley. Among them, path planning determines whether the trolley can effectively avoid obstacles and plan a safe, collision-free path, so that its work task is completed efficiently and safely. Three-dimensional reconstruction of the foreground field of view is the basis and precondition of path planning: it provides prior knowledge for planning in complex environments and supplies environment information for real-time online planning. The control system of the intelligent trolley adopts optical visual navigation with a binocular camera as the sensor. The foreground path of the trolley appears in the field of view of the binocular camera, and three-dimensional reconstruction is then performed to assist binocular visual navigation, making navigation positioning more accurate, improving the operating accuracy of the intelligent trolley and enhancing obstacle avoidance.
Three-dimensional reconstruction of the trolley's foreground field of view mainly comprises image acquisition and preprocessing, stereo matching and three-dimensional reconstruction, among which stereo matching is the key technology for effective three-dimensional reconstruction.
Because images captured by a binocular camera in bad weather or dim light exhibit low illumination and weak texture, existing stereo matching algorithms have the following shortcomings: in low-texture regions, blurred pixel points are not distinctive enough, which leads to wrong matches; when the image texture is insufficient and noise is present, a low-texture point to be matched cannot lock onto the correct texture region, so erroneous matching results occur easily and the reconstructed points are unsmooth with unclear boundaries.
Disclosure of Invention
The invention aims to provide an intelligent vehicle visual navigation method in which a stereo matching algorithm based on a minimum spanning tree structure addresses weak image texture, low matching precision and low matching speed, and supplies the environmental information required for path planning, thereby achieving accurate three-dimensional reconstruction of the path and providing the prior knowledge needed for subsequent path planning and obstacle avoidance.
The purpose of the invention is achieved by the following technical solution:
A visual navigation method for an intelligent trolley. The intelligent trolley comprises a power module 4, a controller 5, a trolley body 6, driving wheels 7, guide wheels 8, steering engines 9, a navigation positioning device 11 and an obstacle avoidance device 12. Four wheels are mounted at the bottom of the trolley body 6: two driving wheels 7 at the rear drive the trolley, and two guide wheels 8 at the front steer it. The power module 4 is a lithium battery providing electric energy. The navigation positioning device 11 comprises 2 cameras 1 and 2 steering engines 9; the steering engines 9 are mounted at the front of the vehicle body 6, the cameras 1 are mounted on the steering engines 9, and the steering engines 9 drive the cameras 1 to rotate. The obstacle avoidance device 12 comprises a laser range finder 2 and an ultrasonic sensor 3; the laser range finder 2, the ultrasonic sensor 3 and the camera 1 transmit the collected signals to the controller 5, and the controller 5 controls the steering engines 9. When the ultrasonic sensor detects an obstacle, the trolley stops moving and the laser range finder starts working: within a time Δt it emits two laser beams for ranging, giving obstacle distances L1 and L2, and from the value of L1 − L2 the obstacle is judged to be static, moving towards the trolley, or moving away from it. The speed of the obstacle is calculated as (L1 − L2)/Δt and used as the basis for motion control of the trolley: when the obstacle approaches within a certain distance L, the trolley reverses to avoid a collision; otherwise it continues to advance. Obstacle avoidance and steering are realized by differential control of the driving wheels. While driving, when an obstacle appears in the field of view, its position is determined from image features extracted from the camera pictures and transmitted to the controller; the controller outputs a control signal so that the driving wheel on the side nearer the obstacle rotates faster than the wheel on the other side, steering the trolley away from the obstacle. Once the trolley has turned far enough that the obstacle is no longer detected in the field of view, the controller changes the control signal, the two driving wheels rotate synchronously, and the trolley returns to straight-line driving.
the foreground view field is imaged on the left camera and the right camera respectively to form a left image and a right image, and the foreground view field three-dimensional reconstruction comprises the following steps:
firstly, acquiring images from the left and right cameras, and calibrating the cameras to obtain the intrinsic parameter matrix, distortion coefficient matrix, essential matrix, fundamental matrix, rotation matrix and translation vector of the cameras;
secondly, for the collected images, first adopting the CENSUS transformation to distinguish the weak texture regions of the images and realize the matching cost calculation, then adopting a stereo matching algorithm based on a minimum spanning tree to carry out cost aggregation, then adopting the WTA (winner-takes-all) algorithm to obtain the disparity value with the minimum cost, and finally adopting sub-pixel refinement to smooth and optimize the disparity and output the final disparity map;
1. Distinguishing the weak texture regions of the image with the CENSUS transformation to realize the matching cost calculation:
firstly, converting the obtained left image into the corresponding gray-scale image, taking a point in the left gray image and a window of n × n pixels centered on that point, and comparing each point in the window except the center point with the center point of the window: a gray value greater than that of the center pixel is recorded as 0, otherwise as 1, which yields a binary sequence of length n × n − 1 (length 8 for a 3 × 3 window), called the CENSUS transform value; the gray value of the center pixel of the window is replaced by this CENSUS transform value:
the CENSUS transformation formula is as follows:
B(p) = ⊗_{q ∈ N_p} ζ(I(p), I(q)),   ζ(I(p), I(q)) = 0 if I(q) > I(p), otherwise 1
wherein p is the center point of the window of n × n pixels, q denotes the other points in the window except the center point, N_p is the window of n × n pixels centered on p, B(p) is the binary sequence obtained after the CENSUS transformation, i.e. the CENSUS transform value, ζ denotes the comparison of each non-center point in the window with the center point, ⊗ denotes the bitwise concatenation of these comparison results, I(p) is the gray value of the center pixel, and I(q) is the gray value of the other pixels in the window except the center pixel;
secondly, moving the window and traversing the pixel points of the gray-scale image corresponding to the whole obtained left image to complete the CENSUS transformation of the left image;
thirdly, converting the obtained right image into the corresponding gray-scale image, taking any point in the right gray image and a window of n × n pixels centered on that point, and comparing each point in the window except the center point with the center point of the window: a gray value greater than that of the center pixel is recorded as 0, otherwise as 1, which yields a binary sequence of length n × n − 1, called the CENSUS transform value; the gray value of the center pixel of the window is replaced by this CENSUS transform value:
the CENSUS transformation formula is as follows:
B(p) = ⊗_{q ∈ N_p} ζ(I(p), I(q)),   ζ(I(p), I(q)) = 0 if I(q) > I(p), otherwise 1
wherein p is the center point of the window of n × n pixels, q denotes the other points in the window except the center point, N_p is the window of n × n pixels centered on p, B(p) is the binary sequence obtained after the CENSUS transformation, i.e. the CENSUS transform value, ζ denotes the comparison of each non-center point in the window with the center point, ⊗ denotes the bitwise concatenation of these comparison results, I(p) is the gray value of the center pixel, and I(q) is the gray value of the other pixels in the window except the center pixel;
fourthly, moving the window and traversing the pixel points of the gray-scale image corresponding to the whole obtained right image to complete the CENSUS transformation of the right image;
fifthly, using the Hamming distance to measure the similarity of the left and right images after the CENSUS transformation; for two points in the left and right images whose disparity is d, the Hamming distance is obtained by carrying out a bitwise exclusive OR operation on the CENSUS transform values of the two points and counting the number of 1s in the result; the Hamming distance is calculated as:
H_H(p, d) = Hamming(C_CTL(p), C_CTR(p, d)) (5)
wherein C_CTL and C_CTR are respectively the left and right gray-scale images after the CENSUS transformation, C_CTL(p) denotes an arbitrary point p in the left gray image, C_CTR(p, d) denotes the point in the right gray image whose disparity from point p is d, Hamming(C_CTL(p), C_CTR(p, d)) denotes the bitwise exclusive OR of the CENSUS transform values of the two points followed by counting the number of 1s, and H_H(p, d) is the Hamming distance between the two points with disparity d in the left and right images;
therefore, the matching cost calculation model of CENSUS transformation is:
C_d(p, d) = 1 − exp(−λH_H(p, d)) (6)
wherein C_d(p, d) is the matching cost of point p at disparity d, H_H(p, d) is the Hamming distance between the two points with disparity d in the left and right images, and λ is a normalization constant taken as 0.1;
2. Cost aggregation with a stereo matching algorithm based on a minimum spanning tree:
firstly, representing the RGB left image as a connected undirected graph G(V, E), wherein V denotes all pixel points in the image, E denotes the edges connecting two adjacent pixel points, and the weight of an edge is the similarity measure of the two adjacent pixels; the weight of the edge connecting two adjacent pixel points e and k is calculated from the three-channel color and gradient characteristics:
wherein k is a neighbor of e, I_i(e) and I_i(k) are the i-channel values of point e and point k respectively, i ∈ {R, G, B}, R denoting the red, G the green and B the blue color channel, ∇_x and ∇_y denote the gradients of the image in the x and y directions, r_c and r_g denote the weights of the color information and the gradient information respectively with r_c + r_g = 1, r(e, k) denotes the weight of the edge connecting point e and point k, i.e. the similarity between point e and point k, and r(k, e) denotes the similarity between point k and point e, its value being equal to r(e, k);
secondly, recording the n pixel points of the obtained RGB left image as n nodes, constructing an edgeless non-connected graph T = {V, ∅} in which every node is a connected component, and sorting all edges connecting two adjacent nodes in ascending order of weight;
thirdly, selecting one edge every time according to the sequence of the weight values from small to large, and adding the edge into the minimum spanning tree if two nodes of the edge fall on different connected components; otherwise, the edge is discarded and not selected thereafter;
fourthly, repeating the previous step until n − 1 edges have been selected from the connected graph G(V, E) with n nodes; the selected edges together with all the nodes form the minimum spanning tree of the RGB image;
fifthly, recording the sum of the weights of the connecting edges on the path between node u and node v in the minimum spanning tree as D(u, v); the similarity s(u, v) between the two nodes is then:
s(u, v) = exp(−D(u, v)/α)
wherein α is a constant adjusting the pixel similarity, set to 0.1;
sixthly, according to the tree structure of the minimum spanning tree, the cost aggregation value C^A_d(u, d) of any point u at disparity d is obtained as:
C^A_d(u, d) = Σ_v s(u, v)·C_d(v, d)
wherein s(u, v) denotes the similarity of point u and point v in the minimum spanning tree, C_d(v, d) denotes the matching cost of point v at disparity d, and v traverses every pixel in the graph except point u;
3. Obtaining the minimum-cost disparity value d(u) with the WTA (winner-takes-all) algorithm, expressed as:
d(u) = argmin_d C^A_d(u, d)
wherein C^A_d(u, d) denotes the cost aggregation value of point u at disparity d, d(u) denotes the final disparity result of the stereo matching, and argmin_d denotes taking the value of the disparity d at which C^A_d(u, d) reaches its minimum;
4. Smoothing and optimizing the disparity by sub-pixel refinement:
firstly, selecting a value from the WTA result and denoting it d_0, and taking d_− = d_0 − 1 and d_+ = d_0 + 1, whose corresponding cost aggregation values are known;
secondly, selecting the quadratic polynomial shown in formula (11), and calculating its parameters a, b and c from d_0, d_−, d_+ and f(d_0), f(d_−), f(d_+);
f(x) = ax² + bx + c (11)
thirdly, calculating x_min = −b/(2a), the value of x at which the quadratic function f(x) reaches its minimum; this x_min is the sub-pixel disparity value;
thirdly, performing three-dimensional reconstruction according to the parameters calibrated in the first step.
The object of the invention can be further achieved by the following technical measures:
in the visual navigation method for the intelligent trolley, the model of the steering engine 9 is MG 995.
The intelligent vehicle visual navigation method is characterized in that the model of the laser range finder 2 is KLH-01T-20 hz; the ultrasonic sensor 3 is of the type HC-SR 04.
In the visual navigation method for the intelligent trolley, the controller 5 is a single chip microcomputer with the model number of LPC 2148.
In the intelligent vehicle visual navigation method, the third step is: performing three-dimensional reconstruction with the Point Cloud Library (PCL) according to the parameters calibrated in step one:
(1) A depth map and a point cloud of type PointCloud&lt;PointT&gt; are initialized first for storing the image and the point cloud.
(2) The pixel coordinates of the depth map are traversed to obtain the single-channel depth value of each depth-map pixel.
(3) Using the internal and external parameters obtained from the camera calibration, the three-dimensional coordinates are calculated to obtain the 3D coordinates of the PCL point cloud points.
(4) The RGB information of each pixel point in the original image is extracted and assigned to the RGB color channels of the PCL point cloud.
Compared with the prior art, the invention has the beneficial effects that:
(1) the minimum spanning tree can establish the relationship between the pixels in the whole image, so that the relationship between the pixels is complete, the constraint of a window is avoided, the time spent on cost aggregation can be greatly reduced, and the stereo matching precision is improved.
(2) The method solves the problem of high mismatching rate caused by the fact that the traditional local stereo matching algorithm only considers the pixel values in the neighborhood of the window and neglects the influence of other pixels on the central pixel, and compared with the global stereo matching algorithm, the improved stereo matching algorithm based on the minimum spanning tree does not need to carry out iterative refinement, can greatly improve the speed of cost aggregation, and can ensure the accuracy of the stereo matching algorithm and the real-time performance of the algorithm.
(3) Under dim lighting the foreground image is largely a weak-texture region, where traditional matching algorithms perform poorly and suffer a high mismatching rate. The CENSUS transformation algorithm is robust to illumination changes and interference and preserves the positional characteristics of the pixels within a window; combining it with the tree structure solves the high matching error rate of traditional stereo matching algorithms in weak-texture regions.
Drawings
FIG. 1 is a schematic diagram of the structure of the intelligent trolley of the invention;
FIG. 2 is a flow chart of the intelligent vehicle visual navigation method of the invention.
Detailed Description
The invention is further described with reference to the following figures and specific examples.
As shown in fig. 1, the intelligent trolley comprises a power module 4, a controller 5, a trolley body 6, driving wheels 7, guide wheels 8, steering engines 9, a navigation positioning device 11 and an obstacle avoidance device 12. Four wheels are mounted at the bottom of the trolley body 6: two driving wheels 7 at the rear drive the trolley, and two guide wheels 8 at the front steer it. The power module 4 is a lithium battery providing electric energy. The navigation positioning device 11 comprises 2 cameras 1 and 2 steering engines 9; the steering engines 9 are mounted at the front of the vehicle body 6, the cameras 1 are mounted on the steering engines 9, and the steering engines 9 drive the cameras 1 to rotate. The obstacle avoidance device 12 comprises a laser range finder 2 and an ultrasonic sensor 3; the laser range finder 2, the ultrasonic sensor 3 and the camera 1 transmit the collected signals to the controller 5, and the controller 5 controls the steering engines 9. The model of the steering engine 9 is MG995, the model of the laser range finder 2 is KLH-01T-20Hz, the model of the ultrasonic sensor 3 is HC-SR04, and the controller 5 is a single chip microcomputer of model LPC2148. The cameras are used to photograph and identify the current surrounding road conditions; they are fixed to the vehicle body through the steering engines, and by controlling the steering engines the trolley can acquire working-condition information over 360 degrees around it.
The car body serves as the mounting platform for all parts of the trolley and provides a platform for carrying goods. The two driving wheels are mounted at the rear end of the bottom of the trolley body and drive the trolley forwards, backwards and through turns. The two guide wheels are mounted at the front end of the bottom of the car body, are connected to the body through revolute pairs, can rotate freely around their axes and guide the motion of the trolley. When the trolley stops moving, a brake locks the wheels so that the trolley does not slip under external disturbances or when parked on a slope.
The laser range finder 2, the ultrasonic sensor 3 and the camera 1 are connected to the controller 5 and transmit the collected signals to it; the ultrasonic sensor detects obstacles and the laser range finder measures the distance to them. The controller is a single chip microcomputer which receives the environment information detected by the navigation positioning device and the obstacle avoidance device together with the fed-back motion state of the trolley, judges and processes this information, outputs driving information and controls the motion state of the trolley.
As shown in fig. 2, the intelligent car visual navigation method includes the following steps:
firstly, acquiring images from the left and right cameras and calibrating the cameras with Zhang's plane calibration method to obtain the intrinsic parameter matrix, distortion coefficient matrix, essential matrix, fundamental matrix, rotation matrix and translation vector of the cameras, providing the parameters for the final three-dimensional reconstruction;
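As an illustration of this calibration step, the sketch below uses OpenCV's implementation of Zhang's plane calibration. The chessboard pattern size, square size and the image file lists left_imgs/right_imgs are assumptions for the example and are not specified in the patent.

```python
# Minimal binocular calibration sketch (Zhang's method via OpenCV).
# Assumed: a 9x6 chessboard with 25 mm squares and matching image pairs.
import glob
import cv2
import numpy as np

left_imgs = sorted(glob.glob("left_*.png"))    # hypothetical file names
right_imgs = sorted(glob.glob("right_*.png"))

pattern, square = (9, 6), 0.025
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, pts_l, pts_r = [], [], []
for fl, fr in zip(left_imgs, right_imgs):
    gl = cv2.imread(fl, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(fr, cv2.IMREAD_GRAYSCALE)
    okl, cl = cv2.findChessboardCorners(gl, pattern)
    okr, cr = cv2.findChessboardCorners(gr, pattern)
    if okl and okr:
        obj_pts.append(objp); pts_l.append(cl); pts_r.append(cr)

size = gl.shape[::-1]
# Intrinsic parameter matrices K1/K2 and distortion coefficients D1/D2.
_, K1, D1, _, _ = cv2.calibrateCamera(obj_pts, pts_l, size, None, None)
_, K2, D2, _, _ = cv2.calibrateCamera(obj_pts, pts_r, size, None, None)
# Rotation R, translation T, essential matrix E, fundamental matrix F.
ret, K1, D1, K2, D2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, pts_l, pts_r, K1, D1, K2, D2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
```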
secondly, applying to the images an improved stereo matching method based on a minimum spanning tree structure: first the CENSUS transformation is adopted to distinguish the weak texture regions of the image and realize the matching cost calculation, then a stereo matching algorithm based on a minimum spanning tree carries out cost aggregation, then the WTA (winner-takes-all) algorithm obtains the disparity value with the minimum cost, and finally sub-pixel refinement smooths and optimizes the disparity to output the final disparity map;
(1) The left image is converted into the corresponding gray-scale image. A point in the left gray image is taken together with a window of 3 × 3 (or 5 × 5) pixels centered on it, and each point in the window except the center point is compared with the center point of the window: a gray value greater than that of the center pixel is recorded as 0, otherwise as 1, yielding a binary sequence of length 8 (for a 3 × 3 window), called the CENSUS transform value; the gray value of the center pixel of the window is replaced by this CENSUS transform value:
the CENSUS transformation formula is as follows:
B(p) = ⊗_{q ∈ N_p} ζ(I(p), I(q)),   ζ(I(p), I(q)) = 0 if I(q) > I(p), otherwise 1
wherein p is the center point of the window of 3 × 3 pixels, q denotes the other points in the window except the center point, N_p is the window of 3 × 3 pixels centered on p, B(p) denotes the binary sequence obtained after the CENSUS transformation, i.e. the CENSUS transform value, ζ denotes the comparison of each non-center point in the window with the center point, ⊗ denotes the bitwise concatenation of these comparison results, I(p) denotes the gray value of the center pixel, and I(q) denotes the gray values of the other pixels in the window except the center pixel.
(2) And moving the window, traversing pixel points in the gray scale image corresponding to the left image of the image, and completing CENSUS conversion of the left image.
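The CENSUS transformation of steps (1)-(2) can be sketched as follows; border pixels are left untouched here, which is an assumption, since the patent does not state how the image border is handled.

```python
# CENSUS transform sketch for a single gray image, 3x3 window by default.
# Each pixel receives an integer whose bits encode the comparisons with its
# neighbours (neighbour brighter than the center -> 0, otherwise -> 1).
import numpy as np

def census_transform(gray, win=3):
    r = win // 2
    h, w = gray.shape
    out = np.zeros((h, w), dtype=np.uint32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            center, bits = gray[y, x], 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if dy == 0 and dx == 0:
                        continue            # skip the center pixel itself
                    bit = 0 if gray[y + dy, x + dx] > center else 1
                    bits = (bits << 1) | bit
            out[y, x] = bits
    return out
```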
(3) The right image is converted into the corresponding gray-scale image. Any point in the right gray image is taken together with a window of 3 × 3 pixels centered on it, and each point in the window except the center point is compared with the center point of the window: a gray value greater than that of the center pixel is recorded as 0, otherwise as 1, yielding a binary sequence of length 8, called the CENSUS transform value; the gray value of the center pixel of the window is replaced by this CENSUS transform value:
the CENSUS transformation formula is as follows:
B(p) = ⊗_{q ∈ N_p} ζ(I(p), I(q)),   ζ(I(p), I(q)) = 0 if I(q) > I(p), otherwise 1
wherein p is the center point of the window of 3 × 3 pixels, q denotes the other points in the window except the center point, N_p is the window of 3 × 3 pixels centered on p, B(p) denotes the binary sequence obtained after the CENSUS transformation, i.e. the CENSUS transform value, ζ denotes the comparison of each non-center point in the window with the center point, ⊗ denotes the bitwise concatenation of these comparison results, I(p) denotes the gray value of the center pixel, and I(q) denotes the gray values of the other pixels in the window except the center pixel.
(4) And moving the window, traversing pixel points in the gray level image corresponding to the right image of the whole image, and completing the CENSUS conversion of the right image.
(5) The Hamming distance is used to measure the similarity of the left and right images after the CENSUS transformation. For two points in the left and right images whose disparity is d, the Hamming distance is obtained by carrying out a bitwise exclusive OR operation on the CENSUS transform values of the two points and counting the number of 1s in the result. The Hamming distance calculation formula is as follows:
H_H(p, d) = Hamming(C_CTL(p), C_CTR(p, d)) (17)
wherein C_CTL and C_CTR are the left and right gray-scale images after the CENSUS transformation, C_CTL(p) denotes an arbitrary point p in the left gray image, C_CTR(p, d) denotes the point in the right gray image whose disparity from point p is d, Hamming(C_CTL(p), C_CTR(p, d)) denotes the bitwise exclusive OR of the CENSUS transform values of the two points followed by counting the number of 1s, and H_H(p, d) denotes the Hamming distance between the two points with disparity d in the left and right images.
Therefore, the matching cost calculation model of CENSUS transformation is:
C_d(p, d) = 1 − exp(−λH_H(p, d)) (18)
wherein C_d(p, d) is the matching cost of point p at disparity d, H_H(p, d) is the Hamming distance between the two points with disparity d in the left and right images, and λ is a normalization constant taken as 0.1.
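A sketch of step (5) and the cost model of formulas (17)-(18) is given below: the Hamming distance between CENSUS codes of candidate pairs is turned into a cost volume. The maximum disparity max_disp is a parameter the patent does not fix; λ = 0.1 follows the text.

```python
# Cost volume from CENSUS codes: C_d(p, d) = 1 - exp(-lambda * H_H(p, d)).
import numpy as np

def census_cost_volume(census_l, census_r, max_disp, lam=0.1):
    h, w = census_l.shape
    cost = np.ones((h, w, max_disp), dtype=np.float32)   # unreachable d keeps cost 1
    for d in range(max_disp):
        # Left pixel at column x is compared with right pixel at column x - d.
        xor = census_l[:, d:] ^ census_r[:, :w - d]
        hamming = np.zeros(xor.shape, dtype=np.float32)
        v = xor.copy()
        while v.any():                     # popcount of the XOR word = H_H(p, d)
            hamming += (v & 1)
            v >>= 1
        cost[:, d:, d] = 1.0 - np.exp(-lam * hamming)
    return cost
```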
(6) The RGB left image is represented as a connected undirected graph G(V, E), where V denotes all pixel points in the image, E denotes the edges connecting two adjacent pixel points, and the weight of an edge is the similarity measure of the two adjacent pixels. The weight of the edge connecting two adjacent pixel points e and k is calculated from the three-channel color and gradient characteristics, as sketched below:
where k is a neighbor of e, I_i(e) and I_i(k) denote the i-channel values of point e and point k respectively, i ∈ {R, G, B}, R denoting the red, G the green and B the blue color channel, ∇_x and ∇_y denote the gradients of the image in the x and y directions, r_c and r_g denote the weights of the color information and the gradient information respectively with r_c + r_g = 1, r(e, k) denotes the weight of the edge connecting point e and point k, i.e. the similarity between point e and point k, and r(k, e) denotes the similarity between point k and point e, its value being equal to r(e, k).
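The edge weight could be computed as in the sketch below. Because the weight formula itself is not reproduced in the text, the convex combination of the mean absolute color difference and the gradient difference used here (with r_c + r_g = 1) is an assumption consistent with the definitions above.

```python
# Edge weight sketch for the pixel graph of step (6).
import numpy as np

def edge_weight(img, grad_x, grad_y, e, k, rc=0.6, rg=0.4):
    """img: HxWx3 float RGB image; grad_x/grad_y: HxW gradients; e, k: (row, col)."""
    color_diff = np.mean(np.abs(img[e].astype(float) - img[k].astype(float)))
    grad_diff = abs(float(grad_x[e]) - float(grad_x[k])) + \
                abs(float(grad_y[e]) - float(grad_y[k]))
    return rc * color_diff + rg * grad_diff      # symmetric: r(e, k) = r(k, e)
```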
(7) The n pixel points of the RGB left image are recorded as n nodes, and a non-connected graph T = {V, ∅} with only these n nodes and no edges is constructed, in which every node is a connected component; all edges connecting two adjacent nodes are sorted in ascending order of weight.
(8) Selecting one edge every time according to the order of the weight values from small to large, and adding the edge into the minimum spanning tree if two nodes of the edge fall on different connected components; otherwise, this edge is discarded and is not selected thereafter.
(9) Step (8) is repeated until n − 1 edges have been selected from the connected graph G(V, E) with n nodes; the selected edges together with all the nodes form the minimum spanning tree of the RGB image.
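Steps (7)-(9) amount to Kruskal's algorithm; a compact sketch with a union-find structure is given below. Node indices 0..n-1 and the edge list are assumed to come from the pixel graph built in step (6).

```python
# Kruskal's minimum spanning tree with union-find (path halving).
def kruskal_mst(n, edges):
    """edges: iterable of (weight, u, v); returns the n-1 edges of the MST."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):          # ascending order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # endpoints lie in different components
            parent[ru] = rv
            mst.append((w, u, v))
            if len(mst) == n - 1:          # spanning tree complete
                break
    return mst
```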
(10) The sum of the weights of the connecting edges on the path between node u and node v in the minimum spanning tree is recorded as D(u, v); the similarity s(u, v) between the two points is then:
s(u, v) = exp(−D(u, v)/α)
where α is a constant that adjusts the pixel similarity, set to 0.1.
(11) According to the tree structure of the minimum spanning tree, the cost aggregation value C^A_d(u, d) of any point u at disparity d is obtained as:
C^A_d(u, d) = Σ_v s(u, v)·C_d(v, d)
where s(u, v) denotes the similarity of point u and point v in the minimum spanning tree, C_d(v, d) denotes the matching cost of point v at disparity d, and v traverses every pixel in the graph except point u.
(12) The disparity value with the minimum cost is obtained with the WTA (winner-takes-all) algorithm: the disparity with the minimum matching cost is selected as the final disparity d(u), expressed as:
d(u) = argmin_d C^A_d(u, d)
where C^A_d(u, d) denotes the cost aggregation value of point u at disparity d, d(u) denotes the final disparity result of the stereo matching, and argmin_d denotes taking the value of the disparity d at which C^A_d(u, d) reaches its minimum.
(13) A value from the WTA result is selected and denoted d_0, and d_− = d_0 − 1 and d_+ = d_0 + 1 are taken; their corresponding cost aggregation values are known.
(14) The quadratic polynomial shown in formula (23) is selected, and its parameters a, b and c are calculated from d_0, d_−, d_+ and f(d_0), f(d_−), f(d_+).
f(x) = ax² + bx + c (23)
(15) According to formula (24), the value x_min at which the quadratic function f(x) reaches its minimum is calculated; this x_min is the sub-pixel disparity value:
x_min = −b/(2a) (24)
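Steps (12)-(15) can be combined into one routine: a winner-takes-all argmin over the aggregated cost volume followed by the parabola fit through the three samples at d_0 − 1, d_0 and d_0 + 1, whose vertex gives the sub-pixel disparity.

```python
# WTA disparity selection with sub-pixel refinement by quadratic fitting.
import numpy as np

def wta_subpixel(agg_cost):
    """agg_cost: H x W x D aggregated cost volume; returns float32 disparities."""
    h, w, D = agg_cost.shape
    d0 = np.argmin(agg_cost, axis=2)                 # WTA: d(u) = argmin_d C_A(u, d)
    disp = d0.astype(np.float32)
    ys, xs = np.mgrid[0:h, 0:w]
    inner = (d0 >= 1) & (d0 <= D - 2)                # d0-1 and d0+1 must exist
    y, x, di = ys[inner], xs[inner], d0[inner]
    c0 = agg_cost[y, x, di]                          # f(d0)
    cm = agg_cost[y, x, di - 1]                      # f(d-)
    cp = agg_cost[y, x, di + 1]                      # f(d+)
    denom = cm + cp - 2.0 * c0                       # equals 2a of f(x) = ax^2 + bx + c
    ok = denom > 1e-6
    # Vertex of the fitted parabola:
    # x_min = d0 - (f(d+) - f(d-)) / (2 * (f(d+) + f(d-) - 2 f(d0)))
    disp[inner] = np.where(ok, di - (cp - cm) / (2.0 * denom), di).astype(np.float32)
    return disp
```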
Thirdly, performing three-dimensional reconstruction by using a Point Cloud Library (PCL) according to the parameters calibrated in the first step:
(1) A depth map and a point cloud of type PointCloud&lt;PointT&gt; are initialized first for storing the image and the point cloud.
(2) The pixel coordinates of the depth map are traversed to obtain the single-channel depth value of each depth-map pixel.
(3) Using the internal and external parameters obtained from the camera calibration, the three-dimensional coordinates are calculated to obtain the 3D coordinates of the PCL point cloud points.
(4) The RGB information of each pixel point in the original image is extracted and assigned to the RGB color channels of the PCL point cloud.
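The reconstruction of step three can be sketched without the C++ Point Cloud Library by re-projecting the disparity map with the calibrated parameters; focal length f, baseline, and principal point (cx, cy) come from the calibration of step one. Storing the result in a PCL PointCloud&lt;PointXYZRGB&gt; would follow the same arithmetic.

```python
# Disparity map -> colored 3D points (pinhole re-projection sketch).
import numpy as np

def disparity_to_points(disp, rgb, f, baseline, cx, cy):
    """disp: HxW disparity; rgb: HxWx3 color image; returns (N,3) points and (N,3) colors."""
    h, w = disp.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    valid = disp > 0
    Z = f * baseline / disp[valid]            # depth from disparity
    X = (xs[valid] - cx) * Z / f
    Y = (ys[valid] - cy) * Z / f
    xyz = np.stack([X, Y, Z], axis=1)
    colors = rgb[valid]                       # RGB channel values per point
    return xyz, colors
```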
The stereo matching algorithm based on the minimum spanning tree structure solves the problems of weak image texture, low matching precision and low matching speed, provides the environment information required for path planning, realizes accurate three-dimensional reconstruction of the path, provides the necessary prior knowledge for subsequent path planning and obstacle avoidance, and yields a good navigation result.
In order to identify obstacles in time and avoid them promptly, the invention also comprises an obstacle avoidance control method. When the ultrasonic sensor detects an obstacle, the trolley stops moving and the laser range finder starts working: within a time Δt it emits two laser beams for ranging, giving obstacle distances L1 and L2, and from the value of L1 − L2 the obstacle is judged to be static, moving towards the trolley, or moving away from it. The speed of the obstacle is calculated as (L1 − L2)/Δt and used as the basis for motion control of the trolley: when the obstacle approaches within a certain distance L, the trolley reverses to avoid a collision; otherwise it continues to advance.
Obstacle avoidance and steering are realized by differential control of the driving wheels. While driving, when an obstacle appears in the field of view, its position is determined from the image features extracted from the camera pictures and transmitted to the controller; the controller outputs a control signal so that the driving wheel on the side nearer the obstacle rotates faster than the wheel on the other side, steering the trolley away from the obstacle. Once the trolley has turned far enough that the obstacle is no longer detected in the field of view, the controller changes the control signal, the two driving wheels rotate synchronously, and the trolley returns to straight-line driving.
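The obstacle-avoidance logic above can be summarised in a small control sketch. The thresholds, wheel speeds and the way the obstacle side is obtained from the camera image are assumptions for illustration; the patent only fixes the decision rules.

```python
# Obstacle handling sketch: laser ranging decides forward/reverse, the
# camera-detected obstacle side decides the differential wheel speeds.
def ranging_decision(L1, L2, dt, safe_distance):
    """Two laser ranges L1, L2 taken dt seconds apart (metres, seconds)."""
    speed = (L1 - L2) / dt              # > 0: obstacle moving towards the trolley
    if abs(L1 - L2) < 1e-3:
        return "static", speed
    if speed > 0 and L2 < safe_distance:
        return "reverse", speed         # obstacle closing within distance L: back up
    return "continue", speed

def differential_command(obstacle_side, base=0.5, delta=0.2):
    """Wheel nearer the obstacle turns faster, so the trolley steers away from it."""
    if obstacle_side == "left":
        return {"left_wheel": base + delta, "right_wheel": base}
    if obstacle_side == "right":
        return {"left_wheel": base, "right_wheel": base + delta}
    return {"left_wheel": base, "right_wheel": base}   # no obstacle: drive straight
```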
In addition to the above embodiments, the present invention may have other embodiments, and any technical solutions formed by equivalent substitutions or equivalent transformations fall within the scope of the claims of the present invention.
Claims (5)
1. An intelligent trolley visual navigation method, characterized in that the intelligent trolley comprises a power module, a controller, a trolley body, driving wheels, guide wheels, steering engines, a navigation positioning device and an obstacle avoidance device; four wheels are mounted at the bottom of the trolley body, two driving wheels at the rear driving the trolley and two guide wheels at the front steering it; the power module is a lithium battery providing electric energy; the navigation positioning device comprises 2 cameras and 2 steering engines, the steering engines are mounted at the front of the vehicle body, the cameras are mounted on the steering engines, and the steering engines drive the cameras to rotate; the obstacle avoidance device comprises a laser range finder and an ultrasonic sensor; the laser range finder, the ultrasonic sensor and the cameras transmit the collected signals to the controller, and the controller controls the steering engines; when the ultrasonic sensor detects an obstacle, the trolley stops moving and the laser range finder starts working: within a time Δt it emits two laser beams for ranging, giving obstacle distances L1 and L2, and from the value of L1 − L2 the obstacle is judged to be static, moving towards the trolley, or moving away from it; the speed of the obstacle is calculated as (L1 − L2)/Δt and used as the basis for motion control of the trolley, so that when the obstacle approaches within a certain distance L, the trolley reverses to avoid a collision, and otherwise it continues to advance; obstacle avoidance and steering are realized by differential control of the driving wheels: while driving, when an obstacle appears in the field of view, its position is determined from the image features extracted from the camera pictures and transmitted to the controller, the controller outputs a control signal so that the driving wheel on the side nearer the obstacle rotates faster than the wheel on the other side and the trolley steers away from the obstacle; once the trolley has turned far enough that the obstacle is no longer detected in the field of view, the controller changes the control signal, the two driving wheels rotate synchronously, and the trolley returns to straight-line driving;
the foreground view field is imaged on the left camera and the right camera respectively to form a left image and a right image, and the foreground view field three-dimensional reconstruction comprises the following steps:
firstly, acquiring images from the left and right cameras, and calibrating the cameras to obtain the intrinsic parameter matrix, distortion coefficient matrix, essential matrix, fundamental matrix, rotation matrix and translation vector of the cameras;
secondly, for the collected images, first adopting the CENSUS transformation to distinguish the weak texture regions of the images and realize the matching cost calculation, then adopting a stereo matching algorithm based on a minimum spanning tree to carry out cost aggregation, then adopting the WTA (winner-takes-all) algorithm to obtain the disparity value with the minimum cost, and finally adopting sub-pixel refinement to smooth and optimize the disparity and output the final disparity map;
1. Distinguishing the weak texture regions of the image with the CENSUS transformation to realize the matching cost calculation:
firstly, converting the obtained left image into the corresponding gray-scale image, taking a point in the left gray image and a window of n × n pixels centered on that point, and comparing each point in the window except the center point with the center point of the window: a gray value greater than that of the center pixel is recorded as 0, otherwise as 1, which yields a binary sequence of length n × n − 1 (length 8 for a 3 × 3 window), called the CENSUS transform value; the gray value of the center pixel of the window is replaced by this CENSUS transform value:
the CENSUS transformation formula is as follows:
B(p) = ⊗_{q ∈ N_p} ζ(I(p), I(q)),   ζ(I(p), I(q)) = 0 if I(q) > I(p), otherwise 1
wherein p is the center point of the window of n × n pixels, q denotes the other points in the window except the center point, N_p is the window of n × n pixels centered on p, B(p) is the binary sequence obtained after the CENSUS transformation, i.e. the CENSUS transform value, ζ denotes the comparison of each non-center point in the window with the center point, ⊗ denotes the bitwise concatenation of these comparison results, I(p) is the gray value of the center pixel, and I(q) is the gray value of the other pixels in the window except the center pixel;
secondly, moving the window and traversing the pixel points of the gray-scale image corresponding to the whole obtained left image to complete the CENSUS transformation of the left image;
thirdly, converting the obtained right image into the corresponding gray-scale image, taking any point in the right gray image and a window of n × n pixels centered on that point, and comparing each point in the window except the center point with the center point of the window: a gray value greater than that of the center pixel is recorded as 0, otherwise as 1, which yields a binary sequence of length n × n − 1, called the CENSUS transform value; the gray value of the center pixel of the window is replaced by this CENSUS transform value:
the CENSUS transformation formula is as follows:
B(p) = ⊗_{q ∈ N_p} ζ(I(p), I(q)),   ζ(I(p), I(q)) = 0 if I(q) > I(p), otherwise 1
wherein p is the center point of the window of n × n pixels, q denotes the other points in the window except the center point, N_p is the window of n × n pixels centered on p, B(p) is the binary sequence obtained after the CENSUS transformation, i.e. the CENSUS transform value, ζ denotes the comparison of each non-center point in the window with the center point, ⊗ denotes the bitwise concatenation of these comparison results, I(p) is the gray value of the center pixel, and I(q) is the gray value of the other pixels in the window except the center pixel;
fourthly, moving the window and traversing the pixel points of the gray-scale image corresponding to the whole obtained right image to complete the CENSUS transformation of the right image;
fifthly, using the Hamming distance to measure the similarity of the left and right images after the CENSUS transformation; for two points in the left and right images whose disparity is d, the Hamming distance is obtained by carrying out a bitwise exclusive OR operation on the CENSUS transform values of the two points and counting the number of 1s in the result; the Hamming distance is calculated as:
H_H(p, d) = Hamming(C_CTL(p), C_CTR(p, d)) (5)
wherein C_CTL and C_CTR are respectively the left and right gray-scale images after the CENSUS transformation, C_CTL(p) denotes an arbitrary point p in the left gray image, C_CTR(p, d) denotes the point in the right gray image whose disparity from point p is d, Hamming(C_CTL(p), C_CTR(p, d)) denotes the bitwise exclusive OR of the CENSUS transform values of the two points followed by counting the number of 1s, and H_H(p, d) is the Hamming distance between the two points with disparity d in the left and right images;
therefore, the matching cost calculation model of CENSUS transformation is:
C_d(p, d) = 1 − exp(−λH_H(p, d)) (6)
wherein C_d(p, d) is the matching cost of point p at disparity d, H_H(p, d) is the Hamming distance between the two points with disparity d in the left and right images, and λ is a normalization constant taken as 0.1;
2. Cost aggregation with a stereo matching algorithm based on a minimum spanning tree:
firstly, representing the RGB left image as a connected undirected graph G(V, E), wherein V denotes all pixel points in the image, E denotes the edges connecting two adjacent pixel points, and the weight of an edge is the similarity measure of the two adjacent pixels; the weight of the edge connecting two adjacent pixel points e and k is calculated from the three-channel color and gradient characteristics:
wherein k is a neighbor of e, I_i(e) and I_i(k) are the i-channel values of point e and point k respectively, i ∈ {R, G, B}, R denoting the red, G the green and B the blue color channel, ∇_x and ∇_y denote the gradients of the image in the x and y directions, r_c and r_g denote the weights of the color information and the gradient information respectively with r_c + r_g = 1, r(e, k) denotes the weight of the edge connecting point e and point k, i.e. the similarity between point e and point k, and r(k, e) denotes the similarity between point k and point e, its value being equal to r(e, k);
secondly, recording the n pixel points of the RGB left image as n nodes, constructing a non-connected graph T = {V, ∅} which has only these n nodes and no edges and in which every node is a connected component, and sorting all edges connecting two adjacent nodes in ascending order of weight;
thirdly, selecting one edge at a time in ascending order of weight, and adding the edge to the minimum spanning tree if its two nodes fall on different connected components; otherwise discarding the edge, which is not selected again thereafter;
fourthly, repeating the previous step until n − 1 edges have been selected from the connected graph G(V, E) with n nodes; the selected edges together with all the nodes form the minimum spanning tree of the RGB image;
fifthly, recording the sum of the weights of the connecting edges on the path between node u and node v in the minimum spanning tree as D(u, v); the similarity s(u, v) between the two nodes is then:
s(u, v) = exp(−D(u, v)/α)
wherein α is a constant adjusting the pixel similarity, set to 0.1;
sixthly, according to the tree structure of the minimum spanning tree, the cost aggregation value C^A_d(u, d) of any point u at disparity d is obtained as:
C^A_d(u, d) = Σ_v s(u, v)·C_d(v, d)
wherein s(u, v) denotes the similarity of point u and point v in the minimum spanning tree, C_d(v, d) denotes the matching cost of point v at disparity d, and v traverses every pixel in the graph except point u;
3. Obtaining the minimum-cost disparity value d(u) with the WTA algorithm, expressed as:
d(u) = argmin_d C^A_d(u, d)
wherein C^A_d(u, d) denotes the cost aggregation value of point u at disparity d, d(u) denotes the final disparity result of the stereo matching, and argmin_d denotes taking the value of the disparity d at which C^A_d(u, d) reaches its minimum;
4. and performing smooth optimization on the parallax by adopting sub-pixel refinement:
firstly, take a value d_0 from the WTA result and select d_− = d_0 − 1 and d_+ = d_0 + 1; the corresponding cost aggregation values are known;
secondly, select the quadratic polynomial shown in formula (11), and from d_0, d_−, d_+ and f(d_0), f(d_−), f(d_+) calculate the parameters a, b and c of the quadratic function;
f(x) = ax² + bx + c    (11)
thirdly, calculate the x_min corresponding to the minimum value of the quadratic function (for f(x) = ax² + bx + c this is x_min = −b/(2a)); this value is the disparity at which f(x) is minimal, i.e. the sub-pixel value;
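A worked sketch of this parabola fit (plain numpy; it solves the 3×3 system for a, b and c and returns the vertex x_min = −b/(2a)):

```python
import numpy as np

def subpixel_refine(d0, f_minus, f0, f_plus):
    """Fit f(x) = a x^2 + b x + c through (d0 - 1, f_minus), (d0, f0), (d0 + 1, f_plus)
    and return the sub-pixel disparity at the minimum of the parabola."""
    xs = np.array([d0 - 1, d0, d0 + 1], dtype=float)
    ys = np.array([f_minus, f0, f_plus], dtype=float)
    A = np.stack([xs ** 2, xs, np.ones(3)], axis=1)
    a, b, c = np.linalg.solve(A, ys)
    if a <= 0:                     # degenerate case: keep the integer disparity
        return float(d0)
    return -b / (2.0 * a)

# Example: aggregated costs 0.40, 0.25, 0.35 around d0 = 12 refine to d = 12.1.
print(subpixel_refine(12, 0.40, 0.25, 0.35))
```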
and the third step: performing three-dimensional reconstruction according to the parameters calibrated in the first step.
2. The visual navigation method for the intelligent vehicle as claimed in claim 1, wherein the model of the steering engine 9 is MG995.
3. The visual navigation method for the intelligent vehicle as claimed in claim 1, wherein the model of the laser range finder 2 is KLH-01T-20Hz and the model of the ultrasonic sensor 3 is HC-SR04.
4. The visual navigation method for the intelligent vehicle as claimed in claim 1, wherein the controller 5 is a single chip microcomputer of model LPC2148.
5. The intelligent vehicle visual navigation method according to claim 1, wherein the third step comprises: performing three-dimensional reconstruction using the point cloud library according to the parameters calibrated in step one:
(1) first, initialize a depth map and a point cloud of type PointCloud<PointT> for storing the image and the point cloud;
(2) traverse the pixel coordinates of the depth map to obtain the single-channel depth value at each pixel of the depth map;
(3) using the camera intrinsic and extrinsic parameters obtained by calibration, calculate the three-dimensional coordinates to obtain the 3D coordinates of the points in the PCL point cloud;
(4) extract the RGB information of each pixel point in the original image and assign it to the RGB color channels of the PCL point cloud.
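The claim describes this step in terms of PCL's C++ PointCloud<PointT>; the back-projection itself can be sketched in numpy as below, assuming a pinhole model with illustrative intrinsic parameters fx, fy, cx, cy from the step-one calibration (in a C++ PCL pipeline the same loop would fill a pcl::PointCloud<pcl::PointXYZRGB> instead):

```python
import numpy as np

def depth_to_colored_points(depth, rgb, fx, fy, cx, cy, depth_scale=1.0):
    """Back-project a depth map into a colored 3D point cloud with a pinhole model.

    depth: (H, W) single-channel depth values; rgb: (H, W, 3) color image;
    fx, fy, cx, cy: camera intrinsics (parameter names are illustrative).
    Returns an (N, 6) array of x, y, z, r, g, b."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth.astype(np.float64) * depth_scale
    valid = z > 0                                    # skip pixels with no depth
    x = (u - cx) * z / fx                            # X = (u - cx) * Z / fx
    y = (v - cy) * z / fy                            # Y = (v - cy) * Z / fy
    points = np.stack([x[valid], y[valid], z[valid]], axis=1)
    colors = rgb[valid].astype(np.float64)           # per-point RGB from the original image
    return np.hstack([points, colors])
```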
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010645328.4A CN111813114A (en) | 2020-07-07 | 2020-07-07 | Intelligent car visual navigation method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010645328.4A CN111813114A (en) | 2020-07-07 | 2020-07-07 | Intelligent car visual navigation method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111813114A true CN111813114A (en) | 2020-10-23 |
Family
ID=72843436
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010645328.4A Withdrawn CN111813114A (en) | 2020-07-07 | 2020-07-07 | Intelligent car visual navigation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111813114A (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112561996A (en) * | 2020-12-08 | 2021-03-26 | 江苏科技大学 | Target detection method in autonomous underwater robot recovery docking |
CN113312710A (en) * | 2020-12-18 | 2021-08-27 | 南京理工大学紫金学院 | Virtual simulation test method for front vehicle identification and ranging |
CN114115349A (en) * | 2021-12-02 | 2022-03-01 | 深圳慧源创新科技有限公司 | Binocular auxiliary obstacle avoidance method and device, unmanned aerial vehicle and storage medium |
CN115307640A (en) * | 2022-07-29 | 2022-11-08 | 西安现代控制技术研究所 | Unmanned vehicle binocular vision navigation method based on improved artificial potential field method |
CN115503023A (en) * | 2022-11-23 | 2022-12-23 | 济南奥普瑞思智能装备有限公司 | Industrial robot removes chassis |
CN116185043A (en) * | 2023-04-17 | 2023-05-30 | 北京航空航天大学 | Robot global path planning method and device based on non-connected graph |
CN117237240A (en) * | 2023-11-15 | 2023-12-15 | 湖南蚁为软件有限公司 | Data intelligent acquisition method and system based on data characteristics |
CN117237240B (en) * | 2023-11-15 | 2024-02-02 | 湖南蚁为软件有限公司 | Data intelligent acquisition method and system based on data characteristics |
CN117542220A (en) * | 2024-01-10 | 2024-02-09 | 深圳市拓安科技有限公司 | Driving safety induction method and system applied to severe weather |
CN117542220B (en) * | 2024-01-10 | 2024-04-16 | 深圳市拓安科技有限公司 | Driving safety induction method and system applied to severe weather |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111813114A (en) | Intelligent car visual navigation method | |
JP7073315B2 (en) | Vehicles, vehicle positioning systems, and vehicle positioning methods | |
CN111126269B (en) | Three-dimensional target detection method, device and storage medium | |
US10916035B1 (en) | Camera calibration using dense depth maps | |
CN110148185B (en) | Method and device for determining coordinate system conversion parameters of imaging equipment and electronic equipment | |
CN109509230B (en) | SLAM method applied to multi-lens combined panoramic camera | |
CN110988912B (en) | Road target and distance detection method, system and device for automatic driving vehicle | |
Royer et al. | Monocular vision for mobile robot localization and autonomous navigation | |
US11555903B1 (en) | Sensor calibration using dense depth maps | |
CN111856491B (en) | Method and apparatus for determining geographic position and orientation of a vehicle | |
CN102298070B (en) | Method for assessing the horizontal speed of a drone, particularly of a drone capable of hovering on automatic pilot | |
CN112734765B (en) | Mobile robot positioning method, system and medium based on fusion of instance segmentation and multiple sensors | |
CN111862673B (en) | Parking lot vehicle self-positioning and map construction method based on top view | |
CN114842438A (en) | Terrain detection method, system and readable storage medium for autonomous driving vehicle | |
Balta et al. | Terrain traversability analysis for off-road robots using time-of-flight 3d sensing | |
CN114693787B (en) | Parking garage map building and positioning method, system and vehicle | |
CN113624223B (en) | Indoor parking lot map construction method and device | |
CN112669354A (en) | Multi-camera motion state estimation method based on vehicle incomplete constraint | |
CN106556395A (en) | A kind of air navigation aid of the single camera vision system based on quaternary number | |
Jun et al. | Autonomous driving system design for formula student driverless racecar | |
CN113625271B (en) | Simultaneous positioning and mapping method based on millimeter wave radar and binocular camera | |
WO2023070113A1 (en) | Validating an sfm map using lidar point clouds | |
CN115409965A (en) | Mining area map automatic generation method for unstructured roads | |
Mharolkar et al. | RGBDTCalibNet: End-to-end online extrinsic calibration between a 3D LiDAR, an RGB camera and a thermal camera | |
CN111784753A (en) | Three-dimensional reconstruction stereo matching method for autonomous underwater robot recovery butt joint foreground view field |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20201023