CN113447948B - Camera and multi-laser-radar fusion method based on ROS robot

Camera and multi-laser-radar fusion method based on ROS robot

Info

Publication number
CN113447948B
CN113447948B (application CN202110588529.XA)
Authority
CN
China
Prior art keywords: state, point cloud, fish, individual, dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110588529.XA
Other languages
Chinese (zh)
Other versions
CN113447948A (en)
Inventor
张青春
何孝慈
郭振久
姚胜
王刚
孟凡东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaiyin Institute of Technology
Original Assignee
Huaiyin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaiyin Institute of Technology
Priority to CN202110588529.XA
Publication of CN113447948A
Application granted
Publication of CN113447948B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S7/00: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48: Details of systems according to group G01S17/00
    • G01S7/4802: using analysis of echo signal for target characterisation; target signature; target cross-section
    • G01S7/497: Means for monitoring or calibrating

Abstract

The invention discloses a camera and multi-laser-radar fusion method based on an ROS robot. A laser radar and a camera are used to collect data of a calibration plate; a plane is fitted to the data collected by the laser radar, and registration among the multiple laser radars is performed with an artificial fish swarm algorithm. Corner points are extracted from the images acquired by the camera and from the point clouds acquired by the laser radars. From the two-dimensional and three-dimensional structures of these corner points, the pose relation between the camera coordinate system and the world coordinate system of the three-dimensional scene, together with the optimal solution of the transformation between the two-dimensional and three-dimensional data, is solved to complete the data fusion. The algorithm of the invention is efficient and highly practical, and makes the robot's perception of its surroundings more accurate.

Description

Camera and multi-laser-radar fusion method based on ROS robot
Technical Field
The invention relates to a data fusion method, in particular to an image fusion method of a depth camera and a laser radar.
Background
Laser radar is one of the key sensors for environment perception in mobile robots and autonomous driving, and is well suited to complex traffic environments; its measurements are highly accurate, but the resulting depth map has low resolution, is very sparse, and easily misses small targets. Binocular stereo vision, an important branch of computer vision, is widely applied in driverless technology, but its image data are strongly affected by environmental factors such as illumination, so the accuracy of the resulting depth map is low. The fields of mobile robotics and autonomous driving therefore need a depth-map fusion method that yields depth maps that are both accurate and dense. In the prior art, however, neither the registration of multiple laser radars nor the effective fusion of camera data with laser radar data can be achieved.
Disclosure of Invention
Purpose of the invention: the invention aims to provide a method for effectively fusing camera data and laser radar data.
The technical scheme is as follows: the invention discloses a camera and multi-laser-radar fusion method based on an ROS robot, which comprises the following steps:
(1) Collecting three-dimensional point cloud data of a calibration plate with multiple laser radars, and fitting the plane where the calibration plate lies;
(2) Performing registration among the multiple laser radars with an artificial fish swarm algorithm;
(3) Acquiring a two-dimensional image of the calibration plate with a camera, and extracting the corner points of the calibration plate in the two-dimensional image;
(4) Extracting the corner points of the calibration plate from the three-dimensional point cloud data;
(5) From the two-dimensional and three-dimensional image data of the calibration plate at the obtained corner points, solving the pose relation between the camera coordinate system and the world coordinate system of the three-dimensional scene, together with the optimal solution of the transformation between the two-dimensional and three-dimensional data, to complete the data fusion.
The step of fitting the plane where the calibration plate lies in step (1) is as follows: randomly select 3 non-collinear points from the point cloud acquired by the laser radar, construct the general equation of a three-dimensional plane from these 3 points, and initialize the plane corresponding to the point cloud; calculate the distance from every remaining point in the point cloud to this plane; count the points whose distance is smaller than a given threshold, and compute the proportion of those points among all points; if the computed proportion exceeds a given threshold, the current plane parameters are taken as the plane fitted to the point cloud and the iteration ends; otherwise, the fitting restarts from the beginning until the maximum number of iterations is reached.
In step (2), an artificial fish structure is defined; each individual fish has 6 dimensions, 3 representing rotation and 3 representing translation. An artificial fish swarm optimizer class is defined, and the optimizer's parameters and the algorithm's search angle in space are initialized. The number of individual fish in the school is set, all individuals are initialized, and one individual is selected at random and taken as the global optimum. The current state of the point cloud on the corresponding laser radar surface is taken as the state of the artificial fish, and for each fish i the foraging, random, clustering and rear-end (tail-chasing) behaviours of the artificial fish school are executed; the current state of each artificial fish is obtained; the optimal individual X_best in the current school and the computed minimum distance between the point cloud planes are recorded.
It is then judged whether the current minimum distance satisfies the threshold or the maximum number of iterations has been reached. If so, the global optimal individual is output, the rigid transformation between the two laser radars is computed from the 6-dimensional data of the optimal individual fish, the source point cloud data is transformed onto the target point cloud data, and the algorithm ends; otherwise the foraging, random, clustering and rear-end behaviours are executed on each fish i to obtain its current state, until the optimal value is solved.
In step (2), the state of the artificial fish is optimized by executing the foraging behaviour, specifically: if the distance D_min(X_1) from all points of the point cloud in the current state X_1 to the corresponding target point cloud plane and the distance D_min(X_2) for a candidate state X_2 satisfy D_min(X_1) > D_min(X_2), state X_1 is updated towards X_2.
When states X_1 and X_2 do not satisfy D_min(X_1) > D_min(X_2), a new state X_2 is randomly selected within the movable distance range and the comparison continues; if the number of attempts reaches the preset value and the condition is still not met, a state is selected at random. The state X_next updated from X_1 towards X_2 is calculated as
X_next = X_1 + rand() × step × (X_2 − X_1) / ‖X_2 − X_1‖
where step represents the maximum distance that can be moved next, and rand() represents a random number generated between 0 and 1.
In step (2), when no smaller distance value can be found after several attempts in the foraging behaviour, the random behaviour is performed; the next state of the random behaviour is calculated as
X_next = X_1 + visual × rand().
In step (2), the clustering behaviour is executed to control the direction of motion and the density of the artificial fish school. For a fish in state X_1, let N_n be the number of individual fish within its maximum detection range and X_c the centre position of these fish, and compute D_min(X_c)/N_n. If
D_min(X_c)/N_n < δ × D_min(X_1), with 0 < δ < 1,
there is more food at X_c, i.e. the size relation of the foraging behaviour is met, and the spacing between individual fish is moderate, which means a state better than the current one has been found; the current state is updated towards that better state. Otherwise the foraging behaviour is performed.
In step (2), the rear-end behaviour is executed to find the optimal individual fish. When a fish in state X_1 finds an individual X_next within its sensing distance, it compares the distances from all points of the source point cloud to the target point cloud plane in the two states. If D_min(X_1) > D_min(X_next), and the number N_n of individual fish around X_next satisfies
D_min(X_next)/N_n < δ × D_min(X_1), with 0 < δ < 1,
then X_next is the best individual found so far, and the other individual fish update their own state towards the state of this optimal individual; otherwise the foraging behaviour is performed.
In step (3), stickers of four different colours are pasted onto the four corner points of the calibration plate. A camera collects images; the pixels of each image are traversed to find the four coloured regions, the outermost pixel position of each region is taken, and that extreme value is used as the pixel coordinate of the corresponding corner point in the camera image.
Step (4) comprises the following steps: removing useless point cloud data on the calibration plate with pass-through filtering and conditional filtering; fitting the four edges of the calibration plate to obtain the general spatial equations of the four edge lines; and solving the intersection points of adjacent lines in three-dimensional space, the solved intersection points being the laser radar point cloud corner points.
Step (5) comprises the following steps: solving the absolute rotation matrix R and translation vector T between the camera coordinate system and the three-dimensional world coordinate system from the two-dimensional image and the three-dimensional point cloud at the obtained corner coordinates; obtaining the pose relation between the camera coordinate system and the three-dimensional world coordinate system from R and T; and solving the optimal solution of the transformation between the three-dimensional and two-dimensional data by a method for solving the PnP problem, completing the data fusion.
Beneficial effects: compared with the prior art, the invention has the following notable advantages: during fusion the respective shortcomings of the laser radar and the camera compensate each other, so the generated depth image is denser and more accurate; the program is simple and easy to implement; the algorithm is efficient with strong real-time performance; and the accuracy of environment perception is improved.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a flow chart of registration between multiple laser radars using the artificial fish swarm algorithm;
fig. 3 is a schematic diagram of calibration plate corner point acquisition.
Detailed Description
The technical scheme of the invention is further explained below with reference to the accompanying drawings. As shown in fig. 1, the invention provides a camera and multi-laser-radar fusion method based on an ROS robot, for an ROS robot carrying a camera and multiple laser radars simultaneously, comprising the following steps:
(1) Collecting three-dimensional point cloud data of a calibration plate with multiple laser radars, and fitting the plane where the calibration plate lies: randomly select 3 non-collinear points from the point cloud acquired by the laser radar, construct the general equation of a three-dimensional plane from these 3 points, and initialize the plane corresponding to the point cloud; calculate the distance from every remaining point in the point cloud to this plane; count the points whose distance is smaller than a given threshold, and compute the proportion of those points among all points; if the computed proportion exceeds a given threshold, the current plane parameters are taken as the plane fitted to the point cloud and the iteration ends; otherwise, the fitting restarts from the beginning until the maximum number of iterations is reached.
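A minimal Python sketch of this plane-fitting step is given below for illustration; the array layout and the values of dist_thresh, ratio_thresh and max_iters are assumptions, not parameters taken from the patent:

```python
import numpy as np

def fit_calibration_plane(points, dist_thresh=0.02, ratio_thresh=0.9, max_iters=100):
    """RANSAC plane fit as described above: sample 3 non-collinear points,
    build the plane equation, count the points within dist_thresh of it,
    and stop once the inlier ratio reaches ratio_thresh.
    points: (N, 3) array of lidar returns on the calibration plate."""
    rng = np.random.default_rng()
    best_plane, best_ratio = None, 0.0
    for _ in range(max_iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                     # the 3 samples are (nearly) collinear
            continue
        normal /= norm
        d = -normal.dot(p1)                 # plane: normal . x + d = 0
        dist = np.abs(points @ normal + d)  # distance of every point to the plane
        ratio = np.count_nonzero(dist < dist_thresh) / len(points)
        if ratio > best_ratio:
            best_plane, best_ratio = (normal, d), ratio
        if ratio >= ratio_thresh:           # proportion meets the threshold: done
            break
    return best_plane, best_ratio
```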
(2) Performing registration among the multiple laser radars with an artificial fish swarm algorithm;
An artificial fish structure is defined; each individual fish has 6 dimensions, 3 representing rotation and 3 representing translation. An artificial fish swarm optimizer class is defined, and the optimizer's parameters and the algorithm's search angle in space are initialized. The number of individual fish in the school is set, all individuals are initialized, and one individual is selected at random and taken as the global optimum. The current state of the point cloud on the corresponding laser radar surface is taken as the state of the artificial fish, and for each fish i the foraging, random, clustering and rear-end (tail-chasing) behaviours of the artificial fish school are executed; the current state of each artificial fish is obtained; the optimal individual X_best in the current school and the computed minimum distance between the point cloud planes are recorded.
It is then judged whether the current minimum distance satisfies the threshold or the maximum number of iterations has been reached. If so, the global optimal individual is output, the rigid transformation between the two laser radars is computed from the 6-dimensional data of the optimal individual fish, the source point cloud data is transformed onto the target point cloud data, and the algorithm ends; otherwise the foraging, random, clustering and rear-end behaviours are executed on each fish i to obtain its current state, until the optimal value is solved. Solving for the optimal individual with the artificial fish school is shown in fig. 2; a Python sketch of the four behaviours is given after their descriptions below.
Executing the foraging behaviour optimization prevents the artificial fish swarm algorithm from getting stuck in a local optimum: if the distance D_min(X_1) from all points of the point cloud in the current state X_1 to the corresponding target point cloud plane and the distance D_min(X_2) for a candidate state X_2 satisfy D_min(X_1) > D_min(X_2), state X_1 is updated towards X_2.
When states X_1 and X_2 do not satisfy D_min(X_1) > D_min(X_2), a new state X_2 is randomly selected within the movable distance range and the comparison continues; if the number of attempts reaches the preset value and the condition is still not met, a state is selected at random. The state X_next updated from X_1 towards X_2 is calculated as
X_next = X_1 + rand() × step × (X_2 − X_1) / ‖X_2 − X_1‖
where step represents the maximum distance that can be moved next, and rand() represents a random number generated between 0 and 1.
When no smaller distance value can be found after several attempts in the foraging behaviour, the random behaviour is performed, which helps the algorithm escape from a local optimum; the next state of an artificial fish performing the random behaviour is calculated as
X_next = X_1 + visual × rand().
The clustering behaviour is executed to control the direction of motion and the density of the artificial fish school. For a fish in state X_1, let N_n be the number of individual fish within its maximum detection range and X_c the centre position of these fish. If
D_min(X_c)/N_n < δ × D_min(X_1), with 0 < δ < 1,
there is more food at X_c, i.e. the size relation of the foraging behaviour is met, and the spacing between individual fish is moderate, which means a state better than the current one has been found; the current state is updated towards that better state. Otherwise the foraging behaviour is performed.
The purpose of executing the rear-end behaviour is to find the optimal individual fish. When a fish in state X_1 finds an individual X_next within its sensing distance, it compares the distances from all points of the source point cloud to the target point cloud plane in the two states. If D_min(X_1) > D_min(X_next), and the number N_n of individual fish around X_next satisfies
D_min(X_next)/N_n < δ × D_min(X_1), with 0 < δ < 1,
then X_next is the best individual found so far, and the other individual fish update their own state towards the state of this optimal individual fish; otherwise the foraging behaviour is performed.
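The following Python sketch illustrates the fitness and the four behaviours just described, under two stated assumptions: the 6-dimensional individual is read as three Euler angles plus three translations, and a state is scored by the mean distance from the transformed source points to the target plane (n, d) fitted in step (1). The names d_min, visual, step, delta and tries are illustrative, not from the patent:

```python
import numpy as np

rng = np.random.default_rng()

def rigid_transform(x):
    """6-D individual -> rotation matrix and translation (3 Euler angles + 3 offsets)."""
    a, b, c = x[:3]
    Rx = np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
    Ry = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
    Rz = np.array([[np.cos(c), -np.sin(c), 0], [np.sin(c), np.cos(c), 0], [0, 0, 1]])
    return Rz @ Ry @ Rx, x[3:]

def d_min(x, src, plane):
    """Fitness: mean distance from the transformed source cloud to the target
    plane (n, d); smaller means 'more food'."""
    R, t = rigid_transform(x)
    n, d = plane
    return np.mean(np.abs(((R @ src.T).T + t) @ n + d))

def move_towards(x1, x2, step):
    """X_next = X_1 + rand() * step * (X_2 - X_1) / ||X_2 - X_1||."""
    return x1 + rng.random() * step * (x2 - x1) / np.linalg.norm(x2 - x1)

def random_move(x, visual):
    """Random behaviour: X_next = X_1 + visual * rand() (random direction)."""
    return x + visual * rng.uniform(-1, 1, 6)

def forage(x, src, plane, visual, step, tries=5):
    """Foraging: try candidate states in the visual range; move towards a
    better one, or fall back to the random behaviour after `tries` failures."""
    for _ in range(tries):
        x2 = x + visual * rng.uniform(-1, 1, 6)
        if d_min(x2, src, plane) < d_min(x, src, plane):
            return move_towards(x, x2, step)
    return random_move(x, visual)

def neighbours(x, school, visual):
    return [f for f in school if 0 < np.linalg.norm(f - x) < visual]

def swarm(x, school, src, plane, visual, step, delta):
    """Clustering: move towards the neighbours' centre X_c when
    D_min(X_c)/N_n < delta * D_min(X_1); otherwise forage."""
    nbrs = neighbours(x, school, visual)
    if nbrs:
        xc = np.mean(nbrs, axis=0)
        if d_min(xc, src, plane) / len(nbrs) < delta * d_min(x, src, plane):
            return move_towards(x, xc, step)
    return forage(x, src, plane, visual, step)

def follow(x, school, src, plane, visual, step, delta):
    """Rear-end (tail-chasing): move towards the best neighbour X_next when it
    is better than X_1 and not overcrowded; otherwise forage."""
    nbrs = neighbours(x, school, visual)
    if nbrs:
        best = min(nbrs, key=lambda f: d_min(f, src, plane))
        if (d_min(best, src, plane) < d_min(x, src, plane)
                and d_min(best, src, plane) / len(nbrs) < delta * d_min(x, src, plane)):
            return move_towards(x, best, step)
    return forage(x, src, plane, visual, step)
```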
(3) Collecting a two-dimensional image of the calibration plate with a camera and extracting the corner points of the calibration plate in the image, specifically: stickers of four different colours are pasted onto the four corner points of the calibration plate; the camera collects images; the pixels of each image are traversed to find the four coloured regions, the outermost pixel position of each region is taken, and that extreme value is used as the pixel coordinate of the corresponding corner point in the camera image.
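A short OpenCV sketch of this colour-region corner search follows. The HSV ranges are hypothetical and must be tuned to the actual stickers and lighting (red in particular may need a second hue range near 180), and "outermost pixel" is read here as the region pixel farthest from the image centre, one plausible interpretation of the extreme value described above:

```python
import cv2
import numpy as np

# Hypothetical HSV ranges for the four sticker colours; the real values
# depend on the stickers and the lighting.
STICKER_HSV = {
    "red":    ((0, 120, 80),   (8, 255, 255)),
    "yellow": ((22, 120, 80),  (35, 255, 255)),
    "green":  ((45, 120, 80),  (75, 255, 255)),
    "blue":   ((100, 120, 80), (130, 255, 255)),
}

def image_corner_pixels(bgr):
    """For each sticker colour, threshold the image and take the outermost
    pixel of the region (here: farthest from the image centre) as the pixel
    coordinate of that corner point."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, w = hsv.shape[:2]
    centre = np.array([w / 2.0, h / 2.0])
    corners = {}
    for name, (lo, hi) in STICKER_HSV.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        ys, xs = np.nonzero(mask)
        if len(xs) == 0:
            continue                      # sticker not visible in this image
        pts = np.stack([xs, ys], axis=1).astype(float)
        corners[name] = pts[np.argmax(np.linalg.norm(pts - centre, axis=1))]
    return corners
```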
(4) Extracting the corner points of the calibration plate from the three-dimensional point cloud data, specifically: removing useless point cloud data on the calibration plate with pass-through filtering and conditional filtering; fitting the four edges of the calibration plate to obtain the general spatial equations of the four edge lines; and solving the intersection points of adjacent lines in three-dimensional space, the solved intersection points being the laser radar point cloud corner points. The general equations of the lines on which the four edges of the calibration plate lie are obtained as follows:
Assume a point P = (x, y, z) in the three-dimensional point cloud data. Its pitch angle and laser-line number are computed as
Ange = arctan( z / √(x² + y²) ),
n = Ange / gene (rounded to the nearest integer),
where Ange is the pitch angle of the laser beam, obtained from the three-dimensional coordinates of the point, gene is a fixed scaling factor, and n is the number of the laser line on which point P lies.
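As a sketch under the reading given above (line number = pitch angle divided by the fixed spacing gene), the computation might look like this; real lidars may also need a constant offset:

```python
import numpy as np

def laser_line_number(p, gene):
    """Pitch angle of a return p = (x, y, z) and its (rounded) laser-line
    index, following the Ange / gene relation above. `gene` is the lidar's
    fixed angular spacing between beams (sensor-specific)."""
    x, y, z = p
    ange = np.degrees(np.arctan2(z, np.hypot(x, y)))   # pitch angle in degrees
    return int(round(ange / gene))
```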
Then the intersection points of adjacent straight lines are solved in three-dimensional space; each solved intersection is a laser radar point cloud corner point. The method is as follows:
With the general equations of the spatial lines known, let P_1 = (x_1, y_1, z_1) and P_2 = (x_2, y_2, z_2) be two points on line L_1, and P_3 = (x_3, y_3, z_3) and P_4 = (x_4, y_4, z_4) two points on line L_2. The direction vectors of the two lines are
s_1 = (x_2 − x_1, y_2 − y_1, z_2 − z_1), s_2 = (x_4 − x_3, y_4 − y_3, z_4 − z_3),
and the normal vector perpendicular to both L_1 and L_2 is n = s_1 × s_2. Through an arbitrary point P_1 on L_1 and the normal vector n, a plane containing both L_1 and n can be constructed, written A_1(x − x_1) + B_1(y − y_1) + C_1(z − z_1) = 0. Combining this plane with the equation of L_2 as a system yields a point Q_1 = (X_1, Y_1, Z_1), the intersection of the plane with L_2. In the same way a point Q_2 = (X_2, Y_2, Z_2) is obtained as the intersection of the corresponding plane through L_2 with L_1.
Since L_1 and L_2 are two distinct fitted lines that in general do not intersect exactly, the intersection of the fitted lines is estimated as the average of the coordinates of Q_1 and Q_2.
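A small Python sketch of this two-plane intersection estimate (the function name is illustrative):

```python
import numpy as np

def corner_from_lines(p1, p2, p3, p4):
    """Estimate the 'intersection' of two fitted 3-D edge lines, as described
    above: L1 passes through p1, p2 and L2 through p3, p4 (float arrays of
    shape (3,)). Q1 is where the plane containing L1 and the common normal
    meets L2; Q2 is the symmetric point on L1; the corner is their average."""
    s1, s2 = p2 - p1, p4 - p3              # direction vectors of L1 and L2
    n = np.cross(s1, s2)                   # normal perpendicular to both lines
    m1 = np.cross(s1, n)                   # normal of the plane containing L1 and n
    t = m1.dot(p1 - p3) / m1.dot(s2)       # solve m1 . (p3 + t*s2 - p1) = 0
    q1 = p3 + t * s2                       # intersection of that plane with L2
    m2 = np.cross(s2, n)                   # symmetric plane through L2
    t = m2.dot(p3 - p1) / m2.dot(s1)
    q2 = p1 + t * s1                       # intersection with L1
    return (q1 + q2) / 2.0                 # estimated point cloud corner
```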
(5) From the two-dimensional and three-dimensional image data of the calibration plate at the obtained corner points, solving the pose relation between the camera coordinate system and the world coordinate system of the three-dimensional scene, together with the optimal solution of the transformation between the two-dimensional and three-dimensional data, to complete the data fusion.
The absolute rotation matrix R and translation vector T between the camera coordinate system and the three-dimensional world coordinate system are solved from the two-dimensional image and the three-dimensional point cloud at the obtained corner coordinates; the pose relation between the camera coordinate system and the three-dimensional world coordinate system is obtained from R and T; and the optimal solution of the transformation between the three-dimensional and two-dimensional data is solved by a method for solving the PnP problem, completing the data fusion.
The specific steps for calculating R, T and the optimal solution of the transformation between the three-dimensional and two-dimensional data are as follows:
Let a point in space be [X_w, Y_w, Z_w, 1]^T and its corresponding point on the two-dimensional image be [x, y, 1]^T. The transformation from the three-dimensional point to the two-dimensional point is expressed as
Z_c · [x, y, 1]^T = K · [R | T] · [X_w, Y_w, Z_w, 1]^T, with K = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]],
where f_x, f_y, u_0, v_0 are fixed coefficients read from the camera parameters, so the transformation contains the 12 unknowns R_11, R_12, R_13, R_21, R_22, R_23, R_31, R_32, R_33, T_1, T_2, T_3. Expanding the formula:
x = X_w(f_x·R_11 + u_0·R_31) + Y_w(f_x·R_12 + u_0·R_32) + Z_w(f_x·R_13 + u_0·R_33) + f_x·T_1 + u_0·T_3
y = X_w(f_y·R_21 + v_0·R_31) + Y_w(f_y·R_22 + v_0·R_32) + Z_w(f_y·R_23 + v_0·R_33) + f_y·T_2 + v_0·T_3
The rotation matrix is orthogonal, so the inner product of any two distinct rows or columns is 0; by the properties of orthogonal matrices its rank is 3 and it has 3 degrees of freedom, and the translation vector contributes 3 more unknowns. The 2D and 3D structures of the corner points provide X_w, Y_w, Z_w and x, y. From the basic theory of solving equations, 6 unknowns require 6 simultaneous equations: one option is to build an equation system from 3 pairs of three-dimensional and two-dimensional points; another is to solve from additional pairs of points by SVD.
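The patent specifies only "a method for solving the PnP problem", without naming a solver; as a non-authoritative sketch, OpenCV's solvePnP can fill that role, assuming undistorted pixel coordinates:

```python
import cv2
import numpy as np

def solve_camera_lidar_extrinsics(obj_pts, img_pts, fx, fy, u0, v0):
    """Solve R and T between the camera and three-dimensional world coordinate
    systems from matched 3-D corner points (obj_pts, Nx3) and 2-D image corner
    points (img_pts, Nx2) by solving the PnP problem."""
    K = np.array([[fx, 0, u0],
                  [0, fy, v0],
                  [0,  0,  1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(obj_pts.astype(np.float64),
                                  img_pts.astype(np.float64), K, None)
    if not ok:
        raise RuntimeError("PnP solution failed")
    R, _ = cv2.Rodrigues(rvec)        # rotation vector -> 3x3 rotation matrix
    return R, tvec

def project_lidar_to_image(points, R, T, fx, fy, u0, v0):
    """Project lidar points into the image with the solved pose, completing
    the fusion: Z_c [x, y, 1]^T = K (R P_w + T)."""
    cam = (R @ points.T).T + T.ravel()
    z = cam[:, 2]
    return np.stack([fx * cam[:, 0] / z + u0,
                     fy * cam[:, 1] / z + v0], axis=1)
```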

Claims (4)

1. A camera and multi-laser-radar fusion method based on an ROS robot, for an ROS robot carrying a camera and multiple laser radars simultaneously, characterized by comprising the following steps:
(1) Collecting three-dimensional point cloud data of a calibration plate with multiple laser radars, and fitting the plane where the calibration plate lies;
(2) Performing registration among the multiple laser radars with an artificial fish swarm algorithm;
(3) Acquiring a two-dimensional image of the calibration plate with a camera, and extracting the corner points of the calibration plate in the two-dimensional image;
(4) Extracting the corner points of the calibration plate from the three-dimensional point cloud data;
(5) From the two-dimensional and three-dimensional image data of the calibration plate at the obtained corner points, solving the pose relation between the camera coordinate system and the world coordinate system of the three-dimensional scene, together with the optimal solution of the transformation between the two-dimensional and three-dimensional data, to complete the data fusion;
in step (2), an artificial fish structure is defined; each individual fish has 6 dimensions, 3 representing rotation and 3 representing translation; an artificial fish swarm optimizer class is defined, and the optimizer's parameters and the algorithm's search angle in space are initialized; the number of individual fish in the school is set, all individuals are initialized, and one individual is selected at random and taken as the global optimum; the current state of the point cloud on the corresponding laser radar surface is taken as the state of the artificial fish, and for each fish i the foraging, random, clustering and rear-end behaviours of the artificial fish school are executed; the current state of each artificial fish is obtained; the optimal individual X_best in the current school and the computed minimum distance between the point cloud planes are recorded;
it is then judged whether the current minimum distance satisfies the threshold or the maximum number of iterations has been reached; if so, the global optimal individual is output, the rigid transformation between the two laser radars is computed from the 6-dimensional data of the optimal individual fish, the source point cloud data is transformed onto the target point cloud data, and the algorithm ends; otherwise the foraging, random, clustering and rear-end behaviours are executed on each fish i to obtain its current state, until the optimal value is solved;
carrying out foraging action on artificial fish in step (2)Optimizing the state, and specifically comprising the following steps: as in the current state X 1 Distances D between all points in the point cloud to the corresponding target point cloud plane min (X 1 ) And state X 2 Distances D between all points in the point cloud and the corresponding target point cloud plane min (X 2 ) Satisfies D min (X 1 )>D min (X 2 ) Will state X 1 Is updated to X 2
When state X 1 And X 2 Does not satisfy the relationship of D min (X 1 )>D min (X 2 ) Then, a state X is randomly selected again within the movable distance range 2 Continuing the comparison, if the number of attempts reaches the preset value and still does not meet the condition, randomly selecting a state from the state X 1 To state X 2 Updated state X next Calculated according to the following formula:
Figure FDA0004054474530000011
where step represents the maximum value of the next movable distance, and rand () represents a random number generated between 0 and 1;
in step (2), when no smaller distance value can be found after several attempts in the foraging behaviour, the random behaviour is performed; the next state of the random behaviour is calculated as
X_next = X_1 + visual × rand();
in step (2), the clustering behaviour is executed to control the direction of motion and the density of the artificial fish school; for a fish in state X_1, let N_n be the number of individual fish within its maximum detection range and X_c the centre position of these fish, and compute D_min(X_c)/N_n; if
D_min(X_c)/N_n < δ × D_min(X_1), with 0 < δ < 1,
there is more food at X_c, i.e. the size relation of the foraging behaviour is met, and the spacing between individual fish is moderate, which means a state better than the current one has been found; the current state is updated to that better state; otherwise the foraging behaviour is executed;
in step (2), the rear-end behaviour is executed to find the optimal individual fish; when a fish in state X_1 finds an individual X_next within its sensing distance, it compares the distances from all points of the source point cloud to the target point cloud plane in the two states; if D_min(X_1) > D_min(X_next), and the number N_n of individual fish around X_next satisfies
D_min(X_next)/N_n < δ × D_min(X_1), with 0 < δ < 1,
then X_next is the best individual found so far, and the other individual fish update their own state to the state of this optimal individual fish; otherwise the foraging behaviour is executed;
step (4) comprises the following steps: removing useless point cloud data on the calibration plate with pass-through filtering and conditional filtering; fitting the four edges of the calibration plate to obtain the general spatial equations of the four edge lines; and solving the intersection points of adjacent lines in three-dimensional space, the solved intersection points being the laser radar point cloud corner points.
2. The ROS robot-based camera and multi-lidar fusion method of claim 1, wherein the step of fitting the plane where the calibration plate lies in step (1) is: randomly selecting 3 non-collinear points from the point cloud acquired by the laser radar, constructing the general equation of a three-dimensional plane from these 3 points, and initializing the plane corresponding to the point cloud; calculating the distance from every remaining point in the point cloud to this plane; counting the points whose distance is smaller than a given threshold, and computing the proportion of those points among all points; if the computed proportion exceeds a given threshold, taking the current plane parameters as the plane fitted to the point cloud and ending the iteration; otherwise, restarting the fitting from the beginning until the maximum number of iterations is reached.
3. The ROS robot-based camera and multi-lidar fusion method of claim 1, wherein in step (3), stickers of four different colours are pasted onto the four corner points of the calibration plate, the camera is used to collect images, the pixels of each image are traversed to find the four coloured regions, the outermost pixel position of each region is taken, and that extreme value is used as the pixel coordinate of the corresponding corner point in the camera image.
4. The ROS robot-based camera and multi-lidar fusion method of claim 1, wherein step (5) comprises: solving the absolute rotation matrix R and translation vector T between the camera coordinate system and the three-dimensional world coordinate system from the two-dimensional image and the three-dimensional point cloud at the obtained corner coordinates; obtaining the pose relation between the camera coordinate system and the three-dimensional world coordinate system from R and T; and solving the optimal solution of the transformation between the three-dimensional and two-dimensional data by a method for solving the PnP problem, completing the data fusion.
CN202110588529.XA 2021-05-28 2021-05-28 Camera and multi-laser-radar fusion method based on ROS robot Active CN113447948B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110588529.XA CN113447948B (en) 2021-05-28 2021-05-28 Camera and multi-laser-radar fusion method based on ROS robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110588529.XA CN113447948B (en) 2021-05-28 2021-05-28 Camera and multi-laser-radar fusion method based on ROS robot

Publications (2)

Publication Number Publication Date
CN113447948A CN113447948A (en) 2021-09-28
CN113447948B (en) 2023-03-21

Family

ID=77810315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110588529.XA Active CN113447948B (en) 2021-05-28 2021-05-28 Camera and multi-laser-radar fusion method based on ROS robot

Country Status (1)

Country Link
CN (1) CN113447948B (en)

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8290305B2 (en) * 2009-02-13 2012-10-16 Harris Corporation Registration of 3D point cloud data to 2D electro-optical image data
US10852419B2 (en) * 2017-10-20 2020-12-01 Texas Instruments Incorporated System and method for camera radar fusion
CN108010036B (en) * 2017-11-21 2020-01-21 江南大学 Object symmetry axis detection method based on RGB-D camera
CN108828606B (en) * 2018-03-22 2019-04-30 中国科学院西安光学精密机械研究所 One kind being based on laser radar and binocular Visible Light Camera union measuring method
CN110223379A (en) * 2019-06-10 2019-09-10 于兴虎 Three-dimensional point cloud method for reconstructing based on laser radar
CN110444014A (en) * 2019-07-01 2019-11-12 淮阴工学院 The anti-method for early warning that knocks into the back based on reversed ST-MRF vehicle tracking algorithm
CN110349221A (en) * 2019-07-16 2019-10-18 北京航空航天大学 A kind of three-dimensional laser radar merges scaling method with binocular visible light sensor
CN110988857A (en) * 2019-12-18 2020-04-10 中国人民解放军空军工程大学 Air combat refueling airspace positioning method based on artificial fish swarm algorithm
CN112729344B (en) * 2020-12-30 2022-09-13 珠海市岭南大数据研究院 Sensor external reference calibration method without reference object
CN112669393B (en) * 2020-12-31 2021-10-22 中国矿业大学 Laser radar and camera combined calibration method

Also Published As

Publication number Publication date
CN113447948A (en) 2021-09-28

Similar Documents

Publication Publication Date Title
CN109598765B (en) Monocular camera and millimeter wave radar external parameter combined calibration method based on spherical calibration object
CN111914715B (en) Intelligent vehicle target real-time detection and positioning method based on bionic vision
CN112396664B (en) Monocular camera and three-dimensional laser radar combined calibration and online optimization method
CN110853075B (en) Visual tracking positioning method based on dense point cloud and synthetic view
WO2021143935A1 (en) Detection method, device, electronic apparatus, and storage medium
Gao et al. Ground and aerial meta-data integration for localization and reconstruction: A review
Choe et al. Fast point cloud segmentation for an intelligent vehicle using sweeping 2D laser scanners
WO2021195939A1 (en) Calibrating method for external parameters of binocular photographing device, movable platform and system
CN111476242A (en) Laser point cloud semantic segmentation method and device
CN116258817B (en) Automatic driving digital twin scene construction method and system based on multi-view three-dimensional reconstruction
CN112907573A (en) Depth completion method based on 3D convolution
CN115201849A (en) Indoor map building method based on vector map
CN113838069B (en) Point cloud segmentation method and system based on flatness constraint
Perrollaz et al. Using obstacles and road pixels in the disparity-space computation of stereo-vision based occupancy grids
CN113447948B (en) Camera and multi-laser-radar fusion method based on ROS robot
CN112950786A (en) Vehicle three-dimensional reconstruction method based on neural network
CN112182122A (en) Method and device for acquiring navigation map of working environment of mobile robot
CN111198563A (en) Terrain recognition method and system for dynamic motion of foot type robot
Abduldayem et al. 3D reconstruction of complex structures with online profiling and adaptive viewpoint sampling
Bueno et al. Textureless object recognition and arm planning for a mobile manipulator
Chen et al. 3D Object Detection of Cars and Pedestrians by Deep Neural Networks from Unit-Sharing One-Shot NAS
Su Vanishing points in road recognition: A review
CN111678511B (en) Multi-sensor fusion positioning method and system for robot
CN112530014B (en) Three-dimensional reconstruction method and device for indoor scene of multiple unmanned aerial vehicles
Kinzel et al. Multiplatform stereoscopic 3D terrain mapping for UAV localization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant