CN114137571A - Method for simulating laser radar data by multiple depth cameras

Method for simulating laser radar data by multiple depth cameras

Info

Publication number
CN114137571A
Authority
CN
China
Prior art keywords
depth
cameras
laser radar
camera
data
Prior art date
Legal status
Pending
Application number
CN202111289579.4A
Other languages
Chinese (zh)
Inventor
吴佳玲
夏营威
张文
高震宇
王凡
王乐乐
张龙
刘勇
Current Assignee
Hefei Institutes of Physical Science of CAS
Original Assignee
Hefei Institutes of Physical Science of CAS
Priority date
Filing date
Publication date
Application filed by Hefei Institutes of Physical Science of CAS
Priority to CN202111289579.4A
Publication of CN114137571A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931 Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

A method for simulating lidar data with multiple depth cameras comprises the following steps: S1, fixing a plurality of depth cameras at set angles on a mounting frame, the frame being installed on the housing of a robot, and placing a camera synchronization module inside the housing, the signal ends of the module being connected to the corresponding ports of all depth cameras so that the module synchronously triggers the image-acquisition signals of the cameras; S2, simulating each depth camera as a multi-line lidar in its own camera coordinate system; and S3, transforming the simulated lidars into a common coordinate system, thereby obtaining multi-line lidar data with a 360-degree horizontal field of view. Because the data of each depth camera are first converted into lidar data and the resulting lidar data are then transformed and stitched, the amount of data to be processed is small and the scheme is flexible.

Description

Method for simulating laser radar data by multiple depth cameras
Technical Field
The invention belongs to the technical field of robotics, and particularly relates to a method for simulating lidar data with multiple depth cameras.
Background
Lidar is an essential sensor for autonomous localization and navigation, autonomous perception and obstacle avoidance. In these fields, simultaneous localization and mapping (SLAM) methods are usually classified by sensor, the main sensors being lidar and cameras, so SLAM is broadly divided into laser SLAM and visual SLAM. Laser SLAM appeared earlier than visual SLAM and is more mature in theory, technology and productization; it is currently the most stable and mainstream approach to localization and navigation. Single-line lidar is mostly used for indoor navigation, while multi-line lidar is mostly used in autonomous driving. Compared with single-line lidar, multi-line lidar can scan in three dimensions, which raises the dimensionality of the data, improves scene reconstruction and strengthens environmental perception, but it is expensive. Replacing the lidar with depth cameras is therefore a common scheme, and methods that stitch the data of multiple depth cameras to simulate lidar data already exist in the prior art. For example, patent CN107255821A, published by the Chinese intellectual property office, discloses a method for detecting obstacles within a range of more than 90 degrees and less than 6 meters in front of a robot, in which the depth data of several cameras are first stitched and the depth data are then converted into two-dimensional lidar data scanned along the vertical direction of the depth image. However, each depth camera produces w × h × FPS depth points per second, so 8 depth cameras produce 8 × w × h × FPS points per second; at a typical resolution of 640 × 480 and 30 FPS, a single camera already yields more than 9 million depth points per second. The amount of data is huge, tens of thousands of points have to be processed at a time, and this places a heavy burden on the subsequent processing steps.
Disclosure of Invention
In order to detect obstacles in all directions (360 degrees) around the robot body, the invention provides a method for simulating lidar data with multiple depth cameras. The specific scheme is as follows:
a method for simulating lidar data by multiple depth cameras comprises the following steps:
S1, fixing a plurality of depth cameras at set angles on a mounting frame, the frame being installed on the housing of the robot, and obtaining multiple groups of actually synchronized images from all the depth cameras;
S2, simulating each of the depth cameras as a multi-line lidar in its own camera coordinate system;
and S3, transforming the simulated lidars of all depth cameras into the same coordinate system, so that the depth cameras together simulate multi-line lidar data with a 360-degree horizontal field of view.
Specifically, step S1 further includes: a camera synchronization module is placed inside the robot housing, its signal ends are connected to the corresponding ports of all depth cameras, and the module is used to synchronously trigger the image-acquisition signals of the cameras.
Specifically, step S1 further includes: the frame rates of all cameras are set to the same value and the cameras then shoot continuously; the timestamps of the images taken by the different cameras exhibit fixed time differences, and the groups of images that are actually synchronized across the cameras are computed from these fixed time differences, as illustrated by the sketch below.
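As an illustration of this timestamp-based grouping, the following sketch (in Python) pairs the frames of several cameras whose timestamps differ by fixed, pre-measured offsets. The data layout, the offset values and the matching tolerance are assumptions made for the example and are not prescribed by the invention.

```python
# Minimal sketch of timestamp-based frame grouping (assumed data layout).
# frame_lists[i] is the frame stream of camera i as (timestamp_seconds, depth_image)
# tuples sorted by timestamp; offsets[i] is the fixed time difference of camera i
# relative to camera 0, measured beforehand from the recorded timestamps.

def group_synchronized_frames(frame_lists, offsets, tolerance=0.005):
    """Return groups of frames (one per camera) that were actually exposed together."""
    groups = []
    for t0, img0 in frame_lists[0]:
        group = [img0]
        for cam_idx in range(1, len(frame_lists)):
            frames = frame_lists[cam_idx]
            if not frames:
                group = None
                break
            expected = t0 + offsets[cam_idx]
            # pick the frame of this camera closest to the expected timestamp
            best = min(frames, key=lambda f: abs(f[0] - expected))
            if abs(best[0] - expected) > tolerance:
                group = None      # no sufficiently close frame -> drop this group
                break
            group.append(best[1])
        if group is not None:
            groups.append(group)
    return groups
```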
Specifically, six depth cameras are arranged at the four corners of the robot housing: 4 depth cameras face the advancing direction of the robot and 2 depth cameras face backwards. The optical centers of the six depth cameras lie on the same horizontal line of the robot housing, and the vertical fields of view of the six cameras overlap. The depth cameras face each other in pairs, their horizontal fields of view overlap, and the combined horizontal field of view surrounds the front, left and right sides of the robot.
Specifically, the six camera positions are:
the six depth cameras are placed at the four corners of a rectangular frame, one depth camera being arranged opposite two depth cameras placed at other, oblique angles; the single depth camera makes an included angle of 32.16 degrees with the vertical central axis, while the obliquely placed depth cameras make an included angle of 23.02 degrees with the vertical central axis and 46.19 degrees with the horizontal central axis.
Specifically, step S2 is as follows:
S21, obtaining the intrinsic and extrinsic parameters of each depth camera by calibrating the single depth camera;
S22, dividing the depth image into n depth image areas, where the height of each area is d pixels, d = h/n − 1, and the width of each area is the depth image width w;
S23, converting the n depth image areas into n-line lidar data: traversing the pixel coordinate points (d × w of them) of the first area of the depth image column by column and, for each column, returning the point with the smallest depth value, i.e. the point with the smallest z value among the d pixels of that column; assume that the point with the smallest column depth value in the first area is m and that its pixel coordinates are (u, v, z);
S24, converting the pixel coordinate point m (u, v, z) of the depth image into a coordinate point M (x, y, z) in the depth camera coordinate system;
S25, calculating the point M (x, y, z) in the world coordinate system corresponding to the point m (u, v, z) of the current depth image according to the calibrated extrinsic matrices R and T: x = z·(u − u0)·dx/f, y = z·(v − v0)·dy/f, z = z, where u0 and v0 are the offsets of the horizontal and vertical axes of the pixel coordinate system with respect to the optical center of the camera, dx and dy are the physical sizes of a pixel along the two axes, and f is the focal length of the camera;
S26, converting the point M in the world coordinate system into the scan angle of the corresponding point M in the lidar coordinate system,
θ = arctan(x/z),
with scan range [α, β]. Assuming that the horizontal resolution of the converted laser is K, the index value i of the projection of the point M into the range array of the lidar is
i = (θ − α)/((β − α)/K) = K·(θ − α)/(β − α),
and the distance of the measurement point M from the origin of the lidar coordinate system, measured in the scan plane, is
r = √(x² + z²),
which is stored at index i of the range array, finally forming a polar-coordinate array with the single-line lidar as its coordinate system;
S27, converting the pixel coordinate points of the other areas of the depth image in the same way to form n parallel single-line lidar scans, and converting the n single-line lidar scans into one lidar coordinate system, so that a single depth camera forms n-line lidar data, with unknown depth values acquired by the depth camera corresponding to unknown laser returns in the lidar;
and S28, repeating the above steps to convert the image data of the other depth cameras into n-line lidar data; a sketch of this per-camera conversion is given below.
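A minimal sketch of steps S22-S27 for a single camera is given below (Python with NumPy). The intrinsic values fx and cx (fx standing for f/dx and cx for u0 in the notation of step S25), the number of simulated lines n, the scan range [α, β] and the horizontal resolution K are placeholders that would come from the calibration of step S21; the in-plane range √(x² + z²) matches the scan angle arctan(x/z) used above. All names are illustrative, not part of the claimed method.

```python
import numpy as np

def depth_image_to_n_line_scan(depth, fx, cx, n, alpha, beta, K):
    """Convert one depth image (h x w array of metres) into n simulated scan lines.

    Splits the image into n horizontal bands (S22), keeps the nearest point of
    every column in a band (S23), converts it to camera coordinates (S24/S25)
    and projects it into a polar (angle index, range) array (S26/S27).
    """
    h, w = depth.shape
    d = h // n                       # band height in pixels (cf. d = h/n - 1 in S22)
    scans = np.full((n, K), np.inf)  # unknown depth -> unknown ("inf") laser return

    for line in range(n):
        band = depth[line * d:(line + 1) * d, :]
        for u in range(w):
            col = band[:, u]
            valid = col[col > 0]               # zero depth = no measurement
            if valid.size == 0:
                continue
            z = float(valid.min())             # nearest point of this column (S23)
            x = z * (u - cx) / fx              # pixel -> camera coordinates (S25)
            theta = np.arctan2(x, z)           # scan angle of the point (S26)
            if not (alpha <= theta <= beta):
                continue
            i = min(int(K * (theta - alpha) / (beta - alpha)), K - 1)  # range-array index
            r = np.hypot(x, z)                 # in-plane distance from the origin
            scans[line, i] = min(scans[line, i], r)
    return scans
```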
Specifically, step S3 includes: transforming each lidar coordinate system pairwise with the central-point coordinate system, converting the final plurality of lidar data sets into a data description in the central-point coordinate system, and thereby converting the depth data of the plurality of depth cameras into lidar data in the central-point coordinate system.
The invention has the beneficial effects that:
(1) According to the invention, the data of each single depth camera are first converted into lidar data, and the data of the several simulated lidars are then transformed and stitched, so that the amount of data to be processed is small and the scheme is flexible.
(2) The invention also adopts a camera synchronization module, which ensures the time synchronization of the image data taken by the multiple cameras by controlling their synchronous exposure, so that the simulated lidar data are complete and consistent at any given moment.
Drawings
FIG. 1 is a diagram of the position of a depth camera mounted on a mount in an embodiment;
FIG. 2 is a horizontal field of view diagram for six depth cameras.
FIG. 3 is a vertical field of view diagram for six depth cameras.
Fig. 4 is a structural diagram of a depth image divided into n depth image areas.
Detailed Description
A method for simulating lidar data by multiple depth cameras comprises the following steps:
S1, fixing a plurality of depth cameras at set angles on a mounting frame, the frame being installed on the housing of the robot, and placing a camera synchronization module inside the robot housing, the signal ends of the module being connected to the corresponding ports of all depth cameras so that the module synchronously triggers the image-acquisition signals of the cameras;
In this scheme, as shown in fig. 1, six depth cameras are arranged at the four corners of the robot housing: 4 depth cameras face the advancing direction of the robot and 2 depth cameras face backwards. The optical centers of the six depth cameras lie on the same horizontal line of the robot housing, and the vertical fields of view of the six cameras overlap. The depth cameras face each other in pairs, their horizontal fields of view overlap, and the combined horizontal field of view surrounds the front, left and right sides of the robot. Specifically, the six depth cameras are placed at the four corners of a rectangular frame, 270 mm wide and 418 mm long; one depth camera is arranged opposite two depth cameras placed at other, oblique angles, the single depth camera making an included angle of 32.16 degrees with the vertical central axis, while the obliquely placed depth cameras make an included angle of 23.02 degrees with the vertical central axis and 46.19 degrees with the horizontal central axis. The horizontal fields of view of the six depth cameras are shown in fig. 2 and the vertical fields of view in fig. 3.
S2, simulating each of the six depth cameras as a multi-line lidar in its own camera coordinate system. Each point in a depth image corresponds to a three-dimensional point in the local coordinate system of the imaging camera, so each frame of a depth camera can be converted into point cloud data in that camera's local three-dimensional coordinate system. Specifically, the point cloud data of a single depth camera are divided into a plurality of areas, the point cloud of each area is projected onto one scanning plane to simulate a single-line lidar, and the projections of the several areas together form the multi-line lidar data;
the method specifically comprises the following steps:
S21, obtaining the intrinsic and extrinsic parameters of the depth camera by calibrating the single depth camera;
S22, as shown in fig. 4, dividing the depth image into n depth image areas, where the height of each area is d pixels (d = h/n − 1) and the width is the depth image width w;
S23, converting the n depth image areas into n-line lidar data: traversing the pixel coordinate points (d × w of them) of the first area of the depth image column by column and, for each column, returning the point with the smallest depth value, i.e. the point with the smallest z value among the d pixels of that column; assume that the point with the smallest column depth value in the first area is m and that its pixel coordinates are (u, v, z);
S24, converting the pixel coordinate point m (u, v, z) of the depth image into a coordinate point M (x, y, z) in the depth camera coordinate system;
S25, calculating the point M (x, y, z) in the world coordinate system corresponding to the point m (u, v, z) of the current depth image according to the calibrated extrinsic matrices R and T, noting that the depth value z measured by the depth camera is already the depth value along the optical axis of the camera coordinate system: x = z·(u − u0)·dx/f, y = z·(v − v0)·dy/f, z = z, where u0 and v0 are the offsets of the horizontal and vertical axes of the pixel coordinate system with respect to the optical center of the camera, dx and dy are the physical sizes of a pixel along the two axes, and f is the focal length of the camera.
S26, converting the point M in the world coordinate system into the scan angle of the corresponding point M in the lidar coordinate system,
θ = arctan(x/z),
with scan range [α, β]. Assuming that the horizontal resolution of the converted laser is K, the index value i of the projection of the point M into the range array of the lidar is
i = (θ − α)/((β − α)/K) = K·(θ − α)/(β − α),
and the distance of the measurement point M from the origin of the lidar coordinate system, measured in the scan plane, is
r = √(x² + z²),
which is stored at index i of the range array, finally forming a polar-coordinate array with the single-line lidar as its coordinate system;
S27, converting the pixel coordinate points of the other areas of the depth image in the same way to form n parallel single-line lidar scans, and converting the n single-line lidar scans into one lidar coordinate system, so that a single depth camera forms n-line lidar data, with unknown depth values acquired by the depth camera corresponding to unknown laser returns in the lidar;
S28, repeating the above steps to convert the image data of the other 5 depth cameras into n-line lidar data. The coordinate conversion of six n-line lidars is thereby completed: the data become 6 sets of lidar data, which are then converted into lidar data in the coordinate system at the central point. Specifically, the lidar data of the six n-line lidars are coordinate-transformed into lidar data in a coordinate system whose origin is the center point of the rectangular frame.
S3, transforming the six depth cameras into the same coordinate system, so that together they simulate multi-line lidar data with a 360-degree horizontal field of view. Since the frame rates of the six depth cameras correspond to the scanning frequencies of the six simulated lidars, and the six cameras are time-synchronized, their data can be converted into the same coordinate system.
Specifically, the lidar data described in the 6 lidar coordinate systems are transformed into a description in the coordinate system at the center point: each lidar coordinate system is transformed pairwise with the central-point coordinate system, all 6 sets of lidar data are finally expressed in the central-point coordinate system, and the depth data of the 6 depth cameras are thus converted into lidar data in the central-point coordinate system, as sketched below.
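A sketch of this final merge is shown below (Python with NumPy). The 4 × 4 homogeneous transforms that map each simulated lidar frame into the central-point frame are assumed to be known from the mounting geometry and calibration; the function and parameter names are illustrative.

```python
import numpy as np

def merge_scans_to_center(per_camera_points, transforms_center_from_cam):
    """Express the simulated lidar points of all cameras in the central-point frame.

    per_camera_points:          list of (N_i x 3) arrays, one per simulated lidar,
                                each in that lidar's own coordinate system.
    transforms_center_from_cam: list of 4x4 homogeneous transforms mapping each
                                lidar frame into the common central-point frame.
    """
    merged = []
    for pts, T in zip(per_camera_points, transforms_center_from_cam):
        homogeneous = np.hstack([pts, np.ones((pts.shape[0], 1))])  # N x 4
        in_center = (T @ homogeneous.T).T[:, :3]                    # rigid transform
        merged.append(in_center)
    return np.vstack(merged)  # one 360-degree multi-line point set at the center
```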
The above description is only a preferred embodiment of the present invention, and the scope of the present invention is not limited thereto. Any equivalent substitution or modification that a person skilled in the art could readily conceive within the technical scope disclosed by the present invention, based on the technical solution and the inventive concept of the present invention, shall fall within the scope of protection of the present invention.

Claims (7)

1. A method for simulating lidar data with multiple depth cameras, characterized by comprising the following steps:
S1, fixing a plurality of depth cameras at set angles on a mounting frame, the frame being installed on the housing of the robot, and obtaining multiple groups of actually synchronized images from all the depth cameras;
S2, simulating each of the depth cameras as a multi-line lidar in its own camera coordinate system;
and S3, transforming the simulated lidars of all depth cameras into the same coordinate system, so that the depth cameras together simulate multi-line lidar data with a 360-degree horizontal field of view.
2. The method for simulating lidar data according to claim 1, wherein step S1 further comprises: a camera synchronization module is placed inside the robot housing, its signal ends are connected to the corresponding ports of all depth cameras, and the module is used to synchronously trigger the image-acquisition signals of the cameras.
3. The method for simulating lidar data according to claim 1, wherein step S1 further comprises: the frame rates of all cameras are set to the same value and the cameras then shoot continuously; the timestamps of the images taken by the different cameras exhibit fixed time differences, and the groups of images that are actually synchronized across the cameras are computed from these fixed time differences.
4. The method for simulating lidar data with multiple depth cameras according to claim 1, wherein the number of depth cameras is six, the six depth cameras are placed at the four corners of the housing of the robot, 4 depth cameras face the advancing direction of the robot and 2 depth cameras face backwards; the optical centers of the six depth cameras lie on the same horizontal line of the robot housing, and the vertical fields of view of the six cameras overlap; the depth cameras face each other in pairs, their horizontal fields of view overlap, and the combined horizontal field of view surrounds the front, left and right sides of the robot.
5. The method of claim 4, wherein the six camera positions are:
the six depth cameras are placed at the four corners of a rectangular frame, one depth camera being arranged opposite two depth cameras placed at other, oblique angles; the single depth camera makes an included angle of 32.16 degrees with the vertical central axis, while the obliquely placed depth cameras make an included angle of 23.02 degrees with the vertical central axis and 46.19 degrees with the horizontal central axis.
6. The method for simulating lidar data by using multiple depth cameras according to claim 1, wherein the step S2 is as follows:
S21, obtaining the intrinsic and extrinsic parameters of the depth camera by calibrating the single depth camera;
S22, dividing the depth image into n depth image areas, where the height of each area is d pixels, d = h/n − 1, and the width of each area is the depth image width w;
S23, converting the n depth image areas into n-line lidar data: traversing the pixel coordinate points (d × w of them) of the first area of the depth image column by column and, for each column, returning the point with the smallest depth value, i.e. the point with the smallest z value among the d pixels of that column; assume that the point with the smallest column depth value in the first area is m and that its pixel coordinates are (u, v, z);
S24, converting the pixel coordinate point m (u, v, z) of the depth image into a coordinate point M (x, y, z) in the depth camera coordinate system;
S25, calculating the point M (x, y, z) in the world coordinate system corresponding to the point m (u, v, z) of the current depth image according to the calibrated extrinsic matrices R and T: x = z·(u − u0)·dx/f, y = z·(v − v0)·dy/f, z = z, where u0 and v0 are the offsets of the horizontal and vertical axes of the pixel coordinate system with respect to the optical center of the camera, dx and dy are the physical sizes of a pixel along the two axes, and f is the focal length of the camera;
S26, converting the point M in the world coordinate system into the scan angle of the corresponding point M in the lidar coordinate system,
θ = arctan(x/z),
with scan range [α, β]. Assuming that the horizontal resolution of the converted laser is K, the index value i of the projection of the point M into the range array of the lidar is
i = (θ − α)/((β − α)/K) = K·(θ − α)/(β − α),
and the distance of the measurement point M from the origin of the lidar coordinate system, measured in the scan plane, is
r = √(x² + z²),
which is stored at index i of the range array, finally forming a polar-coordinate array with the single-line lidar as its coordinate system;
S27, converting the pixel coordinate points of the other areas of the depth image in the same way to form n parallel single-line lidar scans, and converting the n single-line lidar scans into one lidar coordinate system, so that a single depth camera forms n-line lidar data, with unknown depth values acquired by the depth camera corresponding to unknown laser returns in the lidar;
and S28, repeating the above steps to convert the image data of the other depth cameras into n-line lidar data.
7. The method for simulating lidar data with multiple depth cameras according to claim 1, wherein step S3 specifically comprises: transforming each lidar coordinate system pairwise with the central-point coordinate system, converting the final plurality of lidar data sets into a data description in the central-point coordinate system, and thereby converting the depth data of the plurality of depth cameras into lidar data in the central-point coordinate system.
CN202111289579.4A 2021-11-02 2021-11-02 Method for simulating laser radar data by multiple depth cameras Pending CN114137571A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111289579.4A CN114137571A (en) 2021-11-02 2021-11-02 Method for simulating laser radar data by multiple depth cameras

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111289579.4A CN114137571A (en) 2021-11-02 2021-11-02 Method for simulating laser radar data by multiple depth cameras

Publications (1)

Publication Number Publication Date
CN114137571A true CN114137571A (en) 2022-03-04

Family

ID=80392089

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111289579.4A Pending CN114137571A (en) 2021-11-02 2021-11-02 Method for simulating laser radar data by multiple depth cameras

Country Status (1)

Country Link
CN (1) CN114137571A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination