CN117115426A - Obstacle marking method and device based on 3D and electronic equipment - Google Patents
- Publication number
- CN117115426A (application CN202311070790.6A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- obstacle
- frame
- data
- acquiring
- Prior art date
- Legal status
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/93—Lidar systems specially adapted for specific applications for anti-collision purposes
- G01S17/931—Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/40—Correcting position, velocity or attitude
- G01S19/41—Differential correction, e.g. DGPS [differential GPS]
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/42—Determining position
- G01S19/43—Determining position using carrier phase measurements, e.g. kinematic positioning; using long or short baseline interferometry
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/62—Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30261—Obstacle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The application relates to a 3D-based obstacle marking method and device and electronic equipment, wherein the method comprises the following steps: S1: acquiring time-space synchronized laser radar point cloud data, RTK positioning data and camera pictures; S2: constructing a local point cloud map according to the laser radar point cloud data and the RTK positioning data; S3: acquiring 3D point cloud frames of dynamic and static obstacles based on the local point cloud map, and acquiring image tags and obstacle 2D point cloud frames according to the camera pictures; S4: according to the 3D point cloud frames, the image tags and the obstacle 2D point cloud frames, assigning the corresponding image tag to each 3D point cloud frame by using a preset algorithm. According to the technical scheme, the image tag obtained by the image target detection algorithm is utilized: the 3D point cloud frame is projected onto the 2D image based on the intrinsic and extrinsic parameters of the radar and the camera, and the IOU with the 2D point cloud frame is computed, so that the tag corresponding to the image can be assigned to the corresponding 3D point cloud frame, making the tag more accurate.
Description
Technical Field
The application relates to the technical field of obstacle detection, in particular to a 3D-based obstacle marking method and device and electronic equipment.
Background
At present, with the rise of automatic driving, existing target detection algorithms are gradually shifting from visual 2D detection to visual 3D detection or BEV algorithms, so the requirements on the labeling of visual 3D obstacles are increasingly high. The conventional method is, based on the intrinsic and extrinsic parameters of the on-vehicle driving cameras and a 360-degree mechanical laser radar, to manually box-select point cloud obstacles, then project the selected 3D frames onto the image and mark the obstacle type.
In view of the above problems, those skilled in the art have sought solutions.
Disclosure of Invention
The technical problem to be solved by the application is, in view of the defects of the prior art, to provide a 3D-based obstacle labeling method and device and electronic equipment, so as to automatically assign labels to 3D obstacles.
In order to achieve the above object, the present application is realized by the following technical scheme:
the application provides a 3D-based obstacle marking method, which comprises the following steps:
s1: acquiring time-space synchronous laser radar point cloud data, RTK positioning data and camera pictures;
s2: constructing a local point cloud map according to the laser radar point cloud data and the RTK positioning data;
s3: acquiring a 3D point cloud frame of the dynamic and static obstacle based on the local point cloud map, and acquiring an image tag and an obstacle 2D point cloud frame according to the camera picture;
s4: and according to the 3D point cloud frame, the image tag and the obstacle 2D point cloud frame, a corresponding image tag is given to the 3D point cloud frame by using a preset algorithm.
Optionally, step S2 includes:
fusing the laser radar point cloud data and the RTK positioning data to determine a new point cloud pose;
and superposing and mapping multi-frame point clouds according to the new point cloud pose to determine the local point cloud map.
Optionally, the fusing the laser radar point cloud data and the RTK positioning data to determine a new point cloud pose includes:
according to the laser radar point cloud data, calculating and determining the relative pose between the point cloud frames by utilizing an interframe matching algorithm;
and fusing the relative pose between the point cloud frames and the RTK positioning data by using a Kalman algorithm to determine the new point cloud pose, wherein the RTK positioning data is absolute RTK pose positioning data.
Optionally, step S3 includes:
acquiring point cloud frame information of a dynamic obstacle based on the local point cloud map;
filtering the dynamic obstacle based on the local point cloud map to obtain a static map without the dynamic obstacle;
acquiring point cloud frame information of a static obstacle according to the static map;
and determining the 3D point cloud frame of the dynamic and static obstacle according to the point cloud frame information of the dynamic obstacle and the point cloud frame information of the static obstacle.
Optionally, the acquiring the point cloud frame information of the dynamic obstacle based on the local point cloud map includes:
acquiring a basic point cloud frame of a laser point cloud based on the local point cloud map;
and performing outlier filtering and multi-frame optimization processing on the basic point cloud frame by using a target tracking algorithm to obtain the point cloud frame information of the dynamic obstacle.
Optionally, the preset algorithm includes an IOU algorithm; the step S4 includes:
projecting the 3D point cloud frame to the camera picture, and acquiring IOU values of the 3D point cloud frame and the obstacle 2D point cloud frame by using the IOU algorithm;
if the IOU value meets the preset condition, the corresponding image label is successfully given to the 3D point cloud frame.
Optionally, before step S1, the method includes:
uniformly calibrating a laser radar, an RTK (real-time kinematic) arranged at the top end of a vehicle and a preset number of cameras arranged around the vehicle to the center of a rear axle of the vehicle;
setting the directions of the X, Y and Z axes of the vehicle body coordinate system with the center of the rear axle of the vehicle as the coordinate origin, for spatially synchronizing the laser radar point cloud data, the RTK positioning data and the camera pictures;
and synchronizing the time of the laser radar, the RTK and the preset number of cameras, for time-synchronizing the laser radar point cloud data, the RTK positioning data and the camera pictures.
The application also provides an obstacle marking device, which comprises: the device comprises an acquisition module, a construction module and a labeling module;
the acquisition module is used for acquiring time-space synchronous laser radar point cloud data, RTK positioning data and camera pictures;
the construction module is used for constructing a local point cloud map according to the laser radar point cloud data and the RTK positioning data;
the acquisition module is also used for acquiring a 3D point cloud frame of the dynamic and static obstacle based on the local point cloud map and acquiring an image tag and an obstacle 2D point cloud frame according to the camera picture;
the labeling module is used for giving corresponding image labels to the 3D point cloud frames by utilizing a preset algorithm according to the 3D point cloud frames, the image labels and the obstacle 2D point cloud frames.
Optionally, the construction module is further configured to fuse the laser radar point cloud data and the RTK positioning data to determine a new point cloud pose, and superimpose and map multiple frame point clouds according to the new point cloud pose to determine the local point cloud map.
The application also provides electronic equipment comprising the obstacle marking device.
According to the technical scheme, the image tag obtained by the image target detection algorithm is utilized, the 3D point cloud frame is projected onto the 2D image based on the radar and the internal and external parameters of the camera, and the 2D point cloud frame obtained by target detection is used as the IOU, so that the tag corresponding to the image can be given to the corresponding 3D point cloud frame, and the tag is more accurate.
The foregoing and other objects, features and advantages of the application will be apparent from the following more particular description of preferred embodiments, as illustrated in the accompanying drawings.
Drawings
The application is described in detail below with reference to the drawings and the detailed description.
FIG. 1 is a flow chart of a method for marking an obstacle according to an embodiment of the application;
FIG. 2 is a vehicle coordinate system diagram provided by an embodiment of the present application;
fig. 3 is a schematic structural diagram of an obstacle marking device according to an embodiment of the application.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Fig. 1 is a flow chart of an obstacle marking method according to an embodiment of the application, referring to fig. 1, the application provides a 3D-based obstacle marking method, which includes:
s1: and acquiring time-space synchronous laser radar point cloud data, RTK positioning data and camera pictures.
Time-space synchronization means that the laser radar point cloud data, the RTK positioning data and the camera pictures are synchronized both in time and in space. Time synchronization can be achieved by unifying the clocks of the devices that acquire the laser radar point cloud data, the RTK positioning data and the camera pictures; spatial synchronization can be achieved by calibrating those devices into a unified coordinate system.
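As an illustrative sketch only (not part of the patent), software-side time synchronization can be approximated by nearest-timestamp matching between the sensor streams; the function name and the tolerance parameter below are assumptions for illustration:

```python
from bisect import bisect_left

def match_nearest(target_ts, candidate_ts, max_dt=0.05):
    """For each target timestamp, return the index of the nearest candidate
    timestamp, or None if the gap exceeds max_dt seconds."""
    matches = []
    for t in target_ts:
        i = bisect_left(candidate_ts, t)
        best = None
        for j in (i - 1, i):  # nearest candidate is one of the two neighbours
            if 0 <= j < len(candidate_ts):
                if best is None or abs(candidate_ts[j] - t) < abs(candidate_ts[best] - t):
                    best = j
        if best is not None and abs(candidate_ts[best] - t) <= max_dt:
            matches.append(best)
        else:
            matches.append(None)
    return matches

# Pair each lidar frame with the closest RTK fix (timestamps in seconds).
lidar_ts = [0.00, 0.10, 0.20]
rtk_ts = [0.01, 0.09, 0.21]
print(match_nearest(lidar_ts, rtk_ts))  # [0, 1, 2]
```

In practice, hardware triggering (as in the calibration step before S1) is more accurate; this matching would only compensate for residual offsets.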
In this embodiment, the lidar outputs the laser radar point cloud data. The camera pictures are obtained from 7 driving cameras mounted at the front, rear, left and right of the vehicle; the cameras also collect environmental information around the vehicle, so that the surroundings can be understood more comprehensively. In one embodiment, the cameras may be 4 fisheye cameras mounted at the front, rear, left and right of the vehicle.
In this embodiment, the RTK positioning data is acquired by an RTK positioning device. RTK (Real-Time Kinematic) carrier-phase differential positioning is a method that processes the carrier-phase observations of two stations in real time: the carrier phases acquired at a reference station are sent to the user receiver, which forms the difference and solves for the coordinates.
S2: and constructing a local point cloud map according to the laser radar point cloud data and the RTK positioning data.
A multi-frame local laser point cloud map is constructed from the laser point cloud data and the RTK positioning data; the information presented by the resulting map is closer to the real scene, which can improve the accuracy of subsequent processing.
S3: acquiring a 3D point cloud frame of the dynamic and static obstacle based on the local point cloud map, and acquiring an image tag and an obstacle 2D point cloud frame according to the camera picture;
s4: and according to the 3D point cloud frame, the image tag and the obstacle 2D point cloud frame, a corresponding image tag is given to the 3D point cloud frame by using a preset algorithm.
Optionally, step S2 includes:
fusing the laser radar point cloud data and the RTK positioning data to determine a new point cloud pose;
and superposing and mapping the multi-frame point clouds according to the pose of the new point cloud to determine a local point cloud map.
In one embodiment, the point cloud mapping is performed by fusing sensors such as a camera, a laser radar and an IMU with SLAM. An IMU (Inertial Measurement Unit) is a sensor combining an accelerometer and a gyroscope, used to measure acceleration and angular velocity so as to represent motion and its intensity; in this implementation it is used for point cloud mapping.
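The superposition of multi-frame point clouds into a local map can be sketched as follows (an illustration under assumed (R, t) poses per frame, using numpy; not the patent's implementation):

```python
import numpy as np

def build_local_map(frames, poses):
    """Transform each N_i x 3 point cloud frame into the map frame using its
    fused (R, t) pose, then concatenate all frames into one local map."""
    world_clouds = []
    for pts, (R, t) in zip(frames, poses):
        world_clouds.append(pts @ R.T + t)  # rotate then translate each point
    return np.vstack(world_clouds)

# Two single-point "frames": identity pose, then a 1 m forward translation.
frames = [np.array([[1.0, 0.0, 0.0]]), np.array([[1.0, 0.0, 0.0]])]
poses = [(np.eye(3), np.zeros(3)), (np.eye(3), np.array([1.0, 0.0, 0.0]))]
local_map = build_local_map(frames, poses)
print(local_map)  # the second frame's point lands at x = 2 in the map frame
```

The accumulated map gives each obstacle more points than any single frame, which is the accuracy benefit described above.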
Optionally, fusing the laser radar point cloud data and the RTK positioning data to determine a new point cloud pose, including:
according to the laser radar point cloud data, calculating and determining the relative pose between the point cloud frames by utilizing an interframe matching algorithm;
and fusing the relative pose among the point cloud frames and RTK positioning data by using a Kalman algorithm to determine the pose of the new point cloud, wherein the RTK positioning data is absolute RTK pose positioning data.
Because the locally created point cloud map contains the superimposed information of multiple point cloud frames, a single obstacle is covered by more points, and the target obstacle can be detected better by a point cloud target detection algorithm.
Optionally, step S3 includes:
and acquiring the point cloud frame information of the dynamic obstacle based on the local point cloud map.
Filtering dynamic obstacles based on the local point cloud map to obtain a static map without dynamic obstacles;
acquiring point cloud frame information of a static obstacle according to a static map;
and determining the 3D point cloud frame of the dynamic and static obstacle according to the point cloud frame information of the dynamic obstacle and the point cloud frame information of the static obstacle.
Optionally, acquiring the point cloud frame information of the dynamic obstacle based on the local point cloud map includes:
acquiring a basic point cloud frame of a laser point cloud based on a local point cloud map;
and performing outlier filtering and multi-frame optimization processing on the basic point cloud frame by using a target tracking algorithm to obtain the point cloud frame information of the dynamic obstacle. In one embodiment, the point cloud frame information is the center point coordinates, length, width and height, and yaw/pitch/roll rotation angles of the obstacle; yaw refers to heading, i.e., the object is rotated about the Y-axis (localrotation Y); pitch refers to pitch, i.e., the rotation of an object about the X-axis (localrotation); roll refers to rolling, rotating an object about the Z-axis (localrotation Z).
The dynamic obstacle information is fused with the static map and the dynamic obstacles are filtered out to obtain a point cloud map without dynamic obstacles; static obstacle information and static obstacle point cloud frames are then obtained through a large point cloud detection model, and the dynamic and static point cloud frames are combined to obtain more accurate obstacle point cloud frames.
Optionally, the preset algorithm includes an IOU algorithm; the step S4 includes:
projecting the 3D point cloud frame to a camera picture, and acquiring IOU values of the 3D point cloud frame and the obstacle 2D point cloud frame by using an IOU algorithm;
if the IOU value meets the preset condition, the corresponding image label is successfully given to the 3D point cloud frame.
In this embodiment, the projection principle is similar to a camera model: the 3D point cloud frame output by laser point cloud detection is projected into the image coordinate system through the following camera model to obtain the corresponding 2D point cloud frame:

$$Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \underbrace{\begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}}_{\text{camera intrinsics}} \underbrace{\begin{bmatrix} R & T \end{bmatrix}}_{\text{camera extrinsics}} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix}$$

In the above formula, u and v are the corresponding pixel coordinates in the image coordinate system (the coordinate unit is the pixel); u_0 and v_0 are the pixel coordinates of the image center; f is the camera focal length; dx and dy are the physical lengths corresponding to one pixel in the x and y directions respectively (how many mm one pixel corresponds to), so that f_x = f/dx and f_y = f/dy; R and T are the external parameters of the camera, i.e. the rotation and translation matrix of the camera coordinate system with respect to the vehicle body coordinate system; and X_W, Y_W, Z_W are the coordinates of the corresponding real point in the vehicle body coordinate system.
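A numeric sketch of this pinhole projection (the intrinsic and extrinsic values are assumptions for illustration only):

```python
import numpy as np

def project_to_pixels(pts_w, K, R, T):
    """Project N x 3 vehicle-body-frame points to pixel coordinates via the
    pinhole model: s [u v 1]^T = K (R X + T)."""
    cam = pts_w @ R.T + T            # body frame -> camera frame
    uvw = cam @ K.T                  # apply the intrinsic matrix
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide by depth

K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])  # assumed f_x, f_y, u_0, v_0
R, T = np.eye(3), np.zeros(3)          # assumed identity extrinsics
pt = np.array([[1.0, 0.0, 10.0]])      # 1 m right of axis, 10 m deep
print(project_to_pixels(pt, K, R, T))  # [[720. 360.]]
```

The eight corners of a 3D point cloud frame would be projected this way and their pixel bounding rectangle taken as the projected 2D frame.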
IOU (Intersection over Union), i.e. the overlap ratio, is the ratio of the area of the overlap of two regions to the area of their union, and measures the degree of overlap of the two regions. In this embodiment, the 7-way image data collected by the vehicle is passed through a visual target detection algorithm to obtain 2D image point cloud frames and image tags; meanwhile, based on the intrinsic and extrinsic parameters of the camera and the radar, the dynamic and static 3D point cloud frames are projected onto the image, and the IOU with the image 2D point cloud frames is computed so that the image tag can be assigned to the corresponding dynamic or static 3D point cloud frame. Specifically, the intersection of a projected dynamic/static 3D point cloud frame and a 2D point cloud frame is compared with their union to obtain the IOU value. The preset condition of the application is that the IOU is greater than 0.5: when the IOU value exceeds 0.5, the image tag can be reliably assigned to the corresponding dynamic or static 3D point cloud frame.
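The IOU test between a projected 3D frame (taken as an axis-aligned rectangle) and an image 2D frame can be sketched as follows (the [x1, y1, x2, y2] box format is an assumption for illustration):

```python
def iou(a, b):
    """IOU of two axis-aligned boxes in [x1, y1, x2, y2] format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)  # overlap area
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Assign the image tag only when the IOU exceeds the 0.5 threshold.
projected_3d = [0, 0, 10, 10]   # projected 3D point cloud frame
image_2d = [1, 1, 11, 11]       # 2D frame from visual target detection
score = iou(projected_3d, image_2d)
print(score > 0.5)  # True: 81 / (100 + 100 - 81) ≈ 0.68
```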
After the image labels are successfully assigned to the corresponding dynamic and static 3D point cloud frames, the frames and labels are manually corrected to reduce the error rate, yielding the final automatic labeling result.
In an embodiment, the automatic labeling result is fed back to the point cloud detection model and the visual target detection model, so that the model detection precision can be further improved.
Optionally, before step S1, the method includes:
uniformly calibrating a laser radar, an RTK (real-time kinematic) arranged at the top end of a vehicle and a preset number of cameras arranged around the vehicle to the center of a rear axle of the vehicle;
setting the directions of the X, Y and Z axes of the vehicle body coordinate system with the center of the rear axle of the vehicle as the coordinate origin, for spatially synchronizing the laser radar point cloud data, the RTK positioning data and the camera pictures;
and synchronizing the time of the laser radar, the RTK and the preset number of cameras, for time-synchronizing the laser radar point cloud data, the RTK positioning data and the camera pictures.
The on-vehicle hardware involved in this scheme is a laser radar, an RTK device, and 7 driving cameras mounted at the front, rear, left and right of the vehicle. The laser radar and the 7 driving cameras are uniformly calibrated to the center of the rear axle of the vehicle and time-synchronized.
Fig. 2 is a diagram of the vehicle coordinate system provided in an embodiment of the present application. Referring to fig. 2, the directions of the X, Y and Z axes of the vehicle body coordinate system are set with the center of the rear axle as the coordinate origin; the XYZ axes are defined as front-left-up, with the origin at the center of the rear axle, which facilitates spatial synchronization.
In the present embodiment, the vehicle coordinate system is defined as follows: the origin is the projection point O of the center of the rear axle onto the ground; the forward direction of the vehicle is the positive X axis, the left side of the vehicle is the positive Y axis, and the vertical upward direction is the positive Z axis. This vehicle body coordinate system follows the ISO international standard. The specific definition is shown in Table 1:
fig. 3 is a schematic structural diagram of an obstacle marking device according to an embodiment of the present application, referring to fig. 3, the present application further provides an obstacle marking device, including: an acquisition module 10, a construction module 20 and a labeling module 30.
The acquisition module 10 is used for acquiring time-space synchronous laser radar point cloud data, RTK positioning data and camera pictures; the construction module 20 is used for constructing a local point cloud map according to the laser radar point cloud data and the RTK positioning data; the acquisition module 10 is further configured to acquire a 3D point cloud frame of the dynamic and static obstacle based on the local point cloud map, and acquire an image tag and an obstacle 2D point cloud frame according to the camera picture; the labeling module 30 is configured to assign corresponding image labels to the 3D point cloud frame by using a preset algorithm according to the 3D point cloud frame, the image labels, and the obstacle 2D point cloud frame.
Optionally, the construction module 20 is further configured to fuse the laser radar point cloud data and the RTK positioning data to determine a new point cloud pose, and superimpose and map multiple frame point clouds according to the new point cloud pose to determine the local point cloud map.
The application also provides electronic equipment comprising the obstacle marking device.
The technical scheme of the application has the following advantages: 1. The point clouds labeled by this scheme come from point cloud target detection on a local point cloud map after multi-frame fusion; compared with single-frame point cloud information, the point cloud is more complete and the detection accuracy higher. 2. The large point cloud detection model can obtain dynamic obstacle information with a Transformer algorithm based on the space-time data of multiple preceding and following point cloud frames, and simultaneously corrects the dynamic obstacles with tracking and post-processing algorithms, so the detection precision of dynamic obstacles is higher. 3. Because point clouds alone cannot finely distinguish the label type of a target obstacle, the image labels obtained by the image target detection algorithm are fully utilized: the 3D point cloud frames are projected onto the 2D image based on the intrinsic and extrinsic parameters of the radar and camera, the IOU with the 2D point cloud frames obtained by target detection is computed, and the label corresponding to the image can be assigned to the corresponding 3D point cloud frame, making the labels more accurate. 4. The obtained automatic labeling results are fed back to the large point cloud detection model and the large visual target detection model, improving model precision and further improving the accuracy and efficiency of automatic labeling.
The general principles, features and advantages of the present application are illustrated and described above. Those skilled in the art should understand that the present application is not limited by the foregoing embodiments; the embodiments and descriptions herein merely illustrate its principles, and any modifications, equivalents or improvements made within the spirit and principles of the present application are intended to fall within its scope.
Claims (10)
1. A 3D-based obstacle marking method, comprising:
S1: acquiring time-space synchronized laser radar point cloud data, RTK positioning data and camera pictures;
S2: constructing a local point cloud map according to the laser radar point cloud data and the RTK positioning data;
S3: acquiring a 3D point cloud frame of the dynamic and static obstacles based on the local point cloud map, and acquiring an image tag and an obstacle 2D point cloud frame according to the camera pictures;
S4: according to the 3D point cloud frame, the image tag and the obstacle 2D point cloud frame, assigning a corresponding image tag to the 3D point cloud frame by using a preset algorithm.
2. The method of claim 1, wherein step S2 comprises:
fusing the laser radar point cloud data and the RTK positioning data to determine a new point cloud pose;
and superposing and mapping multi-frame point clouds according to the new point cloud pose to determine the local point cloud map.
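The map-construction step of claim 2 — transforming each frame by its fused pose and superimposing the results — might look like the following simplified sketch. The function name and data layout are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def build_local_map(frames, poses):
    """Superimpose multiple point cloud frames into one local map.

    frames: list of (N_i, 3) arrays of points, each in its sensor frame.
    poses:  list of 4x4 homogeneous transforms, sensor frame -> map frame
            (e.g. the fused laser radar / RTK poses).
    """
    merged = []
    for pts, T in zip(frames, poses):
        homo = np.hstack([pts, np.ones((len(pts), 1))])   # Nx4 homogeneous points
        merged.append((T @ homo.T).T[:, :3])              # transform into map frame
    return np.vstack(merged)
```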
3. The method of claim 2, wherein the fusing the lidar point cloud data and the RTK positioning data to determine a new point cloud pose comprises:
according to the laser radar point cloud data, calculating and determining the relative pose between the point cloud frames by utilizing an interframe matching algorithm;
and fusing the relative pose between the point cloud frames and the RTK positioning data by using a Kalman algorithm to determine the new point cloud pose, wherein the RTK positioning data is absolute RTK pose positioning data.
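The fusion in claim 3 blends a relative pose increment from inter-frame matching with an absolute RTK fix. A minimal sketch, deliberately simplified to one scalar state dimension (the patent's actual filter state and noise models are not specified here):

```python
def kalman_fuse(x, P, delta, Q, z, R):
    """One predict/update cycle of a 1-D Kalman filter.

    Predict with the relative motion `delta` from laser radar inter-frame
    matching (process noise Q), then correct with the absolute RTK
    measurement `z` (measurement noise R). Returns the fused pose
    estimate and its variance.
    """
    # Predict: apply the relative pose increment
    x_pred = x + delta
    P_pred = P + Q
    # Update: blend in the absolute RTK measurement
    K = P_pred / (P_pred + R)            # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new
```

The update always shrinks the variance, which is why fusing absolute RTK positioning bounds the drift that pure inter-frame odometry would accumulate.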
4. The method of claim 1, wherein step S3 comprises:
acquiring point cloud frame information of a dynamic obstacle based on the local point cloud map;
filtering the dynamic obstacle based on the local point cloud map to obtain a static map without the dynamic obstacle;
acquiring point cloud frame information of a static obstacle according to the static map;
and determining the 3D point cloud frame of the dynamic and static obstacle according to the point cloud frame information of the dynamic obstacle and the point cloud frame information of the static obstacle.
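The dynamic-obstacle filtering step in claim 4 can be sketched by deleting map points that fall inside detected dynamic-obstacle boxes. Axis-aligned boxes are an assumed simplification of the oriented 3D boxes a real detector would output:

```python
import numpy as np

def remove_dynamic_points(map_points, dynamic_boxes):
    """Filter dynamic-obstacle points out of the local map, leaving a
    static map. Each box is an axis-aligned (min_xyz, max_xyz) pair."""
    keep = np.ones(len(map_points), dtype=bool)
    for lo, hi in dynamic_boxes:
        inside = np.all((map_points >= lo) & (map_points <= hi), axis=1)
        keep &= ~inside                  # drop points inside any dynamic box
    return map_points[keep]
```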
5. The method of claim 4, wherein the obtaining point cloud frame information of a dynamic obstacle based on the local point cloud map comprises:
acquiring a basic point cloud frame of a laser point cloud based on the local point cloud map;
and performing outlier filtering and multi-frame optimization processing on the basic point cloud frame by using a target tracking algorithm to obtain the point cloud frame information of the dynamic obstacle.
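A stand-in for the outlier filtering and multi-frame optimization of claim 5: reject detections whose tracked box center jumps implausibly between frames, and smooth the survivors with a short running average. The jump threshold and window size are illustrative assumptions:

```python
def smooth_track(centers, max_jump=2.0):
    """Reject outlier detections and smooth a tracked obstacle's
    per-frame box centers (1-D for brevity) with a running average."""
    filtered, smoothed = [], []
    for c in centers:
        if filtered and abs(c - filtered[-1]) > max_jump:
            continue                      # outlier: physically implausible jump
        filtered.append(c)
        window = filtered[-3:]            # average over the last few frames
        smoothed.append(sum(window) / len(window))
    return smoothed
```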
6. The method of claim 1, wherein the preset algorithm comprises an IOU algorithm;
the step S4 includes:
projecting the 3D point cloud frame onto the camera picture, and acquiring the IOU value of the 3D point cloud frame and the obstacle 2D point cloud frame by using the IOU algorithm;
if the IOU value meets the preset condition, the corresponding image label is successfully given to the 3D point cloud frame.
7. The method according to any one of claims 1-6, comprising, prior to step S1:
uniformly calibrating a laser radar and an RTK (real-time kinematic) receiver mounted on the top of the vehicle, together with a preset number of cameras arranged around the vehicle, to the center of the rear axle of the vehicle;
taking the center of the rear axle of the vehicle as the coordinate origin, setting the directions of the X axis, Y axis and Z axis of the vehicle body coordinate system, which are used for spatially synchronizing the laser radar point cloud data, the RTK positioning data and the camera pictures;
and time-synchronizing the laser radar, the RTK and the preset number of cameras, so as to temporally synchronize the laser radar point cloud data, the RTK positioning data and the camera pictures.
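One common software scheme for the time synchronization described in claim 7 is nearest-timestamp pairing with a tolerance; the sketch below assumes sorted per-sensor timestamp lists and is not the patent's stated mechanism:

```python
import bisect

def nearest_sync(lidar_stamps, cam_stamps, tol=0.05):
    """Pair each laser radar frame with the nearest camera frame in time,
    dropping pairs whose offset exceeds `tol` seconds.

    Both stamp lists must be sorted ascending. Returns (lidar_idx, cam_idx)
    pairs for frames considered time-synchronized.
    """
    pairs = []
    for i, t in enumerate(lidar_stamps):
        j = bisect.bisect_left(cam_stamps, t)
        # candidates: the camera stamps just before and just after t
        cands = [k for k in (j - 1, j) if 0 <= k < len(cam_stamps)]
        if not cands:
            continue
        best = min(cands, key=lambda k: abs(cam_stamps[k] - t))
        if abs(cam_stamps[best] - t) <= tol:
            pairs.append((i, best))
    return pairs
```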
8. An obstacle marking device, comprising: an acquisition module, a construction module and a labeling module;
the acquisition module is used for acquiring time-space synchronized laser radar point cloud data, RTK positioning data and camera pictures;
the construction module is used for constructing a local point cloud map according to the laser radar point cloud data and the RTK positioning data;
the acquisition module is also used for acquiring a 3D point cloud frame of the dynamic and static obstacle based on the local point cloud map and acquiring an image tag and an obstacle 2D point cloud frame according to the camera picture;
the labeling module is used for giving corresponding image labels to the 3D point cloud frames by utilizing a preset algorithm according to the 3D point cloud frames, the image labels and the obstacle 2D point cloud frames.
9. The apparatus of claim 8, wherein the construction module is further configured to fuse the laser radar point cloud data and the RTK positioning data to determine a new point cloud pose, and to superimpose and map multiple point cloud frames according to the new point cloud pose to determine the local point cloud map.
10. An electronic device comprising the obstacle marking apparatus as claimed in any one of claims 8 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311070790.6A CN117115426A (en) | 2023-08-23 | 2023-08-23 | Obstacle marking method and device based on 3D and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117115426A true CN117115426A (en) | 2023-11-24 |
Family
ID=88797842
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311070790.6A Pending CN117115426A (en) | 2023-08-23 | 2023-08-23 | Obstacle marking method and device based on 3D and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117115426A (en) |
- 2023-08-23: CN application CN202311070790.6A filed; publication CN117115426A; status: Pending
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||