CN116091533A - Laser radar target demonstration and extraction method in Qt development environment - Google Patents

Laser radar target demonstration and extraction method in Qt development environment

Info

Publication number
CN116091533A
CN116091533A (application CN202310002862.7A)
Authority
CN
China
Prior art keywords
target
data
point cloud
frame
blist
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310002862.7A
Other languages
Chinese (zh)
Other versions
CN116091533B (en)
Inventor
郭凯
李文海
孙伟超
吴忠德
张家运
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Naval Aeronautical University
Original Assignee
Naval Aeronautical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Naval Aeronautical University filed Critical Naval Aeronautical University
Priority to CN202310002862.7A priority Critical patent/CN116091533B/en
Priority claimed from CN202310002862.7A external-priority patent/CN116091533B/en
Publication of CN116091533A publication Critical patent/CN116091533A/en
Application granted granted Critical
Publication of CN116091533B publication Critical patent/CN116091533B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention discloses a laser radar target demonstration and extraction method in a Qt development environment, characterized by comprising the following steps: S1, subscribing laser radar point cloud data in ROS by utilizing Qt; S2, dynamically demonstrating color three-dimensional point cloud data by utilizing an OPENGL module in Qt; S3, completing multi-target extraction of single-frame data through a voxel connection method; S4, completing multi-target tracking through inter-frame correlation analysis. With the method, three-dimensional point cloud data can be acquired by subscribing to the messages published by the laser radar sensor in ROS, a three-dimensional color point cloud model is drawn and rendered by utilizing OPENGL, single-frame multi-target segmentation and extraction is then completed using the voxel connection method, and target tracking and real-time speed measurement are realized by comparing the correlation of target voxels between frames. The steps of the method are relatively simple, the use of Qt's built-in DataVisualization module is avoided, the overall computation process is streamlined, and inter-frame target extraction and tracking are facilitated.

Description

Laser radar target demonstration and extraction method in Qt development environment
Technical Field
The invention relates to the field of computer vision, in particular to a laser radar target demonstration and extraction method in a Qt development environment.
Background
Qt is a complete cross-platform C++ graphical user interface application development framework. It has a broad developer base and a good encapsulation mechanism, a highly modular design, a simplified memory management mechanism and a rich API, and can provide users with a development environment that is highly portable, easy to use and fast.
The laser radar technology has the characteristics of good directivity and high measurement precision, can generate a real-time high-resolution 3D point cloud of the surrounding environment by utilizing an active detection technology, and is not influenced by external natural light.
Therefore, how to combine the advantages of the two, so as to complete point cloud data demonstration and target identification more intuitively and smoothly, has become a new topic. The current combination of Qt and the lidar has the following problems:
First, the lidar can publish point cloud data through ROS nodes. Conventionally, acquiring ROS node data with Qt requires installing the ROS Qt Creator plug-in, configuring environment variables, creating a workspace (WorkSpace) and modifying CMakeLists files, which makes the process cumbersome.
Second, regarding the drawing of three-dimensional point cloud images in Qt, the most direct approach is to use the built-in DataVisualization module; however, this module suffers from high CPU occupancy, which causes the point cloud demonstration to stutter, and it cannot represent reflectivity intensity information in pseudo color.
Third, current lidar target extraction mainly includes voxel-based (Voxel) and raw-point-cloud-based methods. Voxel-based target extraction methods mostly require feature abstraction through 3D convolutional neural networks; the computation process is complex and is not conducive to inter-frame target extraction and tracking.
Disclosure of Invention
To overcome the shortcomings of the above technologies, the invention provides a laser radar target demonstration and extraction method in a Qt development environment.
To solve the technical problems, the technical scheme adopted by the invention is a laser radar target demonstration and extraction method in a Qt development environment, comprising the following steps:
S1, subscribing laser radar point cloud data in ROS by utilizing Qt;
S2, dynamically demonstrating color three-dimensional point cloud data by utilizing an OPENGL module in the Qt;
S3, completing multi-target extraction of single frame data through a voxel connection method;
S4, completing multi-target tracking through inter-frame correlation analysis.
Further, the step S1 specifically includes:
S11, installing Qt and ROS Melodic in a Ubuntu desktop operating system;
S12, adding the ROS-dependent dynamic link libraries and their paths in the Qt project file;
S13, creating a subscription node in Qt, wherein the subscription node is used for subscribing to laser radar point cloud data in ROS;
S14, after the subscription node is created, starting the laser radar publisher node, and obtaining the data published by the laser radar in its message format by overriding the static callback function of the subscription node.
Further, step S2 specifically includes:
S21, converting the point cloud data format;
S22, transferring the data out;
S23, mapping single-frame point cloud reflectivity gray-scale data into color data using OPENCV;
S24, rendering the point cloud data using OPENGL;
S25, dynamic updating;
S26, graphic transformation.
Further, the single-frame data in step S3 refers to the data obtained in a single scanning period of the laser radar, and step S3 specifically includes:
S31, establishing voxels;
S32, obtaining background data;
S33, identifying targets;
S34, confirming targets.
Further, the step S4 specifically includes:
S41, recording the center-point position of each target according to the bright-grid array of each target in the current frame;
S42, obtaining the bright-grid array of each target in the next frame and recording the center-point position of each target; performing correlation analysis between the bright-grid arrays of each target in the two consecutive frames, and finding, by traversal, the array in the later frame with the maximum correlation to a given target in the earlier frame;
S43, calculating the spatial distance between the two frames of the same target to obtain the target speed;
S44, setting the next frame as the current frame and, when the following frame arrives, completing the iteration according to the methods of steps S41, S42 and S43, so that each target speed is updated once per laser radar scanning period.
Further, the format conversion in step S21 refers to converting the point cloud data type using a built-in function of the ROS library;
the data in step S22 refers to the point cloud data in the static callback function of step S1;
the single-frame point cloud reflectivity gray-scale data in step S23 refers to the data obtained in a single scanning period of the laser radar;
in step S24, any point p in the point cloud data should include position information (p_x, p_y, p_z) and color information (p_R, p_G, p_B); all the information of the single-frame point cloud is written into a vertex buffer object QOpenGLBuffer;
in step S25, a display duration t_P of the single-frame point cloud in the picture is set; if the interface receives the point cloud at time t_1, the frame point cloud is displayed within the range [t_1, t_1 + t_P]; beyond t_1 + t_P, the frame data is replaced and updated, realizing dynamic display and releasing memory in time;
in step S26, mouse events in Qt are overridden in combination with the camera, view-angle and rotation functions in OPENGL, realizing image rotation by mouse dragging and image scaling by the mouse wheel, and smoothly displaying millions of point cloud points.
Further, the specific process of transferring the data out in step S22 is as follows: a signal-slot connection is established in the static callback function, and the data is passed to an ordinary (non-static) slot function of the class; in that ordinary slot function, a signal connected to the external designer interface class object is emitted, thereby completing the transfer of the data from the static function to the external class object through the signal-slot mechanism.
Further, mapping the single-frame point cloud reflectivity gray-scale data into color data using OPENCV in step S23 includes the following steps:
S231, installing OPENCV in the Ubuntu desktop operating system;
S232, adding the OPENCV-dependent dynamic link libraries to the Qt project file.
Further, step S31 specifically includes: a background sampling time t_s = 5 s is set, and within [0, t_s] only the background point cloud is present; first, the maximum absolute values of the background point cloud coordinates along the X, Y and Z axes are obtained and recorded as x_m, y_m and z_m (in meters); a cuboid completely enclosing all current point clouds can then be established in the spatial rectangular coordinate system, with range [-x_m, x_m], [-y_m, y_m], [-z_m, z_m]; cubic voxels with an edge length of 0.1 m are established, dividing the point cloud space into 20·x_m × 20·y_m × 20·z_m voxels;
the step S32 specifically includes: the number N_s of scan points falling into a voxel during t_s is calculated, and the maximum reflectivity r_max and minimum reflectivity r_min among these N_s points are selected; the background reflectivity interval of the voxel is [r_min, r_max]; similarly, the reflectivity intervals of all voxels in the enclosing cuboid are recorded and can be stored in computer memory as voxel attributes;
the conditions for identifying a target in step S33 are: after background acquisition is completed, when a moving target appears, the laser irradiates the target and produces an echo; single-frame echo data can be judged to be a target when it meets one of the following conditions:
(1) the position p_i(x_i, y_i, z_i) does not belong to any voxel unit; in this case, the range of the enclosing cuboid should be expanded according to the target position coordinates so as to completely contain the target point cloud;
(2) the position p_i(x_i, y_i, z_i) of the target point belongs to a certain voxel, but its reflectivity information r_i is not within the background reflectivity interval corresponding to that voxel;
the step S34 specifically includes: the point cloud information identified from the background may represent multiple targets, so they must be effectively segmented; the segmentation criterion is whether the voxels containing the targets are connected, and multiple targets are extracted based on the "voxel connection method".
Further, the voxel connection method extracts multiple targets by the following specific steps:
S341, for the enclosing cuboid, all voxels containing target point cloud are marked as "bright grids"; the center-point coordinates of each bright grid are stored in a variable of type QVector3D and collected into an object blist of type QList<QVector3D>; as the candidate pool, blist represents the bright-grid sequence in which all target point clouds are located;
S342, any point m_0(x_0, y_0, z_0) in blist is selected; it is the center of voxel M_0; the number of voxels sharing a face with M_0 is 6, and each of its 12 edges is shared with 1 further voxel, so the number of other voxels connected to M_0 is 18; each adjacent voxel is denoted M_0i (i = 0, 1, 2, ..., 17);
S343, according to the relative positional relationship (u_i, v_i, w_i) between M_0i and M_0, the center coordinates of each adjacent voxel are calculated as m_0i(x_0 + u_i, y_0 + v_i, z_0 + w_i);
S344, m_0i is searched for in blist; if found, it is stored in the center-point array blist_0 of target 0, whose data type is QList<QVector3D>; to prevent repeated searches, m_0i must be deleted from blist; in other words, m_0i is moved from the candidate pool blist into the target pool blist_0;
S345, for the first element m_01 in blist_0, its 18 adjacent voxels are searched and their center coordinates obtained, denoted m_01i(x_01 + u_i, y_01 + v_i, z_01 + w_i) (i = 0, 1, 2, ..., 17); if such a coordinate exists in blist, it is stored in blist_0 and m_01i is deleted from blist; in this way every element of blist_0 can be traversed; moreover, blist_0 keeps expanding during the traversal, ensuring that bright grids belonging to the current target are added continuously;
S346, when the traversal ends, i.e. the number of elements in blist_0 no longer increases, the layer-by-layer bright-grid selection process centered on voxel M_0 ends; blist_0 then constitutes all the bright grids of target 0;
S347, the number of elements remaining in blist is judged; if it is 0, only one target exists and its bright grids are those in blist_0; if it is greater than 0, there are further targets; in that case, following the idea of steps S342 to S347, the multi-target sequences blist_1, blist_2, ..., blist_n are extracted from blist layer by layer until the number of elements in the candidate pool blist is 0, which indicates that extraction of all targets is complete.
The invention discloses a laser radar target demonstration and extraction method in a Qt development environment, which can subscribe to the messages published by the laser radar sensor in ROS to obtain three-dimensional point cloud data, draw and render a three-dimensional color point cloud model using OPENGL, then complete single-frame multi-target segmentation and extraction with the voxel connection method, and realize target tracking and real-time speed measurement by comparing the correlation of target voxels between frames. The steps of the method are relatively simple, the use of Qt's built-in DataVisualization module is avoided, the overall computation process is streamlined, and inter-frame target extraction and tracking are facilitated.
Drawings
Fig. 1 is a general flow chart of the present invention.
Fig. 2 is a flowchart of single frame object extraction in the present invention.
FIG. 3 is a flow chart of the "voxel connection" target segmentation method in the present invention.
Detailed Description
The invention will be described in further detail with reference to the drawings and the detailed description.
As shown in FIG. 1, the method for demonstrating and extracting the laser radar target in the Qt development environment comprises the following implementation processes:
S1, subscribing laser radar point cloud data in ROS by utilizing Qt;
S2, dynamically demonstrating color three-dimensional point cloud data by utilizing an OPENGL module in the Qt;
S3, completing multi-target extraction of single frame data through a voxel connection method;
S4, completing multi-target tracking through inter-frame correlation analysis.
The step S1 specifically comprises the following steps:
S11, in a Ubuntu 18.04 system, Qt 5.9.9 and ROS Melodic are installed;
S12, the following ROS-dependent dynamic link libraries and their paths are added to the Qt project (.pro) file:
INCLUDEPATH += /opt/ros/melodic/include
DEPENDPATH += /opt/ros/melodic/lib
LIBS += -L$$DEPENDPATH -lrosbag \
        -lroscpp \
        -lroslib \
        -lroslz4 \
        -lrostime \
        -lroscpp_serialization \
        -lrospack \
        -lcpp_common \
        -lrosbag_storage \
        -lrosconsole \
        -lxmlrpcpp \
        -lrosconsole_backend_interface \
        -lrosconsole_log4cxx
S13, a subscription node class QNodeSub is created in Qt for subscribing to the laser radar data in ROS; the class inherits from the Qt thread class QThread; the main program of the class includes the header file #include <ros/ros.h>, creates a node handle ros::NodeHandle node_sub, and defines a subscriber variable ros::Subscriber chatter_sub = node_sub.subscribe("/livox/lidar", 1000, QNodeSub::chatterCallback), which completes the creation of the subscriber (Subscriber) node object chatter_sub;
S14, after the subscription node is established, the laser radar publisher (Publisher) node is started, and the sensor_msgs::PointCloud2-format data published by the laser radar sensor is obtained by overriding the static callback function void QNodeSub::chatterCallback(const sensor_msgs::PointCloud2& msg) of the subscription node.
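As a minimal sketch of steps S13 and S14, the subscriber class might look as follows; the topic name, queue size and callback signature are taken from the text above, while the run() structure, the handle name node_sub and the assumption that ros::init has already been called at program start are illustrative:

// Sketch of the QNodeSub subscriber class from steps S13/S14 (illustrative).
#include <QThread>
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

class QNodeSub : public QThread
{
public:
    // Static callback overridden as in S14; receives each published frame.
    static void chatterCallback(const sensor_msgs::PointCloud2& msg);

protected:
    void run() override
    {
        // ros::init(...) is assumed to have been called at program start.
        ros::NodeHandle node_sub;                          // node handle (assumed name)
        ros::Subscriber chatter_sub =
            node_sub.subscribe("/livox/lidar", 1000,       // lidar topic and queue size
                               &QNodeSub::chatterCallback);
        ros::spin();                                       // dispatch incoming messages
    }
};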
The step S2 specifically comprises the following steps:
s21, converting a point cloud data format;
s22, data are transferred out;
s23, mapping single-frame point cloud reflectivity gray scale data into color data by utilizing OPENCV;
s24, rendering the point cloud data by using OPENGL;
s25, dynamically updating;
s26, graphic transformation.
The format conversion in step S21 refers to converting the sensor_msgs::PointCloud2-type point cloud data into sensor_msgs::PointCloud-type data using the built-in ROS library function sensor_msgs::convertPointCloud2ToPointCloud.
The data in step S22 refers to the PointCloud-type point cloud variable h inside the static callback function of step S14;
the specific process of step S22 is: a signal-slot connection is established in the callback function, and h is passed to an ordinary (non-static) slot function of the class; in that ordinary slot function, a signal connected to the external designer interface class object is emitted with h as its parameter, thereby completing the transfer of the data from the static function to the external class object through the signal-slot mechanism.
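One possible sketch of this hand-off is shown below; the static instance pointer self is an assumption (the patent does not state how the static callback reaches an instance), and for cross-thread queued delivery the sensor_msgs::PointCloud type would additionally need to be registered with qRegisterMetaType:

// Illustrative relay of the point cloud h from the static ROS callback to the
// Qt interface object via the signal-slot mechanism (S21/S22).
#include <QThread>
#include <ros/ros.h>
#include <sensor_msgs/PointCloud.h>
#include <sensor_msgs/PointCloud2.h>
#include <sensor_msgs/point_cloud_conversion.h>

class QNodeSub : public QThread
{
    Q_OBJECT
public:
    QNodeSub() { self = this; }                  // assumed single instance
    static QNodeSub* self;

    static void chatterCallback(const sensor_msgs::PointCloud2& msg)
    {
        sensor_msgs::PointCloud h;
        sensor_msgs::convertPointCloud2ToPointCloud(msg, h);   // S21 format conversion
        if (self)
            self->relayCloud(h);                 // hand h to the ordinary slot
    }

public slots:
    void relayCloud(const sensor_msgs::PointCloud& h)
    {
        emit cloudReady(h);                      // connected to the designer interface object
    }

signals:
    void cloudReady(const sensor_msgs::PointCloud& h);
};

QNodeSub* QNodeSub::self = nullptr;              // single shared instance pointer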
The use of OPENCV in step S23 can be accomplished as follows:
S231, installing OPENCV 4.5.4 in Ubuntu 18.04;
S232, adding the OPENCV-dependent dynamic link libraries to the Qt project file:
INCLUDEPATH += /usr/local/include \
               /usr/local/include/opencv4 \
               /usr/local/include/opencv4/opencv2
LIBS += /usr/local/lib/libopencv_calib3d.so.4.5.4 \
        /usr/local/lib/libopencv_core.so.4.5.4 \
        /usr/local/lib/libopencv_highgui.so.4.5.4 \
        /usr/local/lib/libopencv_imgcodecs.so.4.5.4 \
        /usr/local/lib/libopencv_imgproc.so.4.5.4 \
        /usr/local/lib/libopencv_dnn.so.4.5.4
Mapping the single-frame point cloud reflectivity gray-scale data into color data in step S23 specifically includes: an image container class (cv::Mat) object mapt is created with format CV_8UC1 and an image matrix size of 1 × N, where N is the single-frame point cloud length, i.e.: cv::Mat mapt = cv::Mat::zeros(1, N, CV_8UC1); the reflectivity gray-scale data in the single-frame PointCloud-format point cloud array h is then written into mapt.
(The code listing for this filling step is reproduced only as an image in the original publication.)
A cv::Mat object mapc is then defined, and the gray-scale map mapt is mapped into a JET pseudo-color map mapc using cv::applyColorMap(mapt, mapc, cv::COLORMAP_JET); for the i-th pixel in mapc, its R, G and B values correspond to mapc.at<Vec3b>(0, i)[2], mapc.at<Vec3b>(0, i)[1] and mapc.at<Vec3b>(0, i)[0], respectively.
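A minimal sketch of this mapping, standing in for the code listing shown only as an image above, might look as follows; it assumes the reflectivity values sit in channel 0 of the sensor_msgs::PointCloud message and already fit the 0-255 range of CV_8UC1:

// Illustrative gray-to-JET pseudo-color mapping for step S23.
#include <opencv2/opencv.hpp>
#include <sensor_msgs/PointCloud.h>

void reflectivityToColor(const sensor_msgs::PointCloud& h, cv::Mat& mapc)
{
    if (h.channels.empty())
        return;                                            // no reflectivity channel
    const int N = static_cast<int>(h.points.size());
    cv::Mat mapt = cv::Mat::zeros(1, N, CV_8UC1);          // 1 x N gray-scale image

    for (int i = 0; i < N; ++i)                            // write per-point reflectivity
        mapt.at<uchar>(0, i) = static_cast<uchar>(h.channels[0].values[i]);

    cv::applyColorMap(mapt, mapc, cv::COLORMAP_JET);       // 1 x N BGR pseudo-color map

    // For point i: R = mapc.at<cv::Vec3b>(0, i)[2],
    //              G = mapc.at<cv::Vec3b>(0, i)[1],
    //              B = mapc.at<cv::Vec3b>(0, i)[0]
}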
Rendering the point cloud data in step S24 specifically includes: any point p in the point cloud carries position information (p_x, p_y, p_z) and color information (p_R, p_G, p_B); if the single-frame point cloud length is N, the array representing the single-frame point cloud has dimension N × 6; this array is written into a vertex buffer object (QOpenGLBuffer) _VBO, and a vertex shader and a fragment shader are written in the GLSL language to compute and display the position and color of each point.
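A sketch of this step under the above layout (six floats per point) is given below; the shader attribute names, the mvp uniform and the function name are illustrative, and the buffer calls assume a current OpenGL context (e.g. inside QOpenGLWidget::initializeGL):

// Illustrative upload of the N x 6 point array and the minimal GLSL shader pair for S24.
#include <QOpenGLBuffer>
#include <QVector>

void uploadCloud(QOpenGLBuffer& vbo, const QVector<float>& data /* px,py,pz,pR,pG,pB per point */)
{
    if (!vbo.isCreated())
        vbo.create();
    vbo.bind();
    vbo.allocate(data.constData(), int(data.size() * sizeof(float)));
    vbo.release();
}

// Vertex shader: passes the per-point position and color through.
static const char* kVertexShader = R"(
    #version 330 core
    layout(location = 0) in vec3 aPos;
    layout(location = 1) in vec3 aColor;
    uniform mat4 mvp;              // camera / view-angle / rotation transform (cf. S26)
    out vec3 vColor;
    void main() {
        gl_Position = mvp * vec4(aPos, 1.0);
        vColor = aColor;
    }
)";

// Fragment shader: outputs the interpolated point color.
static const char* kFragmentShader = R"(
    #version 330 core
    in vec3 vColor;
    out vec4 fragColor;
    void main() { fragColor = vec4(vColor, 1.0); }
)";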
The step S25 specifically includes: a display duration t_P of the single-frame point cloud in the picture is set; if the interface receives the point cloud at time t_1, the frame point cloud is displayed within the range [t_1, t_1 + t_P]; beyond t_1 + t_P, the frame data is replaced and updated, realizing dynamic display and releasing memory in time.
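One way to realize this timed replacement is sketched below, assuming a QOpenGLWidget-based viewer; the class and member names and the example value of t_P are assumptions, not the patent's own code:

// Illustrative timed replacement for step S25: a frame received at t1 is shown
// for tP seconds and then released.
#include <QOpenGLWidget>
#include <QTimer>
#include <QVector>

class CloudViewer : public QOpenGLWidget
{
    Q_OBJECT
public:
    explicit CloudViewer(QWidget* parent = nullptr) : QOpenGLWidget(parent)
    {
        m_expire.setSingleShot(true);
        connect(&m_expire, &QTimer::timeout, this, &CloudViewer::releaseFrame);
    }

public slots:
    void onFrame(const QVector<float>& frame)     // arrives at time t1
    {
        m_frame = frame;                          // replace the previous frame
        m_expire.start(int(m_tP * 1000.0f));      // display window [t1, t1 + tP]
        update();                                 // schedule a repaint
    }

private slots:
    void releaseFrame()
    {
        m_frame.clear();                          // free the frame data after tP
        update();
    }

private:
    QVector<float> m_frame;                       // N x 6 floats per frame
    QTimer m_expire;
    float m_tP = 0.1f;                            // display duration tP (example value)
};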
The single frame data in step S3 refers to data obtained by single period scanning of the laser radar.
With reference to the single-frame target extraction flow chart of fig. 2, the steps are: a loop is set up to traverse all points in one frame of point cloud data; for each point, it is judged whether the point belongs to the background; if it does, the next point is processed; if it does not, the voxel in which the point lies is placed into the target candidate pool; the "voxel connection method" is then applied to all targets of the single frame to complete the segmentation. Step S3 specifically includes:
S31, establishing voxels;
S32, obtaining background data;
S33, identifying targets;
S34, confirming targets.
The step S31 specifically includes: a background sampling time t_s = 5 s is set, and within [0, t_s] only the background point cloud is present; first, the maximum absolute values of the background point cloud coordinates along the X, Y and Z axes (rounded up if they are floating-point values) are recorded as x_m, y_m and z_m (in meters); a cuboid completely enclosing all current point clouds can then be established in the spatial rectangular coordinate system, with range [-x_m, x_m], [-y_m, y_m], [-z_m, z_m]; cubic voxels with an edge length of 0.1 m (the precision is adjustable) are established, dividing the point cloud space into 20·x_m × 20·y_m × 20·z_m voxels.
The step S32 specifically includes: the number N_s of scan points falling into a voxel during t_s is calculated, and the maximum reflectivity r_max and minimum reflectivity r_min among these N_s points are selected; the background reflectivity interval of the voxel is [r_min, r_max]; similarly, the reflectivity intervals of all voxels in the enclosing cuboid are recorded and can be stored in computer memory as voxel attributes.
The conditions for identifying a target in step S33 are: after background acquisition is completed, when a moving target appears, the laser irradiates the target and produces an echo; single-frame echo data can be judged to be a target when it meets one of the following conditions:
(1) the position p_i(x_i, y_i, z_i) does not belong to any voxel unit; in this case, the range of the enclosing cuboid should be expanded according to the target position coordinates so as to completely contain the target point cloud;
(2) the position p_i(x_i, y_i, z_i) of the target point belongs to a certain voxel, but its reflectivity information r_i is not within the background reflectivity interval corresponding to that voxel.
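A compact sketch of steps S31 to S33 under the 0.1 m grid described above is given below; the hash-based storage, the handling of voxels that received no background samples, and all function names are assumptions for illustration:

// Illustrative voxelization, background-interval bookkeeping and target test (S31-S33).
// xm, ym, zm are the rounded-up absolute coordinate maxima of the background cloud (metres).
#include <QHash>
#include <QVector3D>
#include <QtGlobal>
#include <cfloat>

struct ReflInterval { float rMin = FLT_MAX; float rMax = -FLT_MAX; };

// Map a point to the linear index of its 0.1 m voxel, or -1 if it lies
// outside the enclosing cuboid (condition (1) of S33).
inline int voxelIndex(const QVector3D& p, int xm, int ym, int zm)
{
    const int nx = 20 * xm, ny = 20 * ym, nz = 20 * zm;
    const int ix = int((p.x() + xm) * 10.0f);
    const int iy = int((p.y() + ym) * 10.0f);
    const int iz = int((p.z() + zm) * 10.0f);
    if (ix < 0 || iy < 0 || iz < 0 || ix >= nx || iy >= ny || iz >= nz)
        return -1;
    return (ix * ny + iy) * nz + iz;               // linear voxel index
}

// During [0, ts]: widen the background reflectivity interval of the voxel hit by p.
void accumulateBackground(QHash<int, ReflInterval>& bg, const QVector3D& p,
                          float reflectivity, int xm, int ym, int zm)
{
    const int idx = voxelIndex(p, xm, ym, zm);
    if (idx < 0)
        return;
    ReflInterval& iv = bg[idx];
    iv.rMin = qMin(iv.rMin, reflectivity);
    iv.rMax = qMax(iv.rMax, reflectivity);
}

// After [0, ts]: a point is a target candidate if its voxel is unknown or its
// reflectivity lies outside the stored interval (conditions (1) and (2) of S33).
bool isTargetPoint(const QHash<int, ReflInterval>& bg, const QVector3D& p,
                   float reflectivity, int xm, int ym, int zm)
{
    const int idx = voxelIndex(p, xm, ym, zm);
    if (idx < 0 || !bg.contains(idx))
        return true;
    const ReflInterval iv = bg.value(idx);
    return reflectivity < iv.rMin || reflectivity > iv.rMax;
}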
The step S34 specifically includes: the point cloud information identified from the background may represent multiple targets, so they must be effectively segmented; the segmentation criterion is whether the voxels containing the targets are connected, and multiple targets are extracted based on the "voxel connection method", with reference to the target segmentation flow chart of the "voxel connection method" shown in fig. 3.
The method comprises the following specific steps:
S341, for the enclosing cuboid, all voxels containing target point cloud are marked as "bright grids"; the center-point coordinates of each bright grid are stored in a variable of type QVector3D and collected into an object blist of type QList<QVector3D>; as the candidate pool, blist represents the bright-grid sequence in which all target point clouds are located;
S342, any point m_0(x_0, y_0, z_0) in blist is selected; it is the center of voxel M_0; the number of voxels sharing a face with M_0 is 6, and each of its 12 edges is shared with 1 further voxel, so the number of other voxels connected to M_0 is 18; each adjacent voxel is denoted M_0i (i = 0, 1, 2, ..., 17);
S343, according to the relative positional relationship (u_i, v_i, w_i) between M_0i and M_0, the center coordinates of each adjacent voxel are calculated as m_0i(x_0 + u_i, y_0 + v_i, z_0 + w_i);
S344, m_0i is searched for in blist; if found, it is stored in the center-point array blist_0 of target 0, whose data type is QList<QVector3D>; to prevent repeated searches, m_0i must be deleted from blist; in other words, m_0i is moved from the candidate pool blist into the target pool blist_0;
S345, for the first element m_01 in blist_0, its 18 adjacent voxels are searched and their center coordinates obtained, denoted m_01i(x_01 + u_i, y_01 + v_i, z_01 + w_i) (i = 0, 1, 2, ..., 17); if such a coordinate exists in blist, it is stored in blist_0 and m_01i is deleted from blist; in this way every element of blist_0 can be traversed; moreover, blist_0 keeps expanding during the traversal, ensuring that bright grids belonging to the current target are added continuously;
S346, when the traversal ends, i.e. the number of elements in blist_0 no longer increases, the layer-by-layer bright-grid selection process centered on voxel M_0 ends; blist_0 then constitutes all the bright grids of target 0;
S347, the number of elements remaining in blist is judged; if it is 0, only one target exists and its bright grids are those in blist_0; if it is greater than 0, there are further targets; in that case, following the idea of steps S342 to S347, the multi-target sequences blist_1, blist_2, ..., blist_n are extracted from blist layer by layer until the number of elements in the candidate pool blist is 0, which indicates that extraction of all targets is complete.
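The layer-by-layer growth of steps S341 to S347 can be sketched as follows; the tolerance-based comparison of bright-grid centers and all helper names are illustrative rather than the patent's own implementation:

// Illustrative "voxel connection" segmentation: grow one target pool at a time
// from the candidate pool blist. Distances are voxel-center coordinates (0.1 m grid).
#include <QList>
#include <QVector3D>

static bool takeFromPool(QList<QVector3D>& blist, const QVector3D& c)
{
    for (int i = 0; i < blist.size(); ++i) {
        if ((blist[i] - c).length() < 0.01f) {   // same bright-grid center
            blist.removeAt(i);                   // prevent repeated searches
            return true;
        }
    }
    return false;
}

QList<QList<QVector3D>> extractTargets(QList<QVector3D> blist)
{
    QList<QList<QVector3D>> targets;             // blist_0, blist_1, ...
    const float s = 0.1f;                        // voxel edge length

    while (!blist.isEmpty()) {
        QList<QVector3D> pool;                   // current target pool
        pool.append(blist.takeFirst());          // seed m_0

        for (int k = 0; k < pool.size(); ++k) {  // pool keeps expanding while traversing
            const QVector3D m = pool[k];
            // 18 connected neighbours: 6 face-adjacent + 12 edge-adjacent voxels
            for (int dx = -1; dx <= 1; ++dx)
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dz = -1; dz <= 1; ++dz) {
                        const int n = qAbs(dx) + qAbs(dy) + qAbs(dz);
                        if (n == 0 || n == 3)
                            continue;            // skip the voxel itself and the 8 corners
                        const QVector3D c = m + QVector3D(dx * s, dy * s, dz * s);
                        if (takeFromPool(blist, c))
                            pool.append(c);
                    }
        }
        targets.append(pool);                    // bright grids of one target
    }
    return targets;
}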
The step S4 specifically comprises the following steps:
S41, according to the bright-grid array of each target in the current frame, the center-point position Target_i of each target is recorded;
S42, the bright-grid array of each target in the next frame is obtained and the center-point position Target_j of each target is recorded; correlation analysis is performed between the bright-grid arrays of each target in the two consecutive frames, and the array in the later frame with the maximum correlation to a given target in the earlier frame is found by traversal; the two arrays are then considered to correspond to the same target, thereby realizing target tracking. Specifically, taking the bright-grid sequence blist_0 of target 0 in the previous frame image as the reference, it is compared with every target bright-grid sequence in the following frame; since the frame interval is extremely short (0.1 s), the sequence in the following frame that has the largest number of elements in common with blist_0 is identified as the same target; similarly, inter-frame correlation analysis can be performed for every target in the previous frame image;
S43, the spatial distance between the center points Target_i and Target_j of the same target in the two frames is calculated to obtain the target speed;
S44, the next frame is set as the current frame and, when the following frame arrives, the iteration is completed according to the methods of steps S41, S42 and S43, so that each target speed is updated once per laser radar scanning period.
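A compact sketch of the matching and speed update in steps S41 to S44 follows; the shared-bright-grid count stands in for the correlation measure described above, and the frame period value and helper names are illustrative:

// Illustrative inter-frame matching and speed estimation (S41-S44).
#include <QList>
#include <QVector3D>

static int sharedGrids(const QList<QVector3D>& a, const QList<QVector3D>& b)
{
    int n = 0;
    for (const QVector3D& p : a)
        for (const QVector3D& q : b)
            if ((p - q).length() < 0.01f) { ++n; break; }   // common bright grid
    return n;
}

static QVector3D centre(const QList<QVector3D>& grids)
{
    QVector3D c;
    for (const QVector3D& p : grids)
        c += p;
    return grids.isEmpty() ? c : c / float(grids.size());   // target center point
}

// Match a previous-frame target against the next frame's targets and return
// its estimated speed in m/s over one scan period (0.1 s in the example above).
float trackAndMeasure(const QList<QVector3D>& prev,
                      const QList<QList<QVector3D>>& next, float framePeriod = 0.1f)
{
    int best = -1, bestShared = -1;
    for (int j = 0; j < next.size(); ++j) {
        const int s = sharedGrids(prev, next[j]);
        if (s > bestShared) { bestShared = s; best = j; }   // maximum correlation
    }
    if (best < 0)
        return 0.0f;                                        // no target in the next frame
    const float dist = (centre(next[best]) - centre(prev)).length();
    return dist / framePeriod;                              // target speed
}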
In summary, the laser radar target demonstration and extraction method in the Qt development environment comprises: establishing an ROS subscription node in Qt to obtain point cloud data; dynamically displaying the color point cloud using the OPENGL module of Qt; establishing the voxel model and acquiring the background reflectivity intervals; confirming the voxels of single-frame targets; segmenting the targets by the voxel connection method; and realizing target tracking using the inter-frame correlation. With this method, three-dimensional point cloud data can be acquired by subscribing to the messages published by the laser radar sensor in ROS, a three-dimensional color point cloud model is drawn and rendered using OPENGL, single-frame multi-target segmentation and extraction is then completed with the voxel connection method, and target tracking and real-time speed measurement are realized by comparing the correlation of target voxels between frames.
The above embodiments are not intended to limit the present invention; the invention is not restricted to the above examples, and its scope of protection is defined by the following claims.

Claims (10)

1. The laser radar target demonstration and extraction method under the Qt development environment is characterized by comprising the following steps:
S1, subscribing laser radar point cloud data in ROS by utilizing Qt;
S2, dynamically demonstrating color three-dimensional point cloud data by utilizing an OPENGL module in the Qt;
S3, completing multi-target extraction of single frame data through a voxel connection method;
S4, completing multi-target tracking through inter-frame correlation analysis.
2. The method for demonstrating and extracting the lidar target in the Qt development environment of claim 1, wherein the step S1 specifically includes:
S11, installing Qt and ROS Melodic in a Ubuntu desktop operating system;
S12, adding the ROS-dependent dynamic link libraries and their paths in the Qt project file;
S13, creating a subscription node in Qt, wherein the subscription node is used for subscribing to laser radar point cloud data in ROS;
S14, after the subscription node is created, starting the laser radar publisher node, and obtaining the data published by the laser radar in its message format by overriding the static callback function of the subscription node.
3. The method for demonstrating and extracting the lidar target in the Qt development environment of claim 1, wherein the step S2 specifically includes:
S21, converting the point cloud data format;
S22, transferring the data out;
S23, mapping single-frame point cloud reflectivity gray-scale data into color data using OPENCV;
S24, rendering the point cloud data using OPENGL;
S25, dynamic updating;
S26, graphic transformation.
4. The method for demonstrating and extracting a laser radar target in a Qt development environment according to claim 1, wherein the single-frame data in step S3 is the data obtained in a single scanning period of the laser radar, and step S3 specifically includes:
S31, establishing voxels;
S32, obtaining background data;
S33, identifying targets;
S34, confirming targets.
5. The method for demonstrating and extracting the laser radar target in the Qt development environment of claim 1, wherein the method comprises the following steps: the step S4 specifically includes:
S41, recording the center-point position of each target according to the bright-grid array of each target in the current frame;
S42, obtaining the bright-grid array of each target in the next frame and recording the center-point position of each target; performing correlation analysis between the bright-grid arrays of each target in the two consecutive frames, and finding, by traversal, the array in the later frame with the maximum correlation to a given target in the earlier frame;
S43, calculating the spatial distance between the two frames of the same target to obtain the target speed;
S44, setting the next frame as the current frame and, when the following frame arrives, completing the iteration according to the methods of steps S41, S42 and S43, so that each target speed is updated once per laser radar scanning period.
6. The method for demonstrating and extracting a laser radar target in a Qt development environment of claim 3, wherein the method comprises the following steps:
the format conversion in step S21 refers to converting the point cloud data type using a built-in function of the ROS library;
the data in step S22 refers to the point cloud data in the static callback function of step S1;
the single-frame point cloud reflectivity gray-scale data in step S23 refers to the data obtained in a single scanning period of the laser radar;
in step S24, any point p in the point cloud data should include position information (p_x, p_y, p_z) and color information (p_R, p_G, p_B); all the information of the single-frame point cloud is written into a vertex buffer object QOpenGLBuffer;
in step S25, a display duration t_P of the single-frame point cloud in the picture is set; if the interface receives the point cloud at time t_1, the frame point cloud is displayed within the range [t_1, t_1 + t_P]; beyond t_1 + t_P, the frame data is replaced and updated, realizing dynamic display and releasing memory in time;
in step S26, mouse events in Qt are overridden in combination with the camera, view-angle and rotation functions in OPENGL, realizing image rotation by mouse dragging and image scaling by the mouse wheel, and smoothly displaying millions of point cloud points.
7. The method for demonstrating and extracting a laser radar target in a Qt development environment of claim 3, wherein the specific process of transferring the data out in step S22 is as follows: a signal-slot connection is established in the static callback function, and the data is passed to an ordinary (non-static) slot function of the class; in that ordinary slot function, a signal connected to the external designer interface class object is emitted, thereby completing the transfer of the data from the static function to the external class object through the signal-slot mechanism.
8. The method for demonstrating and extracting lidar targets in the Qt development environment of claim 5, wherein the mapping of single-frame point cloud reflectivity gray-scale data to color data using OPENCV in step S23 includes the steps of:
S231, installing OPENCV in the Ubuntu desktop operating system;
S232, adding the OPENCV-dependent dynamic link libraries to the Qt project file.
9. The method for demonstrating and extracting the laser radar target in the Qt development environment of claim 4, wherein the step S31 specifically includes: a background sampling time t_s = 5 s is set, and within [0, t_s] only the background point cloud is present; first, the maximum absolute values of the background point cloud coordinates along the X, Y and Z axes are obtained and recorded as x_m, y_m and z_m (in meters); a cuboid completely enclosing all current point clouds can then be established in the spatial rectangular coordinate system, with range [-x_m, x_m], [-y_m, y_m], [-z_m, z_m]; cubic voxels with an edge length of 0.1 m are established, dividing the point cloud space into 20·x_m × 20·y_m × 20·z_m voxels;
the step S32 specifically includes: the number N_s of scan points falling into a voxel during t_s is calculated, and the maximum reflectivity r_max and minimum reflectivity r_min among these N_s points are selected; the background reflectivity interval of the voxel is [r_min, r_max]; similarly, the reflectivity intervals of all voxels in the enclosing cuboid are recorded and can be stored in computer memory as voxel attributes;
the conditions for identifying a target in step S33 are: after background acquisition is completed, when a moving target appears, the laser irradiates the target and produces an echo; single-frame echo data can be judged to be a target when it meets one of the following conditions:
(1) the position p_i(x_i, y_i, z_i) does not belong to any voxel unit; in this case, the range of the enclosing cuboid should be expanded according to the target position coordinates so as to completely contain the target point cloud;
(2) the position p_i(x_i, y_i, z_i) of the target point belongs to a certain voxel, but its reflectivity information r_i is not within the background reflectivity interval corresponding to that voxel;
the step S34 specifically includes: the point cloud information identified from the background may represent multiple targets, so they must be effectively segmented; the segmentation criterion is whether the voxels containing the targets are connected, and multiple targets are extracted based on the "voxel connection method".
10. The method for demonstrating and extracting the laser radar target in the Qt development environment according to claim 4, wherein the voxel connection method extracts multiple targets by the following specific steps:
S341, for the enclosing cuboid, all voxels containing target point cloud are marked as "bright grids"; the center-point coordinates of each bright grid are stored in a variable of type QVector3D and collected into an object blist of type QList<QVector3D>; as the candidate pool, blist represents the bright-grid sequence in which all target point clouds are located;
S342, any point m_0(x_0, y_0, z_0) in blist is selected; it is the center of voxel M_0; the number of voxels sharing a face with M_0 is 6, and each of its 12 edges is shared with 1 further voxel, so the number of other voxels connected to M_0 is 18; each adjacent voxel is denoted M_0i (i = 0, 1, 2, ..., 17);
S343, according to the relative positional relationship (u_i, v_i, w_i) between M_0i and M_0, the center coordinates of each adjacent voxel are calculated as m_0i(x_0 + u_i, y_0 + v_i, z_0 + w_i);
S344, m_0i is searched for in blist; if found, it is stored in the center-point array blist_0 of target 0, whose data type is QList<QVector3D>; to prevent repeated searches, m_0i must be deleted from blist; in other words, m_0i is moved from the candidate pool blist into the target pool blist_0;
S345, for the first element m_01 in blist_0, its 18 adjacent voxels are searched and their center coordinates obtained, denoted m_01i(x_01 + u_i, y_01 + v_i, z_01 + w_i) (i = 0, 1, 2, ..., 17); if such a coordinate exists in blist, it is stored in blist_0 and m_01i is deleted from blist; in this way every element of blist_0 can be traversed; moreover, blist_0 keeps expanding during the traversal, ensuring that bright grids belonging to the current target are added continuously;
S346, when the traversal ends, i.e. the number of elements in blist_0 no longer increases, the layer-by-layer bright-grid selection process centered on voxel M_0 ends; blist_0 then constitutes all the bright grids of target 0;
S347, the number of elements remaining in blist is judged; if it is 0, only one target exists and its bright grids are those in blist_0; if it is greater than 0, there are further targets; in that case, following the idea of steps S342 to S347, the multi-target sequences blist_1, blist_2, ..., blist_n are extracted from blist layer by layer until the number of elements in the candidate pool blist is 0, which indicates that extraction of all targets is complete.
CN202310002862.7A 2023-01-03 Laser radar target demonstration and extraction method in Qt development environment Active CN116091533B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310002862.7A CN116091533B (en) 2023-01-03 Laser radar target demonstration and extraction method in Qt development environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310002862.7A CN116091533B (en) 2023-01-03 Laser radar target demonstration and extraction method in Qt development environment

Publications (2)

Publication Number Publication Date
CN116091533A true CN116091533A (en) 2023-05-09
CN116091533B CN116091533B (en) 2024-05-31



Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130222369A1 (en) * 2012-02-23 2013-08-29 Charles D. Huston System and Method for Creating an Environment and for Sharing a Location Based Experience in an Environment
WO2019023892A1 (en) * 2017-07-31 2019-02-07 SZ DJI Technology Co., Ltd. Correction of motion-based inaccuracy in point clouds
US20200043182A1 (en) * 2018-07-31 2020-02-06 Intel Corporation Point cloud viewpoint and scalable compression/decompression
US20200074230A1 (en) * 2018-09-04 2020-03-05 Luminar Technologies, Inc. Automatically generating training data for a lidar using simulated vehicles in virtual space
CN110210389A (en) * 2019-05-31 2019-09-06 东南大学 A kind of multi-targets recognition tracking towards road traffic scene
CN110264468A (en) * 2019-08-14 2019-09-20 长沙智能驾驶研究院有限公司 Point cloud data mark, parted pattern determination, object detection method and relevant device
CN110853037A (en) * 2019-09-26 2020-02-28 西安交通大学 Lightweight color point cloud segmentation method based on spherical projection
CN111476822A (en) * 2020-04-08 2020-07-31 浙江大学 Laser radar target detection and motion tracking method based on scene flow
CN114746872A (en) * 2020-04-28 2022-07-12 辉达公司 Model predictive control techniques for autonomous systems
CN111781608A (en) * 2020-07-03 2020-10-16 浙江光珀智能科技有限公司 Moving target detection method and system based on FMCW laser radar
CN113075683A (en) * 2021-03-05 2021-07-06 上海交通大学 Environment three-dimensional reconstruction method, device and system
CN114419152A (en) * 2022-01-14 2022-04-29 中国农业大学 Target detection and tracking method and system based on multi-dimensional point cloud characteristics
CN114862901A (en) * 2022-04-26 2022-08-05 青岛慧拓智能机器有限公司 Road-end multi-source sensor fusion target sensing method and system for surface mine
CN115032614A (en) * 2022-05-19 2022-09-09 北京航空航天大学 Bayesian optimization-based solid-state laser radar and camera self-calibration method
CN115330923A (en) * 2022-08-10 2022-11-11 小米汽车科技有限公司 Point cloud data rendering method and device, vehicle, readable storage medium and chip

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ARASH KIANI: "Point Cloud Registration of Tracked Objects and Real-time Visualization of LiDAR Data on Web and Web VR", Master's Thesis in Informatics, 15 May 2020 (2020-05-15), pages 1-56 *
吴开阳: "Research on 3D Multi-Target Detection and Tracking Technology Based on Lidar Sensors", China Masters' Theses Full-text Database, Information Science and Technology, no. 2022, 15 June 2022 (2022-06-15), pages 136-366 *
吴阳勇 et al.: "Design of Radar Signal Verification Software by Mixed Programming of Qt and MATLAB", Electronic Measurement Technology, vol. 43, no. 22, 23 November 2020 (2020-11-23), pages 13-18 *
石泽亮: "Research on Control Methods for Visual Servo Manipulators of Mobile Robots", China Masters' Theses Full-text Database, Information Science and Technology, no. 2022, 15 November 2022 (2022-11-15), pages 140-111 *
赵次郎: "3D Scene Reconstruction and Monitoring Based on Laser-Vision Data Fusion", China Masters' Theses Full-text Database, Information Science and Technology, no. 2015, 15 July 2015 (2015-07-15), pages 138-1060 *

Similar Documents

Publication Publication Date Title
CN111932671A (en) Three-dimensional solid model reconstruction method based on dense point cloud data
CN108986195B (en) Single-lens mixed reality implementation method combining environment mapping and global illumination rendering
Heo et al. Productive high-complexity 3D city modeling with point clouds collected from terrestrial LiDAR
Richter et al. Concepts and techniques for integration, analysis and visualization of massive 3D point clouds
WO2017206325A1 (en) Calculation method and apparatus for global illumination
Virtanen et al. Interactive dense point clouds in a game engine
CN110070488B (en) Multi-angle remote sensing image forest height extraction method based on convolutional neural network
Liang et al. Visualizing 3D atmospheric data with spherical volume texture on virtual globes
CN107220372B (en) A kind of automatic laying method of three-dimensional map line feature annotation
CN110097582B (en) Point cloud optimal registration and real-time display system and working method
CN113593027B (en) Three-dimensional avionics display control interface device
CN112927359A (en) Three-dimensional point cloud completion method based on deep learning and voxels
US20220351463A1 (en) Method, computer device and storage medium for real-time urban scene reconstruction
CN116109765A (en) Three-dimensional rendering method and device for labeling objects, computer equipment and storage medium
Balloni et al. Few shot photogrametry: A comparison between nerf and mvs-sfm for the documentation of cultural heritage
CN116091533B (en) Laser radar target demonstration and extraction method in Qt development environment
Bullinger et al. 3D Surface Reconstruction from Multi-Date Satellite Images
Buck et al. Ignorance is bliss: flawed assumptions in simulated ground truth
CN116091533A (en) Laser radar target demonstration and extraction method in Qt development environment
Crues et al. Digital Lunar Exploration Sites (DLES)
JP2009122998A (en) Method for extracting outline from solid/surface model, and computer software program
Pohle-Fröhlich et al. Roof Segmentation based on Deep Neural Networks.
Lin et al. A novel tree-structured point cloud dataset for skeletonization algorithm evaluation
CN116993894B (en) Virtual picture generation method, device, equipment, storage medium and program product
Conde et al. LiDAR Data Processing for Digitization of the Castro of Santa Trega and Integration in Unreal Engine 5

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant