CN116071400B - Target track tracking method based on laser radar equipment

Target track tracking method based on laser radar equipment

Info

Publication number
CN116071400B
CN116071400B (application CN202310354196.3A)
Authority
CN
China
Prior art keywords
point
container
target
data
distance
Prior art date
Legal status
Active
Application number
CN202310354196.3A
Other languages
Chinese (zh)
Other versions
CN116071400A (en)
Inventor
胡彦峰
张合勇
张洪国
邓媛
邹延培
Current Assignee
Zhejiang Guangpo Intelligent Technology Co., Ltd.
Original Assignee
Zhejiang Guangpo Intelligent Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Zhejiang Guangpo Intelligent Technology Co., Ltd.
Priority to CN202310354196.3A
Publication of CN116071400A
Application granted
Publication of CN116071400B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention discloses a target track tracking method based on laser radar (lidar) equipment, addressing the problems of the huge data volume of background deep learning and inaccurate preprocessed data in the prior art. The method comprises the following steps: S1: acquiring lidar three-dimensional point cloud data and branching according to the execution state; S2: when the execution state is 3D scene background deep learning, performing deep learning on the three-dimensional point cloud data of each point feature in the 3D scene; S3: when the execution state is a non-learning state and the method is in the target track tracking state, performing detection. Working in a data-stream mode, invalid-data rejection, background rejection and target detection are performed on every frame, and a target track can be detected in each frame, so the method offers high real-time performance, strong universality and convenient deployment.

Description

Target track tracking method based on laser radar equipment
Technical Field
The invention relates to the field of target track tracking, in particular to a target track tracking method based on laser radar equipment.
Background
Target track detection is currently a key engine of innovation in fields such as security, power-line inspection, logistics distribution, autonomous driving and obstacle avoidance, driving both technical development and industrial growth. From a technical point of view, target track tracking in a three-dimensional scene lets a machine learn scene information from radar data, and combining it with three-dimensional scene deep learning gives better results, higher efficiency and better interpretability. Innovative product functions improve the user experience and help enterprises reduce costs, increase efficiency, and transform and upgrade their operations.
Target track tracking in a three-dimensional scene spans data learning through inference and detection. The training stage requires large amounts of data and long periods of computing power, especially when fully connected layers are involved, and is generally completed on a powerful server. The detection stage generally runs in a fixed three-dimensional space, aiming to detect and track objects in the three-dimensional scene; it demands high real-time performance and heavy computation, and is generally also performed on a central server.
The detection and tracking stage places high demands on computing power and latency; if the algorithm is deployed directly on a terminal, insufficient computing power or excessively long computation times are likely. It is therefore necessary to preprocess the three-dimensional data effectively and to run an efficient target detection and trajectory algorithm.
Currently available three-dimensional target track tracking schemes include:
1) Point-based object detection predicts 3D objects directly from the raw point cloud data: point cloud features are extracted through a point-based backbone network and point cloud operators, 3D boxes are predicted from the down-sampled points and features, and the track is followed after the target is found. This involves point cloud sampling, multi-dimensional learning of feature vectors, feature matching and the like. In the target detection stage, most algorithms do not preprocess the data well, so the 3D bounding box detected from the three-dimensional laser point cloud is not accurate enough, the error is large, and the moving track of the target cannot be located and tracked accurately.
2) Algorithms and processing flows based on 3D target detection networks (such as PointRCNN, PointRGNN and CenterPoint) are complex, and their frameworks are huge and hard to master; although the tracking results improve considerably, the computing power consumed is excessive, and the accuracy of the results is still limited by the characteristics of the sensor itself. Owing to beam-count and range limitations, the point cloud obtained by a lidar is often very sparse at long distances (> 80 m), so the size or label of an object is often difficult to detect and locate accurately.
In the traditional three-dimensional target track tracking mode, the data volume is large: not only is multi-dimensional information along the x, y and z axes involved, but effective deep learning of the spatial background is also required. Several problems follow:
i. The data volume for scene background deep learning is huge. In a typical real-time application scenario, such as target detection in an underground garage, millions of points per second, a massive amount of data, must be combed through. This directly means the algorithm is not efficient enough and that large amounts of computing power are needed to support learning the three-dimensional background data.
ii. Computation and inference over the scene background feature points are expensive. At the present stage, the feature vectors of each piece of information in the scene are first extracted and fused, in some cases involving partial or full connection. The model built in the learning stage carries huge amounts of data, and the subsequent detection algorithm is inefficient.
iii. Preprocessed data are not accurate enough. During target detection, most algorithms do not process the data effectively in the preprocessing stage, so the 3D bounding box (a cuboid target frame) detected directly from the three-dimensional laser point cloud has low precision, making such methods difficult to popularize and apply in the market.
Disclosure of Invention
The invention mainly solves the problems of the huge data volume of background deep learning and inaccurate preprocessed data in the prior art, providing a target track tracking method based on laser radar equipment that performs smooth, real-time and accurate detection and positioning on an ordinary personal host.
The technical problems of the invention are mainly solved by the following technical solution:
A target track tracking method based on laser radar equipment, comprising:
S1: acquiring lidar three-dimensional point cloud data and branching according to the execution state;
S2: when the execution state is 3D scene background deep learning, performing deep learning on the three-dimensional point cloud data of each point feature in the 3D scene;
S3: when the execution state is a non-learning state, if the method is in the target-track-tracking state, analyzing the three-dimensional point cloud data of each frame, checking it against the deep-learned container data and eliminating background points to obtain target data for tracking.
Efficient storage of background data and an efficient deep learning algorithm: using an efficient language (C++), efficient background deep learning and fast retrieval of the scene data are performed for the 3D scene, so that background rejection and target detection can be carried out efficiently on each frame of the data stream in subsequent detection. Low computing requirements, high real-time performance and low delay: working in data-stream mode, invalid-data rejection, background rejection and target detection are performed on each frame of data; a target track can be detected in every frame, and a processing rate equal to the output frame rate (fps) can be realized. Strong universality and convenient deployment: the method applies to any conventional 3D scene and can be deployed quickly in a personal host environment, without an expensive central server or cloud server. The target's 3D bounding box is selected accurately and the track-tracking precision is high: thanks to the preprocessing of the data stream and the subsequent processing with scene-specific parameters for outlier removal, cluster segmentation and the like, interference points are removed effectively during detection, the target points are found accurately, and the target's 3D bounding box is solved accurately.
Preferably, the lidar camera acquires three-dimensional point cloud data in a streaming mode.
Low computing requirements, high real-time performance and low delay: working in data-stream mode, invalid-data rejection, background rejection and target detection are performed on each frame of data; a target track can be detected in every frame, and a processing rate equal to the output frame rate (fps) can be realized.
Preferably, the deep learning process includes:
setting parameter thresholds and filtering invalid three-dimensional point cloud data in the 3D scene;
creating a hashmap point-feature container; generating the keys of the hashmap point-feature container with a hashkey algorithm; generating a depth-feature container set for each point;
and performing depth-feature data retrieval for each point of each frame, learning any depth-feature data not yet learned and putting it into the depth-feature container set.
After a certain number of data-stream frames have been learned continuously, the point cloud feature data of all points in the 3D scene have been stored efficiently and deep-learned.
Preferably, the set parameter thresholds include a minimum depth threshold for filtering the point cloud data stream, a maximum depth threshold for filtering the point cloud data stream, and a threshold on the calculated world-coordinate distance between two points. Most of the invalid three-dimensional point cloud data are thereby removed.
Preferably, the hashkey algorithm includes:
converting the X-axis X coordinate information and the Y-axis Y coordinate information of each point to string form;
and splicing the X-axis and Y-axis coordinate values into a key, thereby generating the depth-feature data set of each point.
The Z-axis depth value serves as the value of the key-value pair (value = Z-axis depth value).
Preferably, step S3 includes the following steps:
setting parameter thresholds, filtering invalid three-dimensional point cloud data, performing point-information search and point-distance judgment, obtaining target points and placing them into a container A;
setting an outlier threshold and executing the outlier algorithm;
setting distance thresholds and performing several rounds of cluster segmentation on all points in container A to obtain a target point-set container B;
and setting a point-count threshold for the outlineBox set, finding the center point of container B, and putting each target into its outlineBox set.
Thanks to the preprocessing of the data stream and the subsequent processing with scene-specific parameters for outlier removal, cluster segmentation and the like, interference points are removed effectively during detection, the target points are found accurately, and the target's 3D bounding box is solved accurately.
Preferably, a threshold on the distance between the current point and the background points is set, and the point-information search and point-distance judgment are executed as follows:
the corresponding container is looked up through the key generated from the point information;
the current depth value is compared against each depth value in the container; if an absolute distance smaller than the threshold is found in the point's depth container, the point is a background point, otherwise it is a target point;
the target point is marked as a point to be processed and inserted into the to-be-processed point-set container A.
Preferably, a mask bit is created for each point, with the initial state of each element set to 0, and the to-be-processed point-set container A is processed by the outlier algorithm:
setting the outlier threshold, computing the distance between the current point and the previous point, and if it is larger than the configured outlier threshold, identifying the point as an outlier and eliminating it.
Preferably, when the clustering algorithm is applied to point-set container A, the distance thresholds set include:
a current-point X distance threshold, a current-point Y distance threshold and a current-point Z distance threshold;
container A is traversed, the first point is taken out and distance operations are performed against the other points in container A, whose detected state is marked;
the distance to a cluster's center point is judged: when it is smaller than the distance threshold, the point is stored and counted with that cluster and the cluster's center-point information is updated, otherwise it is not counted with that cluster;
and when the number of detected points is greater than the set number, a target is detected, and its point set is classified, stored and inserted into the target point-set container B.
Preferably, the center point of the target point-set container B is solved, and an outlineBox-set point-count threshold is set;
when the number of points in container B reaches the outlineBox-set point-count threshold, the object is a valid target, and rendering and track tracking are executed;
a displacement threshold is set and the offset between the center points of the new frame and the previous frame is judged: if it is smaller than the threshold the target is static, and if larger, the target is moving.
The beneficial effects of the invention are as follows:
1. Efficient storage of background data and an efficient deep learning algorithm: using an efficient language (C++), efficient background deep learning and fast retrieval of the scene data are performed for the 3D scene, so that background rejection and target detection can be carried out efficiently on each frame of the data stream in subsequent detection.
2. Low computing requirements, high real-time performance and low delay: working in data-stream mode, invalid-data rejection, background rejection and target detection are performed on each frame of data; a target track can be detected in every frame, and a processing rate equal to the output frame rate (fps) can be realized.
3. Strong universality and convenient deployment: the method applies to any conventional 3D scene and can be deployed quickly in a personal host environment, without an expensive central server or cloud server.
4. Accurate box selection of the target's 3D bounding box and high track-tracking precision: thanks to the preprocessing of the data stream and the subsequent processing with scene-specific parameters for outlier removal, cluster segmentation and the like, interference points are removed effectively during detection, the target points are found accurately, and the target's 3D bounding box is solved accurately.
Drawings
FIG. 1 is a flow chart of target trajectory tracking of the present invention.
Detailed Description
The technical solution of the invention is further described below through an example and with reference to the accompanying drawings.
Example:
The target track tracking method based on laser radar equipment of this embodiment, as shown in FIG. 1, includes the following steps:
S1: acquiring lidar three-dimensional point cloud data and branching according to the execution state.
In this embodiment, the lidar camera acquires three-dimensional point cloud data in a streaming mode.
The radar camera is fixed at a position in the scene and remains relatively static, so that real-time three-dimensional scene data can be captured stably. A radar laser and an image sensor are installed in the radar camera device.
The scheme of this embodiment comprises: operating the radar camera, acquiring its three-dimensional data, and computing and tracking the real-time three-dimensional motion data of targets. Deep memory learning is first performed on the scene, during which the real-time three-dimensional data in the scene are processed; once a set threshold is reached, the deep memory learning of the scene is complete, and targets in the scene can then be tracked and searched.
Algorithm processing is then performed on each frame of three-dimensional radar data: the target is searched out, its real-time three-dimensional motion data and its accurate position in the three-dimensional scene are calculated, and the three-dimensional bounding box associated with that position is calculated from the position of the found target.
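As a minimal illustration of this per-frame flow, the following C++ sketch (all type and function names here are illustrative assumptions, not taken from the patent) routes each streamed frame according to the execution state; the placeholder bodies stand in for the S2 and S3 steps sketched further below:

#include <vector>

struct Point3D { float x, y, z; };            // one lidar return, coordinates in mm
using PointCloudFrame = std::vector<Point3D>; // one frame of the point cloud stream

enum class ExecState { BackgroundLearning, Tracking, Idle };

void LearnBackground(const PointCloudFrame&) {} // S2: sketched further below
void DetectAndTrack(const PointCloudFrame&) {}  // S3: sketched further below

// S1: dispatch each incoming frame according to the current execution state.
void ProcessFrame(const PointCloudFrame& frame, ExecState state) {
    switch (state) {
    case ExecState::BackgroundLearning: LearnBackground(frame); break;
    case ExecState::Tracking:           DetectAndTrack(frame);  break;
    default:                            break; // neither learning nor tracking
    }
}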
S2: and when the execution state is in the 3D scene background deep learning, performing the deep learning on the three-dimensional point cloud data of each point bit characteristic in the 3D scene.
The deep learning process comprises the following steps:
1) Setting parameter thresholds and filtering invalid three-dimensional point cloud data in the 3D scene.
The set parameter thresholds comprise the minimum depth threshold for filtering the point cloud data stream, the maximum depth threshold for filtering the point cloud data stream, and the threshold on the calculated world-coordinate distance between two points.
In this embodiment, the specific values are:
The minimum depth threshold for filtering the point cloud data stream:
nPointCloudDepthMinValue = 0 (in mm).
The maximum depth threshold for filtering the point cloud data stream:
nPointCloudDepthMaxValue = 200000 (in mm).
The threshold on the calculated world-coordinate distance between two points:
nCalcBackgroundPointSphereRadiusThreshold = 100 (in mm).
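For illustration, a minimal sketch of this pre-filter (the helper name IsValidPoint is an assumption, not from the patent):

// Keep a point only if its Z depth lies inside the configured range; points
// outside [nPointCloudDepthMinValue, nPointCloudDepthMaxValue] are invalid.
bool IsValidPoint(float nPointZ,
                  float nMinDepth = 0.0f,        // nPointCloudDepthMinValue, mm
                  float nMaxDepth = 200000.0f) { // nPointCloudDepthMaxValue, mm
    return nPointZ >= nMinDepth && nPointZ <= nMaxDepth;
}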
2) Creating the hashmap point-feature container; generating the keys of the hashmap point-feature container with the hashkey algorithm; generating a depth-feature container set for each point.
An unordered hashmap (m_umapBackgroundData, the point-feature container) is then created.
A self-defined hashkey algorithm is used, as follows:
The X-axis X coordinate information and the Y-axis Y coordinate information of each point are converted to string form, and the X and Y coordinate strings are spliced into a key that forms a unique keyword for each point, thereby organizing the depth-feature data sets of all points. The key is built as:
itoa(nPointX, szKey, 10); itoa(nPointY, szKey + strlen(szKey), 10);
3) Performing depth-feature data retrieval for each point of each frame; depth-feature data not yet learned are learned and put into the depth-feature container set.
For each point of each frame the depth-feature data can be retrieved; if new depth-feature data have not yet been learned, they are learned and put into the feature set at the same time:
m_umapBackgroundData[szKey].insert(nPointZ);
The key is generated as the map-container key in the manner described above, and the Z-axis depth value is used as the value of the key-value pair (value = Z-axis depth value).
After a certain number of data-stream frames have been learned continuously, background deep learning is judged complete when the frame-to-frame difference in learned point count falls below a configured threshold; at that point all point cloud feature data in the 3D scene have been stored efficiently and deep-learned.
The threshold on the point-count difference at which background deep learning is finished is:
nLearnFinishOffsetPointsThreshold = 50.
S3: when the execution state is a non-deep-learning state, if the method is in the target-track-tracking state, the three-dimensional point cloud data of each frame are analyzed, checked against the deep-learned container data, and background points eliminated to obtain the target data. This specifically comprises the following steps:
1) Setting parameter thresholds, filtering invalid three-dimensional point cloud data, performing point-information search and point-distance judgment, obtaining the target points and placing them into container A.
When the execution state is the non-deep-learning state, whether it is the target-track-tracking state is judged, and if so detection is performed.
The stream data of each frame are analyzed, checked against the background deep-learning container data, and background data eliminated.
First, part of the invalid point cloud data is filtered out via set parameters. The set parameters include:
The minimum depth threshold for filtering the point cloud data stream:
nPointCloudDepthMinValue = 0 (in mm).
The maximum depth threshold for filtering the point cloud data stream:
nPointCloudDepthMaxValue = 200000 (in mm).
Then, point-information search and point-distance judgment are performed via a set parameter, the distance threshold between the current point and the background points:
nCurPoint2BackgroundPointDist = 500 (in mm).
The detection rules are as follows:
The corresponding container is looked up through the key generated from the point information (each point's depth set is a red-black-tree container);
The current depth value is then compared against each depth value in that container. If an absolute distance smaller than the threshold is found in the point's depth container, the point is a background point; otherwise it is a target point. The target point is then marked as a point to be processed and inserted into the to-be-processed point-set container A.
2) Setting the outlier threshold and executing the outlier algorithm.
A mask bit is created for each point, using an array with each element's initial state set to 0 (m_pMultiObjClusterCheckIdx[i] = 0), and the outlier algorithm is applied to the to-be-processed point-set container A.
The outlier threshold is set: nCpOutlierPointThreshold1 = 1000 (in mm).
The distance between the current point and the previous point is computed; if it is larger than the configured threshold, the point is identified as an outlier and removed. The outlier removal may be run 2-3 times.
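A sketch of one outlier pass over container A, using the Point3D struct from the first sketch (the distance metric is assumed Euclidean, which the text does not specify; in practice the pass is repeated 2-3 times):

#include <cmath>
#include <vector>

// One outlier pass: a point is rejected if it lies farther than the threshold
// from the previous surviving point; the filtered set replaces container A.
std::vector<Point3D> RemoveOutliers(const std::vector<Point3D>& containerA,
                                    float fOutlierThreshold = 1000.0f) { // mm
    std::vector<Point3D> kept;
    for (const Point3D& p : containerA) {
        if (!kept.empty()) {
            const Point3D& q = kept.back();
            const float d = std::sqrt((p.x - q.x) * (p.x - q.x) +
                                      (p.y - q.y) * (p.y - q.y) +
                                      (p.z - q.z) * (p.z - q.z));
            if (d > fOutlierThreshold) continue; // outlier: eliminate
        }
        kept.push_back(p);
    }
    return kept;
}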
3) Setting distance thresholds and performing several rounds of cluster segmentation on all points in container A to obtain the target point-set container B.
When the clustering algorithm is applied to point-set container A, the distance parameters are set as follows:
Current-point-to-center-point X distance threshold:
dCurPoint2CenterPointDistXThreshold = 500 mm.
Current-point-to-center-point Y distance threshold:
dCurPoint2CenterPointDistYThreshold = 500 mm.
Current-point-to-center-point Z distance threshold:
dCurPoint2CenterPointDistZThreshold = 500 mm.
The container is traversed: the first point is taken out and distance operations are performed against the other points, whose detected state is marked.
The distance to a cluster's center point is judged; if it is smaller than the distance-parameter threshold, the point is stored and counted with that cluster and the cluster's center-point information is updated; otherwise it is not counted with that cluster.
When the number of detected points in a cluster is larger than the set threshold (10 points), a target is detected, and its point set is classified, stored and inserted into the new point-set container B.
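A sketch of this greedy cluster segmentation, again using Point3D from the first sketch (assumptions: each cluster keeps a running center point updated incrementally, a point joins the first cluster whose center it lies within along all three axes, and the 10-point minimum is named nMinClusterPoints here):

#include <cmath>
#include <cstddef>
#include <vector>

// Greedy clustering over container A: a point joins a cluster when it lies
// within the per-axis thresholds of that cluster's center point; clusters
// with more than nMinClusterPoints members become targets (container B).
std::vector<std::vector<Point3D>> ClusterPoints(
        const std::vector<Point3D>& containerA,
        float dx = 500.0f, float dy = 500.0f, float dz = 500.0f, // mm
        std::size_t nMinClusterPoints = 10) {
    std::vector<std::vector<Point3D>> clusters;
    std::vector<Point3D> centers; // running center point of each cluster
    for (const Point3D& p : containerA) {
        bool placed = false;
        for (std::size_t i = 0; i < clusters.size(); ++i) {
            const Point3D c = centers[i];
            if (std::fabs(p.x - c.x) < dx && std::fabs(p.y - c.y) < dy &&
                std::fabs(p.z - c.z) < dz) {
                clusters[i].push_back(p);
                const float n = static_cast<float>(clusters[i].size());
                // incremental centroid update with the new member
                centers[i] = { c.x + (p.x - c.x) / n,
                               c.y + (p.y - c.y) / n,
                               c.z + (p.z - c.z) / n };
                placed = true;
                break;
            }
        }
        if (!placed) { clusters.push_back({p}); centers.push_back(p); }
    }
    std::vector<std::vector<Point3D>> containerB; // keep large-enough clusters
    for (auto& c : clusters)
        if (c.size() > nMinClusterPoints) containerB.push_back(std::move(c));
    return containerB;
}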
4) Setting the outlineBox-set point-count threshold, finding the center point of container B, and putting each target into its outlineBox set.
The center point of point-set container B is solved, and the outlineBox-set point-count threshold is set:
nOutlineRenderNeedMinPointNumThreshold = 50.
When the number of points in a container-B point set reaches the threshold, the set represents a valid target, and rendering, trajectory tracking and so on can be performed. A displacement threshold is set (dMoveDistThreshold = 500 mm), and the offset between the center points of the new frame and the previous frame is judged: below the threshold the target is static, above it the target is moving. From the point-set information, the target's 3D bounding box and related data are solved accurately for real-time track tracking.
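Finally, a sketch of the validity and motion test for non-empty clusters (assumptions: the center point is taken as the cluster centroid, and the 3D bounding box is represented as the axis-aligned min/max extent of the cluster, which the patent does not specify):

#include <algorithm>
#include <cmath>
#include <vector>

// Center point of a cluster, taken here as the centroid of its members.
Point3D CenterPoint(const std::vector<Point3D>& cluster) {
    Point3D c{0.0f, 0.0f, 0.0f};
    for (const Point3D& p : cluster) { c.x += p.x; c.y += p.y; c.z += p.z; }
    const float n = static_cast<float>(cluster.size());
    return { c.x / n, c.y / n, c.z / n };
}

// Moving vs. static: the target moved if its center shifted by more than the
// displacement threshold between the previous frame and the new frame.
bool IsMoving(const Point3D& prevCenter, const Point3D& curCenter,
              float dMoveDistThreshold = 500.0f) { // mm
    const float d = std::sqrt(std::pow(curCenter.x - prevCenter.x, 2.0f) +
                              std::pow(curCenter.y - prevCenter.y, 2.0f) +
                              std::pow(curCenter.z - prevCenter.z, 2.0f));
    return d > dMoveDistThreshold;
}

// Axis-aligned 3D bounding box of a cluster (an assumed representation).
struct BoundingBox3D { Point3D minPt, maxPt; };
BoundingBox3D SolveBoundingBox(const std::vector<Point3D>& cluster) {
    BoundingBox3D b{cluster.front(), cluster.front()};
    for (const Point3D& p : cluster) {
        b.minPt = { std::min(b.minPt.x, p.x), std::min(b.minPt.y, p.y),
                    std::min(b.minPt.z, p.z) };
        b.maxPt = { std::max(b.maxPt.x, p.x), std::max(b.maxPt.y, p.y),
                    std::max(b.maxPt.z, p.z) };
    }
    return b;
}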
The scheme of this embodiment thus provides efficient storage and deep learning of background data, low computing requirements with high real-time performance and low delay, strong universality with convenient deployment, and accurate 3D bounding-box selection with high track-tracking precision, as summarized under the beneficial effects above.
It should be understood that the example is only for illustrating the present invention and is not intended to limit its scope. Furthermore, various changes and modifications may be made by those skilled in the art after reading the teachings of the present invention, and such equivalents likewise fall within the scope of the appended claims.

Claims (5)

1. A target track tracking method based on laser radar equipment, comprising:
S1: acquiring lidar three-dimensional point cloud data and branching according to the execution state;
S2: when the execution state is 3D scene background deep learning, performing deep learning on the three-dimensional point cloud data of each point feature in the 3D scene;
S3: when the execution state is a non-learning state, if the method is in the target-track-tracking state, analyzing the three-dimensional point cloud data of each frame, checking it against the deep-learned container data and eliminating background points to obtain target data for tracking;
wherein step S3 comprises the following steps:
setting parameter thresholds, filtering invalid three-dimensional point cloud data, performing point-information search and point-distance judgment, obtaining target points and placing them into a container A;
setting an outlier threshold and executing the outlier algorithm;
setting distance thresholds and performing several rounds of cluster segmentation on all points in container A to obtain a target point-set container B;
setting a point-count threshold for the outlineBox set, finding the center point of container B, and putting each target into its respective outlineBox set;
setting a threshold on the distance between the current point and the background points, and executing the point-information search and point-distance judgment as follows:
looking up the corresponding container through the key generated from the point information;
comparing the current depth value against each depth value in the container, and if an absolute distance smaller than the threshold is found in the point's depth container, the point is a background point, otherwise it is a target point;
marking the target point as a point to be processed and inserting it into the to-be-processed point-set container A;
creating a mask bit for each point, using an array with each element's initial state set to 0, and applying the outlier algorithm to the to-be-processed point-set container A:
setting the outlier threshold, computing the distance between the current point and the previous point, and if it is larger than the configured outlier threshold, identifying the point as an outlier and eliminating it;
when the clustering algorithm is applied to point-set container A, the distance thresholds set comprise:
a current-point X distance threshold, a current-point Y distance threshold and a current-point Z distance threshold;
traversing container A, taking out the first point, performing distance operations against the other points in container A, and marking their detected state;
judging the distance to a cluster's center point, storing and counting the point with that cluster and updating the cluster's center-point information when the distance is smaller than the distance threshold, otherwise not counting it with that cluster;
repeating in this way, and when the number of detected points is larger than the set number, a target is detected, and its point set is classified, stored and inserted into the target point-set container B;
solving the center point of the target point-set container B and setting the outlineBox-set point-count threshold;
when the number of points in container B reaches the outlineBox-set point-count threshold, the object is a valid target, and rendering and track tracking are executed;
setting a displacement threshold and judging the offset between the center points of the new frame and the previous frame: if it is smaller than the threshold the target is static, and if larger, the target is moving.
2. The target track tracking method based on laser radar equipment according to claim 1, wherein the lidar camera acquires the three-dimensional point cloud data in a streaming mode.
3. The target track tracking method based on laser radar equipment according to claim 1 or 2, wherein the deep learning process comprises:
setting parameter thresholds and filtering invalid three-dimensional point cloud data in the 3D scene;
creating a hashmap point-feature container; generating the keys of the hashmap point-feature container with a hashkey algorithm; generating a depth-feature container set for each point;
and performing depth-feature data retrieval for each point of each frame, learning any depth-feature data not yet learned and putting it into the depth-feature container set.
4. The target track tracking method based on laser radar equipment according to claim 3, wherein the set parameter thresholds comprise a minimum depth threshold for filtering the point cloud data stream, a maximum depth threshold for filtering the point cloud data stream, and a threshold on the calculated world-coordinate distance between two points.
5. The target track tracking method based on laser radar equipment according to claim 3, wherein the hashkey algorithm comprises:
converting the X-axis X coordinate information and the Y-axis Y coordinate information of each point to string form;
and splicing the X-axis and Y-axis coordinate values into a key, thereby generating the depth-feature data set of each point.
Application CN202310354196.3A, filed 2023-04-06: Target track tracking method based on laser radar equipment; granted as CN116071400B (Active).

Priority Application (1)

Application Number: CN202310354196.3A; Priority Date / Filing Date: 2023-04-06; Title: Target track tracking method based on laser radar equipment
Publications (2)

Publication Number Publication Date
CN116071400A 2023-05-05
CN116071400B 2023-07-18

Family
ID=86177134
Family Application (1): CN202310354196.3A, filed 2023-04-06, granted; Country: CN


Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant