CN116466355A - Point cloud target detection method and device and computer readable storage medium - Google Patents

Point cloud target detection method and device and computer readable storage medium

Info

Publication number
CN116466355A
Authority
CN
China
Prior art keywords
target
point cloud
detection
target detection
tracking track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310298342.5A
Other languages
Chinese (zh)
Inventor
樊强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Geely Holding Group Co Ltd
Ningbo Geely Automobile Research and Development Co Ltd
Original Assignee
Zhejiang Geely Holding Group Co Ltd
Ningbo Geely Automobile Research and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Geely Holding Group Co Ltd, Ningbo Geely Automobile Research and Development Co Ltd filed Critical Zhejiang Geely Holding Group Co Ltd
Priority to CN202310298342.5A priority Critical patent/CN116466355A/en
Publication of CN116466355A publication Critical patent/CN116466355A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/04 Systems determining the presence of a target
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/50 Systems of measurement based on relative movement of target
    • G01S17/58 Velocity or trajectory determination systems; Sense-of-movement determination systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/62 Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Abstract

The embodiment of the application discloses a point cloud target detection method, a point cloud target detection device and a computer readable storage medium, wherein the method comprises the following steps: acquiring a point cloud; detecting targets in the point cloud according to a first target detection method; tracking the detected targets, and determining whether a missed detection target exists according to how well the actual tracking track of a target matches its predicted tracking track; and, when a missed detection target is determined to exist, obtaining the target frame that was used to determine the predicted position of the target during tracking track prediction, taking it as a prior frame, and performing second target detection on the region within a preset range of the prior frame. This approach addresses the problem that laser radar target detection output is unstable.

Description

Point cloud target detection method and device and computer readable storage medium
Technical Field
The embodiments of the present application relate to a target detection technology, and in particular, to a method and apparatus for detecting a point cloud target, and a computer readable storage medium.
Background
Current target frame tracking methods include DeepSORT and deep multi-object tracking (MOT).
DeepSORT can mitigate missed target frames to a certain extent: if an object is detected normally in the preceding frames and then lost for 1-2 frames, the DeepSORT algorithm can still output the object for the missed frames. However, if multiple consecutive frames are lost, DeepSORT cannot cope, mainly because the algorithm cannot determine whether the target has actually disappeared or has merely been missed by the target detection.
Deep multi-object tracking likewise cannot handle the case in which detection of an object is lost for multiple consecutive frames, so the missed detection object cannot be output.
Disclosure of Invention
The embodiment of the application provides a point cloud target detection method, a point cloud target detection device and a computer readable storage medium, which can solve the problem of unstable laser radar target detection output.
The embodiment of the application provides a point cloud target detection method, which can include:
acquiring a point cloud;
detecting a target in the point cloud according to a preset first target detection method;
tracking the detected target, and determining whether a missed detection target exists according to the matching condition of the actual tracking track and the predicted tracking track of the target;
when a missed detection target is determined to exist, obtaining the target frame used for determining the predicted position of the target during tracking track prediction, taking it as a prior frame, and performing second target detection on the region within the preset range of the prior frame.
In an exemplary embodiment of the present application, the determining whether the missed detection target exists according to the matching situation of the actual tracking track and the predicted tracking track of the target may include:
detecting whether the actual tracking track is matched with the predicted tracking track or not according to a preset matching algorithm;
when the actual tracking track is matched with the predicted tracking track, determining that no missed detection target exists;
and when the actual tracking track is not matched with the predicted tracking track, determining that a missed detection target exists.
In an exemplary embodiment of the present application, the matching algorithm may include: hungarian matching algorithm.
In an exemplary embodiment of the present application, the method may further include:
in the target tracking process, judging whether the target disappears according to whether the tracking track disappears;
when the tracking track has not disappeared, judging that the target has not disappeared;
and when the tracking track disappears, judging that the target disappears.
In an exemplary embodiment of the present application, the performing the second target detection on the area within the preset range of the a priori frame may include:
expanding the prior frame into an expansion frame according to a preset proportion;
cropping out the point cloud within the expansion frame and recording it as a first point cloud;
and judging whether the first point cloud has a potential target or not by adopting a second target detection method.
In an exemplary embodiment of the present application, the determining, by using a second target detection method, whether the first point cloud has a potential target may include:
calculating the height of the first point cloud in the vertical direction;
removing the ground height from the height of the first point cloud in the vertical direction according to the ground height preset in the point cloud;
acquiring the point cloud corresponding to the remaining height;
calculating the point cloud number of the point cloud corresponding to the remaining height;
and determining whether the potential target exists in the area where the first point cloud is located according to the point cloud number.
In an exemplary embodiment of the present application, the determining, according to the number of point clouds, whether the potential target exists in the area where the first point cloud is located may include:
when the number of the point clouds is larger than or equal to a preset number threshold, confirming that the potential target exists in the area where the first point cloud is located;
and when the number of the point clouds is smaller than the number threshold, confirming that the potential target does not exist in the area where the first point cloud is located.
In an exemplary embodiment of the present application, after performing the second target detection on the area within the preset range of the a priori frame, the method may further include:
when it is judged that the potential target does not exist in the area where the first point cloud is located, directly outputting the target detection result of the potential target;
when the potential targets exist in the area where the first point cloud is located, performing second target detection on the first point cloud by adopting a preset deep learning target detection method, and outputting corresponding target detection results according to the detection results of the second target detection on the first point cloud.
In an exemplary embodiment of the present application, outputting a corresponding target detection result according to a detection result of performing the second target detection on the first point cloud may include:
if no target is detected when the first point cloud is subjected to the second target detection, adopting the first target detection algorithm to perform the third target detection on the first point cloud, and outputting a corresponding target detection result;
if a target is detected when the second target detection is performed on the first point cloud and only one target is detected, directly outputting that one target as the target detection result;
and if targets are detected when the second target detection is performed on the first point cloud and a plurality of targets are detected, outputting the target frame with the maximum intersection-over-union (IOU) among the target frames of the targets as the target detection result.
The embodiment of the application also provides a point cloud target detection device, which can comprise a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the point cloud target detection method is realized when the processor executes the computer program.
The embodiment of the application also provides a computer readable storage medium, on which a computer program is stored, which is characterized in that the computer program realizes the point cloud target detection method when being executed by a processor.
Compared with the related art, the point cloud target detection method of the embodiment of the application may include the following steps: acquiring a point cloud; detecting targets in the point cloud according to a preset first target detection method; tracking the detected targets, and determining whether a missed detection target exists according to how well the actual tracking track of a target matches its predicted tracking track; and, when a missed detection target is determined to exist, obtaining the target frame that was used to determine the predicted position of the target during tracking track prediction, taking it as a prior frame, and performing second target detection on the region within a preset range of the prior frame. This approach addresses the problem that laser radar target detection output is unstable.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application. Other advantages of the present application may be realized and attained by the structure particularly pointed out in the written description and drawings.
Drawings
The accompanying drawings are included to provide an understanding of the technical aspects of the present application, are incorporated in and constitute a part of this specification, and together with the embodiments of the present application serve to explain the technical aspects of the present application; they do not constitute a limitation of those technical aspects.
Fig. 1 is a flowchart of a point cloud target detection method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a method for performing a second target detection on an area within a preset range of a priori frame according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a first target detection method according to an embodiment of the present application;
fig. 4 is a block diagram of a point cloud object detection apparatus according to an embodiment of the present application.
Detailed Description
The present application describes a number of embodiments, but the description is illustrative and not limiting and it will be apparent to those of ordinary skill in the art that many more embodiments and implementations are possible within the scope of the embodiments described herein. Although many possible combinations of features are shown in the drawings and discussed in the detailed description, many other combinations of the disclosed features are possible. Any feature or element of any embodiment may be used in combination with or in place of any other feature or element of any other embodiment unless specifically limited.
The present application includes and contemplates combinations of features and elements known to those of ordinary skill in the art. The embodiments, features and elements of the present disclosure may also be combined with any conventional features or elements to form a unique inventive arrangement as defined in the claims. Any feature or element of any embodiment may also be combined with features or elements from other inventive arrangements to form another unique inventive arrangement as defined in the claims. Thus, it should be understood that any of the features shown and/or discussed in this application may be implemented alone or in any suitable combination. Accordingly, the embodiments are not to be restricted except in light of the attached claims and their equivalents. Further, various modifications and changes may be made within the scope of the appended claims.
Furthermore, in describing representative embodiments, the specification may have presented the method and/or process as a particular sequence of steps. However, to the extent that the method or process does not rely on the particular order of steps set forth herein, the method or process should not be limited to the particular sequence of steps described. Other sequences of steps are possible as will be appreciated by those of ordinary skill in the art. Accordingly, the particular order of the steps set forth in the specification should not be construed as limitations on the claims. Furthermore, the claims directed to the method and/or process should not be limited to the performance of their steps in the order written, and one skilled in the art can readily appreciate that the sequences may be varied and still remain within the spirit and scope of the embodiments of the present application.
The embodiment of the application provides a point cloud target detection method; as shown in fig. 1, the method may include steps S101-S104:
s101, acquiring point cloud;
s102, detecting targets in the point cloud according to a preset first target detection method;
s103, carrying out target tracking on the detected target, and determining whether a missed detection target exists according to the matching condition of the actual tracking track and the predicted tracking track of the target;
and S104, when a missed detection target is determined to exist, acquiring the target frame used for determining the predicted position of the target during tracking track prediction, taking it as a prior frame, and performing second target detection on the region within the preset range of the prior frame.
Current lidar target detection suffers from missed detections, and when a missed detection occurs, the subsequent target fusion module is prone to output instability, such as producing no result or an unstable output target speed. The embodiment of the application introduces trajectory tracking to provide a prior frame and performs secondary target detection on the region where the prior frame is located, thereby addressing the problem of unstable lidar target detection output, so that an accurate target detection result can be output for every frame as far as possible.
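To make the overall flow of steps S101-S104 concrete, the following Python sketch shows one possible per-frame processing loop. It is only an illustration: the names first_stage_detect, second_stage_detect and the Tracker interface (update, predicted_box) are assumptions introduced here, not part of the disclosed embodiment.

```python
def process_frame(point_cloud, tracker, first_stage_detect, second_stage_detect):
    """Hypothetical per-frame pipeline for steps S101-S104 (illustrative sketch only)."""
    # S102: detect targets in the acquired point cloud with the first detection method.
    detections = first_stage_detect(point_cloud)

    # S103: track the detected targets; match actual tracks against predicted tracks.
    # Matched tracks are assumed to have their states updated inside tracker.update
    # (e.g. by an averaging strategy); unmatched detections would initialize new tracks.
    matches, unmatched_tracks, unmatched_detections = tracker.update(detections)

    results = list(detections)
    # S104: each unmatched predicted track hints at a possible missed detection;
    # its predicted target frame serves as a prior frame for a second detection pass.
    for track in unmatched_tracks:
        prior_box = track.predicted_box()
        recovered = second_stage_detect(point_cloud, prior_box)
        if recovered is not None:
            results.append(recovered)
    return results
```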
In an exemplary embodiment of the present application, the target detection algorithm (e.g., the first target detection algorithm described above and the second target detection algorithm described below) may be a conventional target detection algorithm, which is used to detect targets in a point cloud, such as people, vehicles and non-motor vehicles; through target detection, information such as the location and class of each target may be obtained.
In an exemplary embodiment of the present application, the target tracking may be performed on the result of the target detection described above: the tracking trajectory of each target is recorded based on history information (e.g., history positions, history times when at different history positions, etc.) of successive frames of the detected target.
In the exemplary embodiment of the present application, during the target tracking process, at the initial tracking position the number of point clouds associated with the target tracking trajectory is 0, but as the tracking time increases, the number of point clouds associated with the tracking trajectory gradually increases.
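The predicted tracking track mentioned above can be produced by any motion model; the embodiment does not prescribe a particular predictor. The sketch below is a minimal example assuming a constant-velocity model over the recorded history positions and timestamps.

```python
import numpy as np

def predict_next_position(history_xy, history_t):
    """Constant-velocity prediction from a track's recorded history.

    history_xy: (N, 2) array of past target centers; history_t: (N,) timestamps.
    This is an assumed illustrative model, not the predictor of the embodiment.
    """
    history_xy = np.asarray(history_xy, dtype=float)
    history_t = np.asarray(history_t, dtype=float)
    if len(history_xy) < 2:
        return history_xy[-1]  # not enough history: hold the last known position
    dt = history_t[-1] - history_t[-2]
    velocity = (history_xy[-1] - history_xy[-2]) / dt
    # Assume the next frame arrives after a similar time interval.
    return history_xy[-1] + velocity * dt
```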
In an exemplary embodiment of the present application, the method may further include:
in the target tracking process, judging whether the target disappears according to whether the tracking track disappears;
when the tracking track has not disappeared, judging that the target has not disappeared;
and when the tracking track disappears, judging that the target disappears.
In the exemplary embodiment of the present application, if a target remains absent for a certain period of time, its tracking trajectory also disappears. Therefore, when the disappearance of the tracking trajectory is detected, it can be determined that the detected target has disappeared.
In an exemplary embodiment of the present application, if the detected target does not disappear, it may be determined whether there is a missed target according to a matching condition of an actual tracking track and a predicted tracking track of the target.
In an exemplary embodiment of the present application, the determining whether the missed detection target exists according to the matching situation of the actual tracking track and the predicted tracking track of the target may include:
detecting whether the actual tracking track is matched with the predicted tracking track or not according to a preset matching algorithm;
when the actual tracking track is matched with the predicted tracking track, determining that no missed detection target exists;
and when the actual tracking track is not matched with the predicted tracking track, determining that a missed detection target exists.
In an exemplary embodiment of the present application, the matching algorithm may include, but is not limited to: hungarian matching algorithm.
In an exemplary embodiment of the present application, if the actual tracking track and the predicted tracking track can be matched, it may be determined that no missed detection target exists, and the state of the corresponding tracking track may be updated, for example by an averaging strategy.
In an exemplary embodiment of the present application, if the actual tracking trajectory and the predicted tracking trajectory cannot match, it may be determined that there may be a missed target.
In the exemplary embodiment of the present application, if it is determined that a certain detected target has no matching actual tracking track, it may be considered that the target does not yet have a corresponding track; a corresponding track may therefore be initialized for the target, and the detection result data of the target whose tracking track is unmatched may be assigned to the track state of that new track.
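One common way to realize the track-to-detection matching described above (for example with the Hungarian matching algorithm) is scipy's linear_sum_assignment, as in the sketch below. The Euclidean center-distance cost and the 2.0 m gating threshold are illustrative assumptions, not values from the embodiment; unmatched tracks correspond to possible missed detections, and unmatched detections would initialize new tracks.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_tracks_to_detections(pred_centers, det_centers, max_dist=2.0):
    """Hungarian assignment between predicted track centers and detection centers.

    Returns (matches, unmatched_track_indices, unmatched_detection_indices).
    The distance cost and the 2.0 m gate are assumed illustrative values.
    """
    pred = np.asarray(pred_centers, dtype=float)
    det = np.asarray(det_centers, dtype=float)
    if len(pred) == 0 or len(det) == 0:
        return [], list(range(len(pred))), list(range(len(det)))

    # Pairwise Euclidean distances between predicted and detected centers.
    cost = np.linalg.norm(pred[:, None, :] - det[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)

    matches = []
    unmatched_tracks, unmatched_dets = set(range(len(pred))), set(range(len(det)))
    for r, c in zip(rows, cols):
        if cost[r, c] <= max_dist:  # reject assignments beyond the gate
            matches.append((r, c))
            unmatched_tracks.discard(r)
            unmatched_dets.discard(c)
    return matches, sorted(unmatched_tracks), sorted(unmatched_dets)
```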
In an exemplary embodiment of the present application, as shown in fig. 2, when there may be a missed detection target, the target frame corresponding to the predicted tracking track may be taken as a prior frame, and the area where the prior frame is located and/or the area near the prior frame is considered very likely to contain a potential target (also called a missed detection target) that the current target detection did not detect. Therefore, secondary target detection can be performed on the area where the prior frame is located and the nearby area.
In an exemplary embodiment of the present application, the performing the second target detection on the area within the preset range of the a priori frame may include the steps of:
expanding the prior frame into an expansion frame according to a preset proportion;
cropping out the point cloud within the expansion frame and recording it as a first point cloud;
and judging whether the first point cloud has a potential target or not by adopting a second target detection method.
In an exemplary embodiment of the present application, the prior frame may be appropriately expanded, for example by 50%, to obtain the expansion frame, with the constraint that the expanded area must not intrude into the area where any actual tracking track is located. The point cloud in the area covered by the expansion frame is then taken out and named the first point cloud.
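A minimal sketch of the expansion-and-crop step is given below, assuming axis-aligned boxes in the form (x_min, y_min, x_max, y_max) and a point cloud stored as an (N, 3) NumPy array. The 50% expansion follows the example above; the constraint that the expanded area must not intrude into other actual tracking tracks is noted but not implemented here.

```python
import numpy as np

def expand_box(box, ratio=0.5):
    """Expand an axis-aligned box (x_min, y_min, x_max, y_max) by `ratio` of its size.

    The 50% ratio mirrors the example above; checking that the expanded area does
    not intrude into other actual tracking tracks is omitted from this sketch.
    """
    x_min, y_min, x_max, y_max = box
    dx = (x_max - x_min) * ratio / 2.0
    dy = (y_max - y_min) * ratio / 2.0
    return (x_min - dx, y_min - dy, x_max + dx, y_max + dy)

def crop_points(points, box):
    """Return the points of an (N, 3) cloud whose x/y fall inside the box (the 'first point cloud')."""
    x_min, y_min, x_max, y_max = box
    mask = (
        (points[:, 0] >= x_min) & (points[:, 0] <= x_max)
        & (points[:, 1] >= y_min) & (points[:, 1] <= y_max)
    )
    return points[mask]
```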
In an exemplary embodiment of the present application, the determining, by using a second target detection method, whether the first point cloud has a potential target may include:
calculating the height of the first point cloud in the vertical direction;
removing the ground height from the height of the first point cloud in the vertical direction according to the ground height preset in the point cloud;
acquiring the point cloud corresponding to the remaining height;
calculating the point cloud number of the point cloud corresponding to the remaining height;
and determining whether the potential target exists in the area where the first point cloud is located according to the point cloud number.
In an exemplary embodiment of the present application, the determining, according to the number of point clouds, whether the potential target exists in the area where the first point cloud is located may include:
when the number of the point clouds is larger than or equal to a preset number threshold, confirming that the potential target exists in the area where the first point cloud is located;
and when the number of the point clouds is smaller than the number threshold, confirming that the potential target does not exist in the area where the first point cloud is located.
In the exemplary embodiment of the present application, when determining whether the first point cloud point_loop contains a potential target, the ground height is filtered out of the vertical extent of the first point cloud; the ground height may be set to 0-0.4 m (meters), so that the ground point cloud is filtered out. If points still remain after the ground point cloud is filtered and their number is greater than or equal to a number threshold (for example 80-150, where 100 may be selected), the area where the expansion frame is located is considered to contain a potential target; conversely, if the number of points is less than the number threshold, it may be determined that no potential target is present.
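The ground filtering and point-count check can be sketched as follows; the 0.4 m ground height and the threshold of 100 points are taken from the example values above, and a real system may use a more elaborate ground model.

```python
import numpy as np

def has_potential_target(first_points, ground_height=0.4, count_threshold=100):
    """Decide whether the cropped 'first point cloud' likely contains a potential target.

    Points with z at or below `ground_height` are treated as ground and removed; a
    potential target is assumed if at least `count_threshold` points remain. The
    0.4 m and 100-point values follow the example ranges given in the text.
    """
    first_points = np.asarray(first_points, dtype=float)
    if first_points.size == 0:
        return False
    above_ground = first_points[first_points[:, 2] > ground_height]  # strip the 0-0.4 m ground band
    return len(above_ground) >= count_threshold
```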
In an exemplary embodiment of the present application, after performing the second target detection on the area within the preset range of the a priori frame, the method may further include:
when it is judged that the potential target does not exist in the area where the first point cloud is located, directly outputting the target detection result of the potential target;
when the potential targets exist in the area where the first point cloud is located, performing second target detection on the first point cloud by adopting a preset deep learning target detection method, and outputting corresponding target detection results according to the detection results of the second target detection on the first point cloud.
In an exemplary embodiment of the present application, outputting a corresponding target detection result according to a detection result of performing the second target detection on the first point cloud may include:
if no target is detected when the first point cloud is subjected to the second target detection, adopting the first target detection algorithm to perform the third target detection on the first point cloud, and outputting a corresponding target detection result;
if a target is detected when the second target detection is performed on the first point cloud and only one target is detected, directly outputting that one target as the target detection result;
and if targets are detected when the second target detection is performed on the first point cloud and a plurality of targets are detected, outputting the target frame with the maximum intersection-over-union (IOU) among the target frames of the targets as the target detection result.
In an exemplary embodiment of the present application, if it is determined that there is a potential target, the second target detection is performed on the first point cloud using a new target detection scheme, such as a deep learning target detection model. The second target detection may detect no target at all, in which case no target frame is acquired and the third target detection may be performed on the first point cloud; it may detect exactly one target, in which case the corresponding target frame is acquired and that unique target detection result is output directly; or it may detect multiple targets, in which case the corresponding target frames are acquired, the intersection-over-union (IOU) of the target frames is calculated, and the target frame corresponding to the maximum IOU is acquired and output as the target detection result.
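The selection among these three outcomes of the second detection can be sketched as below. Computing the IOU of each candidate frame against the prior frame is an assumption made for illustration, since the reference frame for the IOU comparison is not spelled out in the text.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x_min, y_min, x_max, y_max)."""
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def select_output_box(candidate_boxes, prior_box):
    """Pick the candidate target frame to output.

    None means no target was found (the third detection would follow); a single
    candidate is returned directly; among several candidates the one with the
    maximum IOU is returned. Comparing against the prior frame is an assumption.
    """
    if not candidate_boxes:
        return None
    if len(candidate_boxes) == 1:
        return candidate_boxes[0]
    return max(candidate_boxes, key=lambda b: iou(b, prior_box))
```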
In an exemplary embodiment of the present application, when the third target detection is performed on the first point cloud, the first target detection method may still be used for the detection.
In an exemplary embodiment of the present application, the first target detection method may proceed as follows: the prior frame and the first point cloud are known at this point, and the azimuth of the potential target relative to the lidar (laser radar) can be determined from the prior frame. As shown in fig. 3, taking the case where the potential target in the prior frame is a vehicle located at the upper left of the lidar, the first surface 1 and the second surface 2 of the vehicle can be observed by the lidar, while the third surface 3 and the fourth surface 4 cannot. The position of the first point cloud point_loop is then examined: if points appear in the areas of surfaces 1 and 2 and no first point cloud appears in the areas of surfaces 3 and 4, it is considered that a potential target exists in the prior frame of the target; otherwise, no potential target exists in the prior frame of the target, and the final target detection result is output.
In the exemplary embodiment of the present application, the method used for the third target detection may be a conventional clustering algorithm, including but not limited to Euclidean clustering, with the largest circumscribed rectangular frame taken as the prior frame.
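For the clustering-based third detection, a density-based clusterer such as scikit-learn's DBSCAN can stand in for Euclidean clustering; returning the axis-aligned bounding rectangle of the largest cluster is one interpretation of taking "the largest circumscribed rectangular frame" above. The eps radius and minimum cluster size below are assumed illustrative parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_and_box(points_xy, eps=0.5, min_samples=5):
    """Cluster the x/y projection of the first point cloud and return the bounding
    rectangle (x_min, y_min, x_max, y_max) of the largest cluster, or None.

    DBSCAN is used here as a stand-in for Euclidean clustering; eps and
    min_samples are assumed values, not parameters specified in the text.
    """
    points_xy = np.asarray(points_xy, dtype=float)
    if len(points_xy) == 0:
        return None
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xy)
    valid = labels[labels >= 0]  # drop noise points labeled -1
    if valid.size == 0:
        return None
    largest = np.bincount(valid).argmax()  # label of the biggest cluster
    cluster = points_xy[labels == largest]
    return (cluster[:, 0].min(), cluster[:, 1].min(),
            cluster[:, 0].max(), cluster[:, 1].max())
```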
In the exemplary embodiment of the present application, if the final detection result for the potential target is null, that is, no potential target is detected, the track state of the target is updated and marked as a target-track-lost state, and when the target is lost for consecutive frames, the track may be deleted. If the final detection result for the potential target is not null and there is only one potential target, the original target track state is left unchanged and the current state is maintained, and the detected potential target is added to the track for continued tracking.
The embodiment of the application further provides a point cloud target detection device 100 which, as shown in fig. 4, may include a memory 110, a processor 120, and a computer program stored in the memory 110 and capable of running on the processor 120, wherein the processor 120 implements the point cloud target detection method when executing the computer program.
In the exemplary embodiments of the present application, any embodiment of the foregoing point cloud target detection method is applicable to the embodiment of the device, and will not be described herein in detail.
The embodiment of the application also provides a computer readable storage medium, on which a computer program is stored, which is characterized in that the computer program realizes the point cloud target detection method when being executed by a processor.
In the exemplary embodiments of the present application, any embodiment of the foregoing point cloud target detection method is applicable to the computer readable storage medium embodiment, and will not be described herein in detail.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, functional modules/units in the apparatus, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.

Claims (10)

1. A method for detecting a point cloud target, the method comprising:
acquiring a point cloud;
detecting a target in the point cloud according to a preset first target detection method;
tracking the detected target, and determining whether a missed detection target exists according to the matching condition of the actual tracking track and the predicted tracking track of the target;
when a missed detection target is determined to exist, obtaining the target frame used for determining the predicted position of the target during tracking track prediction, taking it as a prior frame, and performing second target detection on the region within the preset range of the prior frame.
2. The method for detecting a point cloud object according to claim 1, wherein determining whether a missed detection target exists according to a matching condition of an actual tracking trajectory and a predicted tracking trajectory of the object comprises:
detecting whether the actual tracking track is matched with the predicted tracking track or not according to a preset matching algorithm;
when the actual tracking track is matched with the predicted tracking track, determining that no missed detection target exists;
and when the actual tracking track is not matched with the predicted tracking track, determining that a missed detection target exists.
3. The point cloud target detection method of claim 1, further comprising:
in the target tracking process, judging whether the target disappears according to whether the tracking track disappears;
when the tracking track has not disappeared, judging that the target has not disappeared;
and when the tracking track disappears, judging that the target disappears.
4. A method for detecting a point cloud object according to any one of claims 1 to 3, wherein the performing the second object detection on the area within the preset range of the prior frame includes:
expanding the prior frame into an expansion frame according to a preset proportion;
cropping out the point cloud within the expansion frame and recording it as a first point cloud;
and judging whether the first point cloud has a potential target or not by adopting a second target detection method.
5. The method of detecting a point cloud object according to claim 4, wherein the determining whether the first point cloud has a potential object by using a second object detecting method includes:
calculating the height of the first point cloud in the vertical direction;
removing the ground height from the height of the first point cloud in the vertical direction according to the ground height preset in the point cloud;
acquiring the point cloud corresponding to the remaining height;
calculating the point cloud number of the point cloud corresponding to the remaining height;
and determining whether the potential target exists in the area where the first point cloud is located according to the point cloud number.
6. The method for detecting a point cloud object according to claim 5, wherein the determining whether the potential object exists in the area where the first point cloud exists according to the point cloud number includes:
when the number of the point clouds is larger than or equal to a preset number threshold, confirming that the potential target exists in the area where the first point cloud is located;
and when the number of the point clouds is smaller than the number threshold, confirming that the potential target does not exist in the area where the first point cloud is located.
7. The method of point cloud object detection according to claim 4, further comprising, after performing the second object detection on the area within the predetermined range of the prior frame:
when it is judged that the potential target does not exist in the area where the first point cloud is located, directly outputting the target detection result of the potential target;
when the potential targets exist in the area where the first point cloud is located, performing second target detection on the first point cloud by adopting a preset deep learning target detection method, and outputting corresponding target detection results according to the detection results of the second target detection on the first point cloud.
8. The method of detecting a point cloud object according to claim 7, wherein outputting a corresponding object detection result according to a detection result of performing the second object detection on the first point cloud comprises:
if no target is detected when the first point cloud is subjected to the second target detection, adopting the first target detection algorithm to perform the third target detection on the first point cloud, and outputting a corresponding target detection result;
if a target is detected when the second target detection is performed on the first point cloud and only one target is detected, directly outputting that one target as the target detection result;
and if targets are detected when the second target detection is performed on the first point cloud and a plurality of targets are detected, outputting the target frame with the maximum intersection-over-union (IOU) among the target frames of the targets as the target detection result.
9. A point cloud object detection apparatus comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the point cloud object detection method according to any one of claims 1-8 when executing the computer program.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the point cloud object detection method according to any of claims 1-8.
CN202310298342.5A 2023-03-23 2023-03-23 Point cloud target detection method and device and computer readable storage medium Pending CN116466355A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310298342.5A CN116466355A (en) 2023-03-23 2023-03-23 Point cloud target detection method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310298342.5A CN116466355A (en) 2023-03-23 2023-03-23 Point cloud target detection method and device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN116466355A true CN116466355A (en) 2023-07-21

Family

ID=87172637

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310298342.5A Pending CN116466355A (en) 2023-03-23 2023-03-23 Point cloud target detection method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN116466355A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117218123A (en) * 2023-11-09 2023-12-12 上海擎刚智能科技有限公司 Cold-rolled strip steel wire flying equipment fault detection method and system based on point cloud
CN117218123B (en) * 2023-11-09 2024-02-02 上海擎刚智能科技有限公司 Cold-rolled strip steel wire flying equipment fault detection method and system based on point cloud

Similar Documents

Publication Publication Date Title
CN106991389B (en) Device and method for determining road edge
US20180165525A1 (en) Traveling lane determining device and traveling lane determining method
CN110286389B (en) Grid management method for obstacle identification
CN116466355A (en) Point cloud target detection method and device and computer readable storage medium
Schubert et al. Generalized probabilistic data association for vehicle tracking under clutter
CN113269811A (en) Data fusion method and device and electronic equipment
CN115641454A (en) Target tracking method and device, electronic equipment and computer readable storage medium
CN112863242A (en) Parking space detection method and device
US11176379B2 (en) Method of acquiring detection zone in image and method of determining zone usage
CN114882491B (en) Non-motor vehicle target tracking method and device and electronic equipment
CN112560664B (en) Method, device, medium and electronic equipment for intrusion detection in forbidden area
JP7334632B2 (en) Object tracking device and object tracking method
CN114359346A (en) Point cloud data processing method and device, nonvolatile storage medium and processor
CN112200868A (en) Positioning method and device and vehicle
US20220067395A1 (en) Vehicles, Systems and Methods for Determining an Occupancy Map of a Vicinity of a Vehicle
EP4145407A1 (en) Vehicles, systems and methods for determining an occupancy map of a vicinity of a vehicle
CN112906495B (en) Target detection method and device, electronic equipment and storage medium
CN116605212B (en) Vehicle control method, device, computer equipment and storage medium
CN116625384B (en) Data association method and device and electronic equipment
CN116228820B (en) Obstacle detection method and device, electronic equipment and storage medium
CN112906424A (en) Image recognition method, device and equipment
CN116000924A (en) Robot control method, robot control device, robot and computer readable storage medium
CN116092290A (en) Automatic correction and supplementation method and system for acquired data
Danescu et al. Partid–individual objects tracking in occupancy grids using particle identities
JP2023059822A (en) Apparatus and method for detecting vehicle movement state, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination