CN116597417A - Obstacle movement track determining method, device, equipment and storage medium

Obstacle movement track determining method, device, equipment and storage medium

Info

Publication number
CN116597417A
Authority
CN
China
Prior art keywords
detection frame
frame
point cloud
obstacle
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310552821.5A
Other languages
Chinese (zh)
Other versions
CN116597417B (en)
Inventor
Yan Haixu (严海旭)
Lan Xiaosong (兰晓松)
Liu Yi (刘羿)
He Bei (何贝)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sinian Zhijia Technology Co., Ltd.
Original Assignee
Beijing Sinian Zhijia Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Beijing Sinian Zhijia Technology Co., Ltd.
Priority to CN202310552821.5A
Publication of CN116597417A
Application granted
Publication of CN116597417B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/60: Type of objects
    • G06V20/64: Three-dimensional objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a method, an apparatus, a device, and a storage medium for determining the movement track of an obstacle. At least one continuous frame of point cloud images is acquired. For each frame, target detection is performed on at least one obstacle contained in that frame to obtain at least one detection frame. For every two consecutive frames of point cloud images, the intersection ratio between each preceding detection frame and each subsequent detection frame is calculated. For each preceding detection frame, a target detection frame in the later of the two frames is determined according to these intersection ratios. Finally, the motion trail of the target obstacle is determined from the target detection frames in the later frame of every two consecutive frames. With this method, the accuracy of the determined obstacle movement track is improved.

Description

Obstacle movement track determining method, device, equipment and storage medium
Technical Field
The present application relates to the field of image detection, and in particular, to a method, apparatus, device, and storage medium for determining a motion trajectory of an obstacle.
Background
With the rapid development of autonomous driving, target detection has become an indispensable part of the technology. During vehicle planning, data acquired by sensors such as cameras, lidar, and radar are used by target detection techniques to identify obstacles in the vehicle's environment, and the vehicle's path is planned according to the motion trajectories of the identified obstacles. Avoiding obstacles in this way improves the safety and intelligence of the autonomous vehicle.
In the prior art, a sliding window method is generally used to determine the movement track of an obstacle. The sliding window method is a common target detection approach: a window of fixed size slides over the image at a certain step length, a classifier judges at each window position whether the window contains the target obstacle, and the movement track is then derived from the position of the target obstacle in each window. However, this method has high computational complexity. Moreover, when the size or shape of the target obstacle changes significantly, the obstacle may not fit entirely within one window, so the judgment of whether a window contains the target obstacle becomes erroneous, the movement track cannot be obtained correctly from the obstacle's position in the window, and the accuracy of the resulting track is reduced.
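For illustration, a minimal sketch of the sliding-window procedure described above; the window size, stride, and classifier are illustrative assumptions, not details taken from this application:

```python
import numpy as np

def sliding_window_detect(image: np.ndarray, classify, win=64, stride=16):
    """Slide a fixed-size window over the image and record hit positions."""
    hits = []
    h, w = image.shape[:2]
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            patch = image[y:y + win, x:x + win]
            if classify(patch):          # classifier judges each window position
                hits.append((x, y))      # the track is derived from these positions
    return hits

# Usage with a trivial stand-in classifier
hits = sliding_window_detect(np.zeros((128, 128)), classify=lambda p: p.mean() > 0.5)
```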
Disclosure of Invention
In view of the above, the present application aims to provide a method, an apparatus, a device, and a storage medium for determining the movement track of an obstacle, so as to improve the accuracy of the determined track.
In a first aspect, an embodiment of the present application provides a method for determining a motion trajectory of an obstacle, where the method includes:
acquiring at least one continuous frame of point cloud images, wherein each frame of point cloud image comprises at least one obstacle;
for each frame of the point cloud image, respectively carrying out target detection on at least one obstacle contained in the frame of the point cloud image to obtain at least one detection frame, wherein each detection frame contains one obstacle;
for each two continuous frames of point cloud images, calculating the intersection ratio of each preceding detection frame and each subsequent detection frame in the two frames of point cloud images, wherein the preceding detection frame is the detection frame in the previous frame of point cloud image in the two frames of point cloud images, and the subsequent detection frame is the detection frame in the subsequent frame of point cloud image in the two frames of point cloud images;
for each prior detection frame in the two-frame point cloud image, determining a target detection frame in a subsequent frame point cloud image in the two-frame point cloud image according to the intersection ratio of the prior detection frame and each subsequent detection frame, wherein the target detection frame is a detection frame containing a target obstacle, and the target obstacle is an obstacle contained in the prior detection frame;
And determining the motion trail of the target obstacle according to the target detection frame in the next frame of point cloud image in every two continuous frames of point cloud images.
Optionally, for each frame of the point cloud image, performing object detection on at least one obstacle included in the frame of the point cloud image to obtain at least one detection frame, where the method includes:
and for each frame of the point cloud image, inputting the frame of the point cloud image into a point cloud target detection model, and respectively carrying out target detection on at least one obstacle contained in the frame of the point cloud image to obtain at least one detection frame.
Optionally, for each previous detection frame in the two-frame point cloud image, determining the target detection frame in the next-frame point cloud image in the two-frame point cloud image according to the intersection ratio of the previous detection frame and each next-frame detection frame includes:
judging whether the maximum intersection ratio of the prior detection frames exceeds a preset intersection ratio threshold value for the target obstacle or not for each prior detection frame in the two-frame point cloud image, wherein the maximum intersection ratio of the prior detection frames is the maximum value in the intersection ratio between the prior detection frames and each subsequent detection frame respectively;
And if the maximum intersection ratio of the preceding detection frame exceeds the intersection ratio threshold value preset for the target obstacle, determining a subsequent detection frame with the maximum intersection ratio with the preceding detection frame as the target detection frame.
Optionally, after determining, for each preceding detection frame in the two-frame point cloud image, whether the maximum intersection ratio of the preceding detection frame exceeds an intersection ratio threshold value preconfigured for the target obstacle, the method further includes:
if the maximum intersection ratio of the prior detection frame does not exceed the intersection ratio threshold value preconfigured for the target obstacle, judging whether the distance between the nearest center points of the prior detection frame exceeds the center point distance threshold value preconfigured for the target obstacle, wherein the distance between the nearest center points of the prior detection frame is the minimum value in the distances between the center points of the prior detection frame and the center points of each subsequent detection frame respectively;
and if the nearest center point distance of the prior detection frame does not exceed the preset center point distance threshold value for the target obstacle, determining a subsequent detection frame with the center point distance between the center point and the center point of the prior detection frame meeting the nearest center point distance as the target detection frame.
Optionally, after determining whether the nearest center distance of the preceding detection frame exceeds the center distance threshold preconfigured for the target obstacle if the maximum intersection ratio of the preceding detection frame does not exceed the intersection ratio threshold preconfigured for the target obstacle, the method further includes:
if the nearest center point distance of the preceding detection frame exceeds a preset center point distance threshold value for the target obstacle, judging whether a subsequent detection frame with the maximum intersection ratio with the preceding detection frame and a subsequent detection frame with a center point and a center point of the preceding detection frame meet the nearest center point distance or not are the same detection frame;
and if the subsequent detection frame with the maximum intersection ratio with the previous detection frame and the subsequent detection frame with the distance between the center point and the center point of the previous detection frame meeting the distance of the nearest center point are the same detection frame, determining the subsequent detection frame with the maximum intersection ratio with the previous detection frame as the target detection frame.
Optionally, after determining whether the subsequent detection frame having the maximum intersection ratio with the preceding detection frame and the subsequent detection frame having the center point and the center point of the preceding detection frame satisfy the closest center point distance if the closest center point distance of the preceding detection frame exceeds a center point distance threshold value configured in advance for the target obstacle, the method further includes:
And if the subsequent detection frame with the maximum intersection ratio with the prior detection frame and the subsequent detection frame with the distance between the center point and the center point of the prior detection frame meeting the distance between the nearest center points are not the same detection frame, determining the subsequent detection frame with the distance between the center point and the center point of the prior detection frame not exceeding a preset deviation value as the target detection frame.
Optionally, the determining the motion trail of the target obstacle according to the target detection frame in the next frame of point cloud images in every two consecutive frames of point cloud images includes:
and performing curve fitting on the central point of the target detection frame in the next frame of point cloud image in every two continuous frames of point cloud images to obtain the motion trail of the target obstacle.
In a second aspect, an embodiment of the present application provides an obstacle movement trajectory determining device, including:
the point cloud image acquisition module is used for acquiring at least one continuous frame of point cloud image, wherein each frame of point cloud image comprises at least one obstacle;
the detection frame determining module is used for respectively carrying out target detection on at least one obstacle contained in each frame of the point cloud image to obtain at least one detection frame, wherein each detection frame contains one obstacle;
The intersection ratio determining module is used for respectively calculating the intersection ratio of each previous detection frame and each subsequent detection frame in the two-frame point cloud image for each two-frame point cloud image, wherein the previous detection frame is the detection frame in the previous frame point cloud image in the two-frame point cloud image, and the subsequent detection frame is the detection frame in the subsequent frame point cloud image in the two-frame point cloud image;
the target detection frame determining module is used for determining, for each preceding detection frame in the two frames of point cloud images, a target detection frame in the subsequent frame of the two frames according to the intersection ratio of the preceding detection frame and each subsequent detection frame, wherein the target detection frame is a detection frame containing a target obstacle, and the target obstacle is the obstacle contained in the preceding detection frame;
and the motion track determining module is used for determining the motion track of the target obstacle according to the target detection frame in the next frame of point cloud image in every two continuous frames of point cloud images.
Optionally, the detection frame determining module is configured to, when configured to, for each frame of the point cloud image, perform object detection on at least one obstacle included in the frame of the point cloud image to obtain at least one detection frame, respectively:
And for each frame of the point cloud image, inputting the frame of the point cloud image into a point cloud target detection model, and respectively carrying out target detection on at least one obstacle contained in the frame of the point cloud image to obtain at least one detection frame.
Optionally, the target detection frame determining module is configured to, when determining, for each previous detection frame in the two-frame point cloud image, a target detection frame in a subsequent frame point cloud image in the two-frame point cloud image according to an intersection ratio of the previous detection frame and each subsequent detection frame, specifically:
judging whether the maximum intersection ratio of the prior detection frames exceeds a preset intersection ratio threshold value for the target obstacle or not for each prior detection frame in the two-frame point cloud image, wherein the maximum intersection ratio of the prior detection frames is the maximum value in the intersection ratio between the prior detection frames and each subsequent detection frame respectively;
and if the maximum intersection ratio of the preceding detection frame exceeds the intersection ratio threshold value preset for the target obstacle, determining a subsequent detection frame with the maximum intersection ratio with the preceding detection frame as the target detection frame.
Optionally, the target detection frame determining module is further configured to, after being configured to determine, for each of the two frame point cloud images, whether a maximum intersection ratio of the preceding detection frames exceeds an intersection ratio threshold value preconfigured for the target obstacle:
If the maximum intersection ratio of the prior detection frame does not exceed the intersection ratio threshold value preconfigured for the target obstacle, judging whether the distance between the nearest center points of the prior detection frame exceeds the center point distance threshold value preconfigured for the target obstacle, wherein the distance between the nearest center points of the prior detection frame is the minimum value in the distances between the center points of the prior detection frame and the center points of each subsequent detection frame respectively;
and if the nearest center point distance of the prior detection frame does not exceed the preset center point distance threshold value for the target obstacle, determining a subsequent detection frame with the center point distance between the center point and the center point of the prior detection frame meeting the nearest center point distance as the target detection frame.
Optionally, the target detection frame determining module is further configured to, after determining whether the closest center distance of the previous detection frame exceeds the center distance threshold preconfigured for the target obstacle if the maximum intersection ratio of the previous detection frame does not exceed the intersection ratio threshold preconfigured for the target obstacle:
if the nearest center point distance of the preceding detection frame exceeds a preset center point distance threshold value for the target obstacle, judging whether a subsequent detection frame with the maximum intersection ratio with the preceding detection frame and a subsequent detection frame with a center point and a center point of the preceding detection frame meet the nearest center point distance or not are the same detection frame;
And if the subsequent detection frame with the maximum intersection ratio with the previous detection frame and the subsequent detection frame with the distance between the center point and the center point of the previous detection frame meeting the distance of the nearest center point are the same detection frame, determining the subsequent detection frame with the maximum intersection ratio with the previous detection frame as the target detection frame.
Optionally, the target detection frame determining module is further configured to, after determining whether the subsequent detection frame having the maximum intersection ratio with the preceding detection frame and the subsequent detection frame having the center point and the center point of the preceding detection frame satisfy the closest center point distance, if the closest center point distance of the preceding detection frame exceeds a center point distance threshold configured in advance for the target obstacle is the same detection frame:
and if the subsequent detection frame with the maximum intersection ratio with the prior detection frame and the subsequent detection frame with the distance between the center point and the center point of the prior detection frame meeting the distance between the nearest center points are not the same detection frame, determining the subsequent detection frame with the distance between the center point and the center point of the prior detection frame not exceeding a preset deviation value as the target detection frame.
Optionally, the motion trajectory determining module is configured to, when determining the motion trajectory of the target obstacle according to the target detection frame in the next frame of point cloud images in every two consecutive frames of point cloud images, specifically:
and performing curve fitting on the central point of the target detection frame in the next frame of point cloud image in every two continuous frames of point cloud images to obtain the motion trail of the target obstacle.
In a third aspect, an embodiment of the present application provides a computer apparatus, including: a processor, a memory and a bus, the memory storing machine readable instructions executable by the processor, the processor and the memory communicating over the bus when the computer device is running, the machine readable instructions when executed by the processor performing the steps of the obstacle movement trajectory determination method of any of the alternative embodiments of the first aspect described above.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium, where a computer program is stored, where the computer program is executed by a processor to perform the steps of the method for determining a movement track of an obstacle according to any one of the optional embodiments of the first aspect.
The technical scheme provided by the application comprises the following beneficial effects:
acquiring at least one continuous frame of point cloud images, wherein each frame of point cloud image comprises at least one obstacle; for each frame of the point cloud image, respectively carrying out target detection on at least one obstacle contained in the frame of the point cloud image to obtain at least one detection frame, wherein each detection frame contains one obstacle; by the above steps, a detection frame including each obstacle included in the point cloud image can be obtained.
For each two continuous frames of point cloud images, calculating the intersection ratio of each preceding detection frame and each subsequent detection frame in the two frames of point cloud images, wherein the preceding detection frame is the detection frame in the previous frame of point cloud image in the two frames of point cloud images, and the subsequent detection frame is the detection frame in the subsequent frame of point cloud image in the two frames of point cloud images; for each prior detection frame in the two-frame point cloud image, determining a target detection frame in a subsequent frame point cloud image in the two-frame point cloud image according to the intersection ratio of the prior detection frame and each subsequent detection frame, wherein the target detection frame is a detection frame containing a target obstacle, and the target obstacle is an obstacle contained in the prior detection frame; through the steps, the target detection frame containing the target obstacle in each frame of point cloud image can be determined according to the cross-over ratio between the detection frames contained in every two frames of continuous point cloud images.
Determining the motion trail of the target obstacle according to a target detection frame in a next frame of point cloud image in every two continuous frames of point cloud images; through the steps, the motion trail of the target obstacle can be determined according to the target detection frame containing the target obstacle in each frame of point cloud image.
With this method, target detection is performed on the obstacles contained in the point cloud images to obtain multiple detection frames; the intersection ratio is then calculated between the detection frames of every two consecutive frames; the target detection frame containing the target obstacle in each frame is determined according to the calculated intersection ratios; and the movement track of the target obstacle is determined from those target detection frames. The accuracy of the resulting obstacle movement track is thereby improved.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method for determining a movement track of an obstacle according to an embodiment of the present invention;
fig. 2 is a schematic diagram showing a detection frame in a point cloud image according to a first embodiment of the present invention;
fig. 3 is a schematic diagram showing a detection frame in a point cloud image according to a first embodiment of the present invention;
FIG. 4 is a schematic view showing a motion profile of an obstacle according to a first embodiment of the present invention;
fig. 5 is a schematic structural diagram of an obstacle movement track determining device according to a second embodiment of the present invention;
fig. 6 shows a schematic structural diagram of a computer device according to a third embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
Example 1
To facilitate understanding of the present application, the first embodiment is described in detail below in conjunction with the flowchart of the obstacle movement track determining method shown in fig. 1.
Referring to fig. 1, fig. 1 shows a flowchart of a method for determining a movement track of an obstacle according to a first embodiment of the present application, where the method includes steps S101 to S105:
s101: and acquiring at least one continuous frame of point cloud images, wherein each frame of point cloud image comprises at least one obstacle.
Specifically, at least one continuous frame of point cloud images is acquired by sensors such as cameras and lidar, and each frame of point cloud image contains at least one obstacle, for example objects in a wharf scene such as stackers, cranes, and spreaders.
S102: and respectively carrying out target detection on at least one obstacle contained in the point cloud image of each frame to obtain at least one detection frame, wherein each detection frame contains one obstacle.
Specifically, target detection is performed on the obstacles contained in each frame of point cloud image to obtain multiple detection frames, each containing one obstacle. The task of target detection is to find all targets (objects) of interest in the image and determine their category and position. Target detection can be implemented with deep learning methods or traditional algorithms: deep-learning-based methods include Faster R-CNN (a target detection model), SSD (Single Shot MultiBox Detector, an algorithm that directly predicts target classes for multiple targets), and the like; traditional methods include Haar feature algorithms, HOG (Histogram of Oriented Gradients) feature algorithms, and the like.
For example, referring to fig. 2, fig. 2 shows a schematic diagram of detection frames in point cloud images according to the first embodiment of the present invention. The at least one frame of point cloud images includes a point cloud image A and a point cloud image B. The point cloud image A contains an obstacle A1, an obstacle A2, and an obstacle A3; performing target detection on them yields a detection frame A11 containing obstacle A1, a detection frame A22 containing obstacle A2, and a detection frame A33 containing obstacle A3. The point cloud image B contains an obstacle B1, an obstacle B2, and an obstacle B3; performing target detection on them yields a detection frame B11 containing obstacle B1, a detection frame B22 containing obstacle B2, and a detection frame B33 containing obstacle B3.
S103: for each two continuous frames of point cloud images, calculating the intersection ratio of each preceding detection frame and each subsequent detection frame in the two frames of point cloud images, wherein the preceding detection frame is the detection frame in the previous frame of point cloud image in the two frames of point cloud images, and the subsequent detection frame is the detection frame in the subsequent frame of point cloud image in the two frames of point cloud images.
Specifically, each group of two consecutive frames of point cloud images comprises a previous frame and a subsequent frame. For each preceding detection frame in the previous frame, the intersection ratio between that preceding detection frame and each subsequent detection frame in the subsequent frame is calculated. The intersection ratio (intersection over union, IoU) is the ratio of the intersection to the union of two bounding boxes.
For example, referring to fig. 3, fig. 3 shows a schematic diagram of detection frames in point cloud images according to the first embodiment of the present invention. The point cloud image A and the point cloud image B are two consecutive frames, and the acquisition time of image A precedes that of image B, so image A is the previous frame and image B is the subsequent frame. The intersection ratio is then calculated between each of the detection frames A11, A22, and A33 in image A and each of the detection frames B11, B22, and B33 in image B, yielding nine values: A11B11, A11B22, A11B33, A22B11, A22B22, A22B33, A33B11, A33B22, and A33B33.
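To make the calculation concrete, a minimal sketch of the intersection ratio (IoU) for axis-aligned 2D boxes; the (x1, y1, x2, y2) box format and the coordinate values are assumptions, and the patent's 3D detection frames would use the corresponding 3D or bird's-eye-view computation:

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

# Nine pairwise values for the A11..A33 / B11..B33 example above
prev_boxes = {"A11": (0, 0, 2, 2), "A22": (3, 0, 5, 2), "A33": (6, 0, 8, 2)}
next_boxes = {"B11": (0.5, 0, 2.5, 2), "B22": (3.5, 0, 5.5, 2), "B33": (6.5, 0, 8.5, 2)}
ious = {(i, j): iou(a, b) for i, a in prev_boxes.items() for j, b in next_boxes.items()}
```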
S104: and determining a target detection frame in a point cloud image of a next frame in the two frames of point cloud images according to the intersection ratio of the previous detection frame and each next detection frame for each previous detection frame in the two frames of point cloud images, wherein the target detection frame is a detection frame containing a target obstacle, and the target obstacle is an obstacle contained in the previous detection frame.
Specifically, each detection frame in the first frame of the at least one continuous frame of point cloud images contains one obstacle. Because an obstacle may move, its position changes in the point cloud images after the first frame; and since each frame contains multiple detection frames, it cannot be directly determined which detection frame in a later image contains the moving obstacle.
However, an intersection ratio exists between each preceding detection frame in the previous frame and each subsequent detection frame in the subsequent frame, and the greater the intersection ratio, the higher the degree of overlap between the two detection frames. The target detection frame can therefore be determined from these intersection ratios.
S105: and determining the motion trail of the target obstacle according to the target detection frame in the next frame of point cloud image in every two continuous frames of point cloud images.
Specifically, the detection frame in the first frame of point cloud image gives the initial position of the obstacle. The previous frame of the first group of two consecutive frames is the first frame of the sequence, the subsequent frame of the first group is the previous frame of the second group, and so on. Because the target detection frame is the detection frame containing the target obstacle, the motion track of the target obstacle can be determined from the target detection frame in the subsequent frame of every two consecutive frames.
In a possible embodiment, for each frame of the point cloud image, performing object detection on at least one obstacle included in the frame of the point cloud image to obtain at least one detection frame, where the detection frame includes:
and for each frame of the point cloud image, inputting the frame of the point cloud image into a point cloud target detection model, and respectively carrying out target detection on at least one obstacle contained in the frame of the point cloud image to obtain at least one detection frame.
Specifically, the point cloud target detection model includes, but is not limited to, the PointPillars model (a point cloud 3D detection model).
For each frame of the point cloud image, inputting the frame of the point cloud image into a point cloud target detection model, and respectively carrying out target detection on at least one obstacle contained in the frame of the point cloud image to obtain at least one detection frame, wherein the method comprises the following steps of:
for each frame of the point cloud image, determining first coordinate information of each obstacle in the lidar sensor coordinate system according to the point cloud data of at least one obstacle contained in that frame, wherein the point cloud image is acquired by a lidar sensor mounted on an acquisition vehicle, and the lidar sensor coordinate system is the sensor's own coordinate system;
for each obstacle in the frame, converting the first coordinate information in the lidar sensor coordinate system into second coordinate information in the body coordinate system of the acquisition vehicle according to the intrinsic and extrinsic parameters of the lidar sensor;
inputting the second coordinate information into the trained point cloud target detection model to obtain at least one detection frame. In addition, when the point cloud target detection model is a PointPillars model, the center point position, size, and heading angle of each detection frame can be obtained.
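As a sketch of the coordinate conversion step above, assuming the lidar extrinsics are available as a 4x4 homogeneous transform (the mounting offset below is a placeholder, not a value from this application):

```python
import numpy as np

# Placeholder extrinsics: transform from the lidar sensor frame to the vehicle body frame
T_body_from_lidar = np.eye(4)
T_body_from_lidar[:3, 3] = [1.2, 0.0, 1.8]  # assumed mounting offset in metres

def lidar_to_body(points_lidar: np.ndarray) -> np.ndarray:
    """Convert Nx3 first coordinates (lidar frame) to second coordinates (body frame)."""
    homog = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    return (T_body_from_lidar @ homog.T).T[:, :3]

body_points = lidar_to_body(np.array([[10.0, -2.0, 0.5]]))
```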
In a possible implementation manner, the determining, for each previous detection frame in the two-frame point cloud image, the target detection frame in the next-frame point cloud image in the two-frame point cloud image according to the intersection ratio of the previous detection frame and each next-frame detection frame includes:
judging whether the maximum intersection ratio of the prior detection frames exceeds a preset intersection ratio threshold value for the target obstacle or not for each prior detection frame in the two-frame point cloud image, wherein the maximum intersection ratio of the prior detection frames is the maximum value in the intersection ratio between the prior detection frames and each subsequent detection frame respectively;
and if the maximum intersection ratio of the preceding detection frame exceeds the intersection ratio threshold value preset for the target obstacle, determining a subsequent detection frame with the maximum intersection ratio with the preceding detection frame as the target detection frame.
Specifically, for example, the two frames of point cloud images include preceding detection frames A11 and A22 and subsequent detection frames B11 and B22. Following the intersection-ratio calculation step above, the intersection ratios A11B11, A11B22, A22B11, and A22B22 are obtained. Assume the intersection ratio A11B11 is 0.1, A11B22 is 0.2, A22B11 is 0.3, A22B22 is 0.4, and the intersection ratio threshold is 0.2. The maximum intersection ratio of the preceding detection frame A11 is then 0.2, which does not exceed the threshold, so the target detection frame for A11 in the subsequent frame cannot be determined at this step. The maximum intersection ratio of the preceding detection frame A22 is 0.4, which exceeds the threshold, so the subsequent detection frame B22 with the maximum intersection ratio is determined as the target detection frame in the subsequent frame.
In one possible embodiment, after determining, for each prior detection frame in the two-frame point cloud image, whether the maximum intersection ratio of the prior detection frame exceeds an intersection ratio threshold pre-configured for the target obstacle, the method further comprises:
if the maximum intersection ratio of the prior detection frame does not exceed the intersection ratio threshold value preconfigured for the target obstacle, judging whether the distance between the nearest center points of the prior detection frame exceeds the center point distance threshold value preconfigured for the target obstacle, wherein the distance between the nearest center points of the prior detection frame is the minimum value in the distances between the center points of the prior detection frame and the center points of each subsequent detection frame respectively;
and if the nearest center point distance of the prior detection frame does not exceed the preset center point distance threshold value for the target obstacle, determining a subsequent detection frame with the center point distance between the center point and the center point of the prior detection frame meeting the nearest center point distance as the target detection frame.
Specifically, for example, if the maximum intersection ratio of the preceding detection frame A11 does not exceed the intersection ratio threshold, it is judged whether the nearest center point distance of the preceding detection frame exceeds the center point distance threshold preconfigured for the target obstacle. Assume the center point distance between A11 and B11 is 1 meter, the center point distance between A11 and B22 is 2 meters, and the center point distance threshold is 1.5 meters. The nearest center point distance of A11 is then 1 meter, which does not exceed the threshold, so the subsequent detection frame B11 is determined as the target detection frame in the subsequent frame.
In one possible embodiment, after determining whether the closest center distance of the prior detection frame exceeds the center distance threshold preconfigured for the target obstacle if the maximum intersection ratio of the prior detection frame does not exceed the intersection ratio threshold preconfigured for the target obstacle, the method further comprises:
if the nearest center point distance of the preceding detection frame exceeds a preset center point distance threshold value for the target obstacle, judging whether a subsequent detection frame with the maximum intersection ratio with the preceding detection frame and a subsequent detection frame with a center point and a center point of the preceding detection frame meet the nearest center point distance or not are the same detection frame;
and if the subsequent detection frame with the maximum intersection ratio with the previous detection frame and the subsequent detection frame with the distance between the center point and the center point of the previous detection frame meeting the distance of the nearest center point are the same detection frame, determining the subsequent detection frame with the maximum intersection ratio with the previous detection frame as the target detection frame.
Specifically, for example, if the nearest center point distance of the preceding detection frame A11 exceeds the center point distance threshold preconfigured for the target obstacle, it is judged whether the subsequent detection frame with the maximum intersection ratio with A11 and the subsequent detection frame whose center point satisfies the nearest center point distance are the same detection frame. If the subsequent detection frame with the maximum intersection ratio with A11 is B11, and the subsequent detection frame satisfying the nearest center point distance with A11 is also B11, then B11 is determined as the target detection frame in the subsequent frame.
In one possible embodiment, after determining whether a subsequent detection frame having the maximum intersection ratio with the preceding detection frame and a subsequent detection frame having a center point that satisfies the closest center point distance with the center point of the preceding detection frame are the same detection frame if the closest center point distance of the preceding detection frame exceeds a center point distance threshold that is preconfigured for the target obstacle, the method further includes:
and if the subsequent detection frame with the maximum intersection ratio with the prior detection frame and the subsequent detection frame with the distance between the center point and the center point of the prior detection frame meeting the distance between the nearest center points are not the same detection frame, determining the subsequent detection frame with the distance between the center point and the center point of the prior detection frame not exceeding a preset deviation value as the target detection frame.
Specifically, for example, if the subsequent detection frame with the maximum intersection ratio with the preceding detection frame A11 is B11, while the subsequent detection frame satisfying the nearest center point distance with A11 is B22, then the subsequent detection frame whose center point distance from the center point of A11 does not exceed the preset deviation value is determined as the target detection frame. Assume the distance between the center points of A11 and B11 is 1 meter, the distance between the center points of A11 and B22 is 2 meters, and the preset deviation value is 1.5 meters; the subsequent detection frame B11 is then determined as the target detection frame in the subsequent frame.
The lidar sensor that acquires the point cloud images is mounted on an acquisition vehicle, which is moving while the point cloud images are collected. The distance travelled within the acquisition interval of every two frames is the product of the vehicle's speed and the interval duration between the two frames, and this distance is used as the preset deviation value against which the center point distance is compared.
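Putting the four matching rules above together, a minimal sketch of the cascade; the helper `iou` from the earlier sketch, the helper `center_distance`, and all threshold values are illustrative assumptions:

```python
import math

def center_distance(a, b):
    """Euclidean distance between the center points of two (x1, y1, x2, y2) boxes."""
    ax, ay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    bx, by = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    return math.hypot(ax - bx, ay - by)

def match_target_frame(prev_box, next_boxes, iou_thresh=0.2,
                       dist_thresh=1.5, max_offset=1.5):
    """Return the subsequent detection frame matched to prev_box, or None."""
    if not next_boxes:
        return None
    ious = [iou(prev_box, b) for b in next_boxes]
    dists = [center_distance(prev_box, b) for b in next_boxes]
    i_iou = max(range(len(next_boxes)), key=ious.__getitem__)
    i_dist = min(range(len(next_boxes)), key=dists.__getitem__)
    if ious[i_iou] > iou_thresh:            # rule 1: max IoU exceeds threshold
        return next_boxes[i_iou]
    if dists[i_dist] <= dist_thresh:        # rule 2: nearest center within threshold
        return next_boxes[i_dist]
    if i_iou == i_dist:                     # rule 3: both criteria pick the same frame
        return next_boxes[i_iou]
    # rule 4: fall back to a frame whose center offset stays within the
    # preset deviation value (vehicle speed x frame interval)
    within = [b for k, b in enumerate(next_boxes) if dists[k] <= max_offset]
    return within[0] if within else None
```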
In a possible embodiment, the determining the motion trail of the target obstacle according to the target detection frame in the next frame of point cloud images in every two consecutive frames of point cloud images includes:
and performing curve fitting on the central point of the target detection frame in the next frame of point cloud image in every two continuous frames of point cloud images to obtain the motion trail of the target obstacle.
In particular, methods of curve fitting include, but are not limited to, least squares.
For example, referring to fig. 4, fig. 4 shows a schematic diagram of the movement track of an obstacle according to the first embodiment of the present invention. The figure includes several subsequent-frame point cloud images: a point cloud image L, a point cloud image M, and a point cloud image N. Curve fitting is performed on the center point O of the target detection frame LL in image L, the center point P of the target detection frame MM in image M, and the center point Q of the target detection frame NN in image N; the resulting curve is the movement track of the target obstacle.
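A minimal sketch of fitting the track by least squares through center points such as O, P, and Q; the quadratic degree and the coordinate values are illustrative assumptions:

```python
import numpy as np

# Illustrative center points of target detection frames in consecutive subsequent frames
centers = np.array([[0.0, 0.0],    # O
                    [1.0, 0.6],    # P
                    [2.0, 2.1]])   # Q

coeffs = np.polyfit(centers[:, 0], centers[:, 1], deg=2)  # least-squares quadratic fit
track = np.poly1d(coeffs)
print(track(1.5))  # position along the fitted motion track at x = 1.5
```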
After determining, for each preceding detection frame in the two-frame point cloud image, a target detection frame in a subsequent frame of the two-frame point cloud image according to an intersection ratio of the preceding detection frame and each subsequent detection frame, the method further includes:
for the target detection frame in each frame of point cloud image, the detection frame information of previous historical frames is retrieved (the detection frequency is 10 Hz, and 5 frames are usually searched), and the center point position and heading angle of the current detection frame are smoothed according to the following smoothing function using the timestamps, center points, and heading angle information of the historical frames:
[smoothing function formula]
where $\hat{B}_i^g$ denotes the detection frame corrected under the world coordinate system $g$ after smoothing at time $i$; $\alpha$ is the smoothing coefficient, with a value between 0 and 1; $\Delta t$ denotes the unit time, typically 100 ms (i.e., 10 Hz); $t_i$ denotes the timestamp of time $i$; $t_{i-2}$ denotes the timestamp of time $i-2$; $v_i$, $v_{i-1}$, and $v_{i-2}$ denote the velocities at times $i$, $i-1$, and $i-2$; $a_i$, $a_{i-1}$, and $a_{i-2}$ denote the accelerations at times $i$, $i-1$, and $i-2$; $B_i^g$ denotes the detection frame at time $i$ under the world coordinate system $g$; and $B_{i-1}^g$ denotes the detection frame at time $i-1$ under the world coordinate system $g$.
The corrected detection frame is appended to the instance track, and the velocity and acceleration of the detection frame are calculated according to the following formulas, combining the timestamps and pose messages of the previous and current frames:
$$v_i = \frac{\hat{B}_i^g - \hat{B}_{i-1}^g}{\Delta t}, \qquad a_i = \frac{v_i - v_{i-1}}{\Delta t}$$
where $\hat{B}_i^g$ denotes the detection frame corrected under the world coordinate system $g$ after smoothing at time $i$, and $\hat{B}_{i-1}^g$ denotes the corrected detection frame at time $i-1$; $v_i$ denotes the velocity at time $i$; $v_{i-1}$ denotes the velocity at time $i-1$; $\Delta t$ denotes the unit time; and $a_i$ denotes the acceleration at time $i$.
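A minimal sketch consistent with the finite-difference formulas above; the exponential-smoothing update with a constant-velocity prediction is an assumed form, since the exact smoothing function is not reproduced in this text:

```python
import numpy as np

def smooth_and_update(track, box_i, alpha=0.5, dt=0.1):
    """Append a smoothed detection frame to a track and derive velocity/acceleration.

    track: list of (box, velocity, acceleration) tuples, each an np.ndarray of
    the frame's center in world coordinates g.
    """
    box_prev, v_prev, _ = track[-1]
    predicted = box_prev + v_prev * dt                     # assumed motion prediction
    box_smooth = alpha * box_i + (1 - alpha) * predicted   # assumed smoothing update
    v_i = (box_smooth - box_prev) / dt                     # v_i = (B_i - B_{i-1}) / dt
    a_i = (v_i - v_prev) / dt                              # a_i = (v_i - v_{i-1}) / dt
    track.append((box_smooth, v_i, a_i))
    return track

track = [(np.array([0.0, 0.0]), np.zeros(2), np.zeros(2))]
smooth_and_update(track, np.array([0.5, 0.1]))
```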
Example two
Referring to fig. 5, fig. 5 shows a schematic structural diagram of an obstacle movement track determining device according to a second embodiment of the present invention, where the device includes:
a point cloud image obtaining module 501, configured to obtain at least one continuous frame of point cloud image, where each frame of point cloud image includes at least one obstacle;
the detection frame determining module 502 is configured to, for each frame of the point cloud image, perform object detection on at least one obstacle included in the frame of the point cloud image to obtain at least one detection frame, where each detection frame includes one obstacle;
The intersection ratio determining module 503 is configured to calculate, for each two consecutive frames of point cloud images, an intersection ratio of each preceding detection frame in the two frames of point cloud images to each subsequent detection frame, where the preceding detection frame is a detection frame in a preceding frame of point cloud image in the two frames of point cloud images, and the subsequent detection frame is a detection frame in a subsequent frame of point cloud image in the two frames of point cloud images;
the target detection frame determining module 504 is configured to determine, for each previous detection frame in the two-frame point cloud image, a target detection frame in a subsequent frame point cloud image in the two-frame point cloud image according to an intersection ratio of the previous detection frame and each subsequent detection frame, where the target detection frame is a detection frame including a target obstacle, and the target obstacle is an obstacle included in the previous detection frame;
the motion track determining module 505 is configured to determine a motion track of the target obstacle according to a target detection frame in a next frame of point cloud images in every two consecutive frames of point cloud images.
In a possible implementation manner, the detection frame determining module is specifically configured to, when configured to, for each frame of the point cloud image, perform object detection on at least one obstacle included in the frame of the point cloud image to obtain at least one detection frame, respectively:
And for each frame of the point cloud image, inputting the frame of the point cloud image into a point cloud target detection model, and respectively carrying out target detection on at least one obstacle contained in the frame of the point cloud image to obtain at least one detection frame.
In a possible implementation manner, the target detection frame determining module is specifically configured to, when determining, for each previous detection frame in the two-frame point cloud image, a target detection frame in a subsequent-frame point cloud image in the two-frame point cloud image according to an intersection ratio of the previous detection frame and each subsequent detection frame:
judging whether the maximum intersection ratio of the prior detection frames exceeds a preset intersection ratio threshold value for the target obstacle or not for each prior detection frame in the two-frame point cloud image, wherein the maximum intersection ratio of the prior detection frames is the maximum value in the intersection ratio between the prior detection frames and each subsequent detection frame respectively;
and if the maximum intersection ratio of the preceding detection frame exceeds the intersection ratio threshold value preset for the target obstacle, determining a subsequent detection frame with the maximum intersection ratio with the preceding detection frame as the target detection frame.
In a possible embodiment, the target detection frame determining module is further configured, after being configured to determine, for each previous detection frame in the two-frame point cloud image, whether a maximum intersection ratio of the previous detection frame exceeds a preset intersection ratio threshold for the target obstacle, to:
If the maximum intersection ratio of the prior detection frame does not exceed the intersection ratio threshold value preconfigured for the target obstacle, judging whether the distance between the nearest center points of the prior detection frame exceeds the center point distance threshold value preconfigured for the target obstacle, wherein the distance between the nearest center points of the prior detection frame is the minimum value in the distances between the center points of the prior detection frame and the center points of each subsequent detection frame respectively;
and if the nearest center point distance of the prior detection frame does not exceed the preset center point distance threshold value for the target obstacle, determining a subsequent detection frame with the center point distance between the center point and the center point of the prior detection frame meeting the nearest center point distance as the target detection frame.
In a possible embodiment, the target detection frame determining module is further configured to, after determining whether the closest center distance of the preceding detection frame exceeds the center distance threshold preconfigured for the target obstacle if the maximum intersection ratio of the preceding detection frame does not exceed the intersection ratio threshold preconfigured for the target obstacle:
if the nearest center point distance of the preceding detection frame exceeds a preset center point distance threshold value for the target obstacle, judging whether a subsequent detection frame with the maximum intersection ratio with the preceding detection frame and a subsequent detection frame with a center point and a center point of the preceding detection frame meet the nearest center point distance or not are the same detection frame;
And if the subsequent detection frame with the maximum intersection ratio with the previous detection frame and the subsequent detection frame with the distance between the center point and the center point of the previous detection frame meeting the distance of the nearest center point are the same detection frame, determining the subsequent detection frame with the maximum intersection ratio with the previous detection frame as the target detection frame.
In a possible embodiment, after judging whether the two candidate subsequent detection frames are the same detection frame, the target detection frame determining module is further configured to:
if the two are not the same detection frame, determine the subsequent detection frame whose center-point distance from the preceding detection frame does not exceed a preconfigured deviation value as the target detection frame.
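Taken together, the four branches above form a single matching cascade per preceding detection frame. Below is a minimal sketch reusing the `iou` helper from the earlier block; the function names, the per-obstacle threshold arguments, the choice of the nearest candidate in the last branch, and the `None` result when every branch fails are all illustrative assumptions, not part of the disclosure:

```python
import math

def center(box):
    """Center point of an axis-aligned (x_min, y_min, x_max, y_max) box."""
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def match_target_box(prev_box, next_boxes, iou_thresh, dist_thresh, max_dev):
    """Pick the target detection frame among next_boxes for one preceding box.

    Returns the matched box, or None when no branch accepts a candidate
    (the patent does not say what happens then; None is an assumption).
    """
    if not next_boxes:
        return None
    ious = [iou(prev_box, b) for b in next_boxes]
    px, py = center(prev_box)
    dists = [math.hypot(center(b)[0] - px, center(b)[1] - py)
             for b in next_boxes]
    best_iou = max(range(len(next_boxes)), key=ious.__getitem__)
    nearest = min(range(len(next_boxes)), key=dists.__getitem__)
    # Branch 1: the maximum intersection ratio exceeds its threshold.
    if ious[best_iou] > iou_thresh:
        return next_boxes[best_iou]
    # Branch 2: the nearest center-point distance is within its threshold.
    if dists[nearest] <= dist_thresh:
        return next_boxes[nearest]
    # Branch 3: both criteria point at the same subsequent box.
    if best_iou == nearest:
        return next_boxes[best_iou]
    # Branch 4: accept the nearest candidate only if it stays within the
    # preconfigured deviation value (which candidate is meant here is
    # ambiguous in the text; choosing the nearest one is an assumption).
    if dists[nearest] <= max_dev:
        return next_boxes[nearest]
    return None
```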
In a possible implementation manner, when determining the motion trajectory of the target obstacle according to the target detection frame in the later frame of every two consecutive frames of point cloud images, the motion trajectory determining module is specifically configured to:
perform curve fitting on the center points of the target detection frames in the later frame of every two consecutive frames of point cloud images to obtain the motion trajectory of the target obstacle.
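The disclosure does not fix the curve-fitting method. A minimal sketch, assuming an ordinary least-squares polynomial fit of the matched center points over the frame index; the use of NumPy's `polyfit` and the default quadratic degree are illustrative assumptions:

```python
import numpy as np

def fit_trajectory(centers, degree=2):
    """Fit x(t) and y(t) polynomials to matched box centers, one per frame.

    centers: list of (x, y) center points in temporal order.
    Returns two np.poly1d callables mapping frame index -> coordinate.
    """
    t = np.arange(len(centers), dtype=float)
    xs = np.array([c[0] for c in centers])
    ys = np.array([c[1] for c in centers])
    fx = np.poly1d(np.polyfit(t, xs, degree))
    fy = np.poly1d(np.polyfit(t, ys, degree))
    return fx, fy
```

Evaluating `fx(t), fy(t)` at fractional frame indices then yields a smooth motion trajectory between the discrete observations.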
Example III
Based on the same application concept, referring to fig. 6, fig. 6 shows a schematic structural diagram of a computer device provided in a third embodiment of the present application, where, as shown in fig. 6, a computer device 600 provided in the third embodiment of the present application includes:
a processor 601, a memory 602 and a bus 603, wherein the memory 602 stores machine-readable instructions executable by the processor 601; when the computer device 600 runs, the processor 601 and the memory 602 communicate through the bus 603, and the machine-readable instructions, when executed by the processor 601, perform the steps of the obstacle movement track determining method of the second embodiment.
Example IV
Based on the same application concept, the embodiment of the present application further provides a computer readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the steps of the obstacle movement track determining method in any one of the above embodiments are performed.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
The computer program product for determining the movement track of an obstacle provided by the embodiment of the present invention includes a computer-readable storage medium storing program code; the instructions included in the program code can be used to execute the method described in the foregoing method embodiment, and for the specific implementation reference may be made to that embodiment, which is not repeated here.
The obstacle movement track determining device provided by the embodiment of the present invention may be specific hardware on a device, or software or firmware installed on a device. The device provided by the embodiment of the present invention has the same implementation principle and technical effects as the foregoing method embodiment; for brevity, where the device embodiment is silent, reference may be made to the corresponding content in the foregoing method embodiment.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, devices or units, and may be in electrical, mechanical or other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments provided in the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that like reference numerals and letters denote like items in the figures; once an item is defined in one figure, it need not be further defined or explained in subsequent figures. Furthermore, the terms "first", "second", "third", etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above examples are merely specific embodiments of the present invention, intended to illustrate rather than limit its technical solutions, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, any person skilled in the art may still modify the technical solutions described in the foregoing embodiments, or readily conceive of changes to them, or make equivalent substitutions of some of their technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not cause the essence of the corresponding technical solutions to depart from their spirit and scope, and are all intended to be encompassed within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method for determining a movement trajectory of an obstacle, the method comprising:
acquiring at least one continuous frame of point cloud images, wherein each frame of point cloud image comprises at least one obstacle;
for each frame of the point cloud image, respectively carrying out target detection on at least one obstacle contained in the frame of the point cloud image to obtain at least one detection frame, wherein each detection frame contains one obstacle;
for every two consecutive frames of point cloud images, respectively calculating the intersection ratio between each preceding detection frame and each subsequent detection frame in the two frames of point cloud images, wherein a preceding detection frame is a detection frame in the earlier frame of the two frames of point cloud images, and a subsequent detection frame is a detection frame in the later frame of the two frames of point cloud images;
for each preceding detection frame in the two frames of point cloud images, determining a target detection frame in the later frame of the two frames of point cloud images according to the intersection ratio between the preceding detection frame and each subsequent detection frame, wherein the target detection frame is a detection frame containing a target obstacle, and the target obstacle is the obstacle contained in the preceding detection frame;
and determining the motion trajectory of the target obstacle according to the target detection frame in the later frame of every two consecutive frames of point cloud images (an end-to-end sketch of this claimed pipeline is given after claim 10 below).
2. The method according to claim 1, wherein for each frame of the point cloud image, performing object detection on at least one obstacle included in the frame of the point cloud image to obtain at least one detection frame, respectively, includes:
and for each frame of the point cloud image, inputting the frame of the point cloud image into a point cloud target detection model, and respectively carrying out target detection on at least one obstacle contained in the frame of the point cloud image to obtain at least one detection frame.
3. The method according to claim 1, wherein determining, for each preceding detection frame in the two frames of point cloud images, the target detection frame in the later frame of the two frames of point cloud images according to the intersection ratio between the preceding detection frame and each subsequent detection frame comprises:
judging, for each preceding detection frame in the two frames of point cloud images, whether the maximum intersection ratio of the preceding detection frame exceeds an intersection ratio threshold preconfigured for the target obstacle, wherein the maximum intersection ratio of the preceding detection frame is the maximum of the intersection ratios between the preceding detection frame and each subsequent detection frame;
and if the maximum intersection ratio of the preceding detection frame exceeds the intersection ratio threshold preconfigured for the target obstacle, determining the subsequent detection frame having the maximum intersection ratio with the preceding detection frame as the target detection frame.
4. The method according to claim 3, wherein after judging, for each preceding detection frame in the two frames of point cloud images, whether the maximum intersection ratio of the preceding detection frame exceeds the intersection ratio threshold preconfigured for the target obstacle, the method further comprises:
if the maximum intersection ratio of the preceding detection frame does not exceed the intersection ratio threshold preconfigured for the target obstacle, judging whether the nearest center-point distance of the preceding detection frame exceeds a center-point distance threshold preconfigured for the target obstacle, wherein the nearest center-point distance of the preceding detection frame is the minimum of the distances between the center point of the preceding detection frame and the center point of each subsequent detection frame;
and if the nearest center-point distance of the preceding detection frame does not exceed the center-point distance threshold preconfigured for the target obstacle, determining the subsequent detection frame whose center point attains that nearest center-point distance as the target detection frame.
5. The method of claim 4, wherein after judging whether the nearest center-point distance of the preceding detection frame exceeds the center-point distance threshold preconfigured for the target obstacle, the method further comprises:
if the nearest center-point distance of the preceding detection frame exceeds the center-point distance threshold preconfigured for the target obstacle, judging whether the subsequent detection frame having the maximum intersection ratio with the preceding detection frame and the subsequent detection frame whose center point attains the nearest center-point distance are the same detection frame;
and if the two are the same detection frame, determining the subsequent detection frame having the maximum intersection ratio with the preceding detection frame as the target detection frame.
6. The method of claim 5, wherein after judging whether the subsequent detection frame having the maximum intersection ratio with the preceding detection frame and the subsequent detection frame whose center point attains the nearest center-point distance are the same detection frame, the method further comprises:
and if the two are not the same detection frame, determining the subsequent detection frame whose center-point distance from the preceding detection frame does not exceed a preconfigured deviation value as the target detection frame.
7. The method according to claim 1, wherein determining the motion trajectory of the target obstacle according to the target detection frame in the later frame of every two consecutive frames of point cloud images comprises:
performing curve fitting on the center points of the target detection frames in the later frame of every two consecutive frames of point cloud images to obtain the motion trajectory of the target obstacle.
8. An obstacle movement trajectory determining device, the device comprising:
a point cloud image acquisition module, configured to acquire at least one continuous frame of point cloud images, wherein each frame of point cloud image contains at least one obstacle;
a detection frame determining module, configured to, for each frame of point cloud image, respectively perform target detection on the at least one obstacle contained in that frame to obtain at least one detection frame, wherein each detection frame contains one obstacle;
an intersection ratio determining module, configured to, for every two consecutive frames of point cloud images, respectively calculate the intersection ratio between each preceding detection frame and each subsequent detection frame in the two frames of point cloud images, wherein a preceding detection frame is a detection frame in the earlier frame of the two frames of point cloud images, and a subsequent detection frame is a detection frame in the later frame of the two frames of point cloud images;
a target detection frame determining module, configured to, for each preceding detection frame in the two frames of point cloud images, determine a target detection frame in the later frame of the two frames of point cloud images according to the intersection ratio between the preceding detection frame and each subsequent detection frame, wherein the target detection frame is a detection frame containing a target obstacle, and the target obstacle is the obstacle contained in the preceding detection frame;
and a motion trajectory determining module, configured to determine the motion trajectory of the target obstacle according to the target detection frame in the later frame of every two consecutive frames of point cloud images.
9. A computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the computer device is running, the machine-readable instructions when executed by the processor performing the steps of the obstacle movement trajectory determination method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of the obstacle movement trajectory determination method as claimed in any one of claims 1 to 7.
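Taken end to end, claim 1 amounts to the loop below — a minimal sketch reusing the `iou`, `center`, `match_target_box` and `fit_trajectory` helpers sketched in the embodiments above. The `detector` callable, the choice of the obstacle's initial box, the loss-of-track handling, and the minimum of three matched centers are illustrative assumptions, not part of the claims:

```python
def track_obstacle(frames, detector, iou_thresh, dist_thresh, max_dev):
    """Follow one obstacle through consecutive point cloud frames."""
    boxes_per_frame = [detector(f) for f in frames]    # per-frame detection
    if not boxes_per_frame or not boxes_per_frame[0]:
        return None
    current = boxes_per_frame[0][0]    # assumed initial box of the obstacle
    centers = []
    for next_boxes in boxes_per_frame[1:]:
        # associate the preceding box with a box in the later frame
        matched = match_target_box(current, next_boxes,
                                   iou_thresh, dist_thresh, max_dev)
        if matched is None:
            break                      # track lost; handling is unspecified
        centers.append(center(matched))
        current = matched
    # curve-fit the matched centers into the motion trajectory
    return fit_trajectory(centers) if len(centers) >= 3 else None
```

Because association only ever compares two consecutive frames, the loop carries a single `current` box forward, which is why the per-pair intersection-ratio computation of claim 1 suffices.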
CN202310552821.5A 2023-05-16 2023-05-16 Obstacle movement track determining method, device, equipment and storage medium Active CN116597417B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310552821.5A CN116597417B (en) 2023-05-16 2023-05-16 Obstacle movement track determining method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116597417A true CN116597417A (en) 2023-08-15
CN116597417B CN116597417B (en) 2024-08-13

Family

ID=87605762

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310552821.5A Active CN116597417B (en) 2023-05-16 2023-05-16 Obstacle movement track determining method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116597417B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108470332A (en) * 2018-01-24 2018-08-31 博云视觉(北京)科技有限公司 A kind of multi-object tracking method and device
CN109460702A (en) * 2018-09-14 2019-03-12 华南理工大学 Passenger's abnormal behaviour recognition methods based on human skeleton sequence
CN112184767A (en) * 2020-09-22 2021-01-05 深研人工智能技术(深圳)有限公司 Method, device, equipment and storage medium for tracking moving object track
WO2022127180A1 (en) * 2020-12-17 2022-06-23 深圳云天励飞技术股份有限公司 Target tracking method and apparatus, and electronic device and storage medium
WO2022135027A1 (en) * 2020-12-22 2022-06-30 深圳云天励飞技术股份有限公司 Multi-object tracking method and apparatus, computer device, and storage medium
CN112750146A (en) * 2020-12-31 2021-05-04 浙江大华技术股份有限公司 Target object tracking method and device, storage medium and electronic equipment
CN114120127A (en) * 2021-11-30 2022-03-01 济南博观智能科技有限公司 Target detection method, device and related equipment
CN115311646A (en) * 2022-09-14 2022-11-08 北京斯年智驾科技有限公司 Method and device for detecting obstacle

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Ruiqi Lu et al., "Occluded Pedestrian Detection with Visible IoU and Box Sign Predictor", 2019 IEEE International Conference on Image Processing (ICIP), 26 August 2019 (2019-08-26) *
Wang Yingxian et al., "Long-term target tracking combining confidence evaluation and re-detection", Computer Engineering and Design, vol. 43, no. 12, 31 December 2022 (2022-12-31) *

Also Published As

Publication number Publication date
CN116597417B (en) 2024-08-13

Similar Documents

Publication Publication Date Title
CN112292711B (en) Associating LIDAR data and image data
CN112734852B (en) Robot mapping method and device and computing equipment
US20190086923A1 (en) Method and apparatus for generating obstacle motion information for autonomous vehicle
CN110018489B (en) Target tracking method and device based on laser radar, controller and storage medium
CN111292352B (en) Multi-target tracking method, device, equipment and storage medium
CN111611853A (en) Sensing information fusion method and device and storage medium
WO2022001323A1 (en) Target vehicle control method and apparatus, electronic device and storage medium
JP7328281B2 (en) Method and system for predicting the trajectory of a target vehicle in a vehicle's environment
EP3667612B1 (en) Roadside object detection device, roadside object detection method, and roadside object detection system
JP5281867B2 (en) Vehicle traveling speed control device and method
CN110942474A (en) Robot target tracking method, device and storage medium
CN115576329B (en) Obstacle avoidance method of unmanned AGV based on computer vision
CN112037257A (en) Target tracking method, terminal and computer readable storage medium thereof
CN115861968A (en) Dynamic obstacle removing method based on real-time point cloud data
Sakic et al. Camera-LIDAR object detection and distance estimation with application in collision avoidance system
CN115077519A (en) Positioning and mapping method and device based on template matching and laser inertial navigation loose coupling
CN117331071A (en) Target detection method based on millimeter wave radar and vision multi-mode fusion
CN111507126B (en) Alarm method and device of driving assistance system and electronic equipment
KR20190081334A (en) Method for tracking moving trajectory based on complex positioning and apparatus thereof
CN111273701A (en) Visual control system and control method for holder
CN112711255B (en) Mobile robot obstacle avoidance method, control equipment and storage medium
CN112364751B (en) Obstacle state judgment method, device, equipment and storage medium
CN111832343A (en) Eye tracking method and device and storage medium
CN116597417B (en) Obstacle movement track determining method, device, equipment and storage medium
CN112729289B (en) Positioning method, device, equipment and storage medium applied to automatic guided vehicle

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant