CN111598920A - Track prediction method, device and storage medium

Track prediction method, device and storage medium

Info

Publication number: CN111598920A
Application number: CN201910127531.XA
Authority: CN (China)
Prior art keywords: detected, target, images, unmanned equipment, module
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 薛舟, 陈子冲
Current Assignee: Ninebot Beijing Technology Co Ltd
Original Assignee: Ninebot Beijing Technology Co Ltd
Application filed by Ninebot Beijing Technology Co Ltd
Priority to CN201910127531.XA
Publication of CN111598920A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30241 Trajectory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Electromagnetism (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides a track prediction method and a track prediction device, which are applied to unmanned equipment, wherein the unmanned equipment at least comprises an acquisition unit; the method comprises the following steps: acquiring at least two images, wherein each of the at least two images comprises at least one target to be detected; identifying a target to be detected in the at least two images, and acquiring two-dimensional position information of the target to be detected in the at least two images; calculating at least two distances between the target to be detected and the unmanned equipment in the at least two images based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit, and calculating at least two spatial positions of the target to be detected relative to the unmanned equipment based on the at least two distances between the target to be detected and the unmanned equipment; and predicting the motion trail of the target to be detected based on at least two spatial positions of each target to be detected.

Description

Track prediction method, device and storage medium
Technical Field
The invention relates to the field of machine vision, in particular to a track prediction method, a track prediction device and a storage medium.
Background
The detection and identification of the target are widely applied to the fields of visual navigation, robots, intelligent transportation and the like. Among them, detection, identification and trajectory prediction of dynamic targets (including but not limited to cars and pedestrians) around the robot/unmanned car are key links in robot/unmanned car navigation and path planning.
At present, most robot/unmanned vehicle navigation is based on multi-view camera systems, with three-dimensional detection and matching of vehicles carried out on a preset vehicle data set, so as to track and predict the movement trajectories of targets such as automobiles and pedestrians. However, the hardware cost and limited applicability of multi-view camera systems restrict their wider application.
Disclosure of Invention
In view of the above, the main objective of the present invention is to provide a trajectory prediction method, apparatus and storage medium, which can predict the target action trajectory around the robot/unmanned vehicle, and are simple and easy to implement.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
the embodiment of the invention provides a track prediction method, which is applied to unmanned equipment, wherein the unmanned equipment at least comprises an acquisition unit; the method comprises the following steps:
acquiring at least two images, wherein each of the at least two images comprises at least one target to be detected;
identifying a target to be detected in the at least two images, and acquiring two-dimensional position information of the target to be detected in the at least two images;
calculating at least two distances between the target to be detected and the unmanned equipment in the at least two images based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit, and calculating at least two spatial positions of the target to be detected relative to the unmanned equipment based on the at least two distances between the target to be detected and the unmanned equipment;
and predicting the motion trail of the target to be detected based on at least two spatial positions of each target to be detected.
In the above scheme, the acquiring two-dimensional position information of the target to be detected in the at least two images includes:
detecting the position of a target to be detected in the image based on a neural network algorithm;
and setting a 2-D frame of the target to be detected based on the position of the target to be detected in the image, wherein the central position of the 2-D frame is the central position of the target to be detected.
In the above scheme, the calculating at least two spatial positions of the object to be detected relative to the unmanned device further includes:
calculating at least two vertical distances between the target to be detected in the at least two images and the positive direction of the unmanned equipment based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit;
and calculating at least two vertical distances between the target to be detected in the at least two images and the horizontal plane where the unmanned equipment is located based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit.
In the above scheme, the calculating at least two distances between the target to be detected in the at least two images and the unmanned equipment based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit includes:
and calculating at least two distances between the target to be detected and the unmanned equipment in the at least two images based on the central position of the 2-D frame, the height of the 2-D frame, the focal length of the acquisition unit, the central position of the images and the height of the unmanned equipment of the target to be detected in the at least two images.
In the above scheme, the calculating, based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit, at least two vertical distances between the target to be detected in the at least two images and the positive direction of the unmanned aerial vehicle, and at least two vertical distances between the target to be detected and the horizontal plane where the unmanned aerial vehicle is located includes:
calculating at least two vertical distances between the target to be detected in the at least two images and the unmanned equipment in the positive direction based on the central position of the 2-D frame of the target to be detected in the at least two images, the central position of the images, the focal length of the acquisition unit and the distance between the target to be detected and the unmanned equipment;
and calculating at least two vertical distances between the target to be detected in the at least two images and the horizontal plane where the unmanned equipment is located based on the central position of the 2-D frame of the target to be detected in the at least two images, the central position of the images, the focal length of the acquisition unit and the distance between the target to be detected and the unmanned equipment.
In the above scheme, the predicting the motion trajectory of the target to be detected based on at least two spatial positions of each target to be detected includes:
fitting the motion trail of the target to be detected based on a polynomial parameter model, and predicting the motion trail of the target to be detected;
or predicting the motion trail of the target to be detected based on a machine learning method.
In the above scheme, after predicting the motion trajectory of the target to be detected based on at least two spatial positions of each target to be detected, the method further includes:
and adjusting the path planning of the unmanned equipment according to the predicted motion trail of at least one target to be detected.
An embodiment of the present invention provides a trajectory prediction apparatus, comprising: an acquisition module, an identification module, a reconstruction module and a prediction module; wherein:
the acquisition module is used for acquiring at least two images, and the at least two images respectively comprise at least one target to be detected;
the identification module is used for identifying the target to be detected in the at least two images and acquiring two-dimensional position information of the target to be detected in the at least two images;
the reconstruction module is used for calculating at least two distances between the target to be detected and the unmanned equipment in the at least two images based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit, and calculating at least two spatial positions of the target to be detected relative to the unmanned equipment based on the at least two distances between the target to be detected and the unmanned equipment;
the prediction module is used for predicting the motion trail of the target to be detected based on at least two spatial positions of each target to be detected.
In the above solution, the identification module includes: a detection module and a marking module; wherein:
the detection module is used for detecting the position of the target to be detected in the image based on a neural network algorithm;
the marking module is used for setting a 2-D frame of the target to be detected based on the position of the target to be detected in the image, and the central position of the 2-D frame is the central position of the target to be detected.
In the above solution, the reconstruction module comprises a vertical distance module,
the vertical distance module is used for calculating at least two vertical distances between the target to be detected in the at least two images and the positive direction of the unmanned equipment based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit;
and calculating at least two vertical distances between the target to be detected in the at least two images and the horizontal plane where the unmanned equipment is located based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit.
In the above scheme, the reconstruction module further comprises a depth distance module; wherein:
the depth distance module is used for calculating at least two distances between the target to be detected and the unmanned equipment in the at least two images based on the central position of the 2-D frame of the target to be detected in the at least two images, the height of the 2-D frame, the focal length of the acquisition unit, the central position of the images and the height of the unmanned equipment.
In the above scheme, the vertical distance module is specifically configured to calculate at least two vertical distances in a positive direction between the target to be detected in the at least two images and the unmanned equipment based on a center position of a 2-D frame of the target to be detected in the at least two images, a center position of the image, a focal length of the acquisition unit, and a distance between the target to be detected and the unmanned equipment;
and calculating at least two vertical distances between the target to be detected in the at least two images and the horizontal plane where the unmanned equipment is located based on the central position of the 2-D frame of the target to be detected in the at least two images, the central position of the images, the focal length of the acquisition unit and the distance between the target to be detected and the unmanned equipment.
In the above scheme, the prediction module is configured to fit the motion trajectory of the target to be detected based on a polynomial parameter model, and predict the motion trajectory of the target to be detected;
or predicting the motion trail of the target to be detected based on a machine learning method.
In the above solution, the apparatus further comprises: a path adjustment module; wherein:
and the path adjusting module is used for adjusting the path planning of the unmanned equipment according to the predicted motion trail of the at least one target to be detected.
The invention also provides a computer-readable storage medium having a computer program stored thereon, wherein the computer program is adapted to perform any of the steps of the above-described method when executed by a processor.
The present invention also provides a trajectory prediction device, including: a processor and a memory for storing a computer program operable on the processor, wherein the processor is operable to perform any of the steps of the method described above when executing the computer program.
According to the track prediction method and device provided by the embodiment of the invention, the target image in the driving process of the unmanned equipment is continuously acquired through the acquisition unit, the two-dimensional position information of the target in the image can be obtained by carrying out identification detection on the basis of at least two acquired images, and the spatial position reconstruction of the target is further realized according to the obtained two-dimensional image data and the parameters of the acquisition unit; the embodiment of the invention realizes target space reconstruction by directly acquiring two-dimensional information, so that the operation is simpler and the realization is easier in practical application; after the spatial reconstruction of the target is completed, the motion track of the target can be predicted more accurately based on the spatial position, the feasibility of predicting the target track is well guaranteed, and a foundation is provided for the autonomous navigation of the unmanned equipment.
Drawings
FIG. 1 is a flow chart of a trajectory prediction method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a simplified imaging relationship;
FIG. 3 is a schematic diagram of coordinate system transformation at a point in space;
FIG. 4 is a schematic diagram of coordinate system transformation of a target to be detected in space;
FIG. 5 is a schematic diagram of an image coordinate system including an object to be detected;
FIG. 6 is a labeled graph of targets to be detected in a particular application;
FIG. 7 is a schematic structural diagram of a trajectory prediction apparatus according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a hardware structure of a trajectory prediction apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the following describes specific technical solutions of the present invention in further detail with reference to the accompanying drawings in the embodiments of the present invention. The following examples are intended to illustrate the invention only, without limiting its scope.
The track prediction method is applied to unmanned equipment, and the unmanned equipment at least comprises an acquisition unit; fig. 1 is a flowchart of a trajectory prediction method according to an embodiment of the present invention, where the method includes:
s101, acquiring at least two images, wherein each of the at least two images comprises at least one target to be detected;
s102, identifying a target to be detected in the at least two images, and acquiring two-dimensional position information of the target to be detected in the at least two images;
s103, calculating at least two distances between the target to be detected and the unmanned equipment in the at least two images based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit, and calculating at least two spatial positions of the target to be detected relative to the unmanned equipment based on the at least two distances between the target to be detected and the unmanned equipment;
and S104, predicting the motion trail of the target to be detected based on at least two spatial positions of each target to be detected.
The unmanned device may be an intelligent mobile device such as an automobile, an aircraft or a robot; the target to be detected may be a moving object such as a pedestrian or an automobile, or a static object that does not move, such as a tall building or a pillar.
The acquisition unit of the unmanned equipment comprises at least a monocular camera, and may further comprise other tools capable of acquiring data, such as video cameras or microphones; the acquisition unit may be a camera, a video camera, a still camera or a microphone, and this list is not exhaustive. In this embodiment, the acquisition unit is a single camera used to acquire data; shooting with one camera reduces equipment cost.
It should be further noted that, because the unmanned device takes multiple images while driving, an image may or may not include a target to be detected; when a captured image contains targets to be detected, it may contain only one target or several targets. For example, a captured image may contain no pedestrians, automobiles or other objects; it may contain only one pedestrian A; or it may contain one pedestrian A and one moving car B, and of course it may also include other objects in motion, which is not exhaustive.
Because the present invention provides a trajectory prediction method, and at least two spatial position coordinates of the same object are required to obtain its trajectory, the trajectory prediction method provided by the embodiment of the present invention requires that at least two of the acquired images contain the same target to be detected; for example, when ten images are acquired while the unmanned equipment is driving, at least two of the ten images must contain the same pedestrian A.
In step S101, the unmanned device regularly captures a plurality of images through a monocular camera configured for the unmanned device during the movement of the unmanned device; the method specifically comprises the following steps: the unmanned device is provided with a camera, and when the unmanned device is in a driving process, a picture is taken at preset time intervals, for example, an image is taken at intervals of 1s, so that a plurality of images can be obtained in the driving process of the unmanned device.
Here, it should be noted that the image captured at each time corresponds to a corresponding time stamp; the time stamps are used for representing the sequence of the shot images, and based on the sequence of the shot images, after the images are detected and processed and the spatial position information of the target is reconstructed, the calculated spatial positions can be sequenced according to the time stamps corresponding to the images, so that the action track of the target is determined.
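For illustration only (this is not part of the claimed method), the following minimal Python sketch shows one way the reconstructed spatial positions of a single target could be kept in timestamp order so that its action track can be recovered; the data structure and function names are assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Observation:
    timestamp: float                        # capture time of the image, in seconds
    position: Tuple[float, float, float]    # reconstructed (X, Y, Z) relative to the unmanned equipment

def build_track(observations: List[Observation]) -> List[Tuple[float, float, float]]:
    """Sort the per-image spatial positions of one target by timestamp to form its action track."""
    return [o.position for o in sorted(observations, key=lambda o: o.timestamp)]
```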
In step S102, recognizing the target to be detected in the at least two images, and acquiring two-dimensional position information of the target to be detected in the at least two images, specifically, detecting and recognizing the target in the at least two images acquired in step S101, that is, detecting and recognizing objects such as pedestrians and automobiles in the images, and acquiring positions of the objects such as pedestrians and automobiles in the images.
Further, the acquiring two-dimensional position information of the target to be detected in the at least two images includes:
detecting the position of a target to be detected in the image based on a neural network algorithm;
and setting a 2-D frame of the target to be detected based on the position of the target to be detected in the image, wherein the central position of the 2-D frame is the central position of the target to be detected.
Here, it should be noted that the target detection of the present embodiment may be performed by classifiers from existing detection systems; for example, the target to be detected in the image may be identified based on neural network algorithms such as r-cnn, fast-rcnn, yolo and ssd, and a 2-D frame may be set to mark the target to be detected. To detect the target, these detection systems run the classifier at different positions of the image, at uniform intervals over the whole image and at different scales. For example, the r-cnn algorithm first generates potential bounding boxes in the image and then runs the classifier on these boxes; after classification, the bounding boxes are refined, duplicate detections are eliminated, and the boxes are re-scored according to other objects in the scene, so as to find the specific location of the object in the image and determine what the object is.
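As a non-authoritative sketch of this detection step (assuming torchvision 0.13 or later and a pre-trained Faster R-CNN purely as one representative of the r-cnn family named above; any comparable detector could be substituted), a 2-D frame is derived from each detection and its center is taken as the center of the target to be detected:

```python
import torch
import torchvision

# Pre-trained detector used only as an example stand-in for the classifiers above.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_2d_frames(image_tensor, score_threshold=0.5):
    """image_tensor: float tensor of shape (3, H, W) with values in [0, 1].
    Returns a list of (I_x, I_y, W_b, H_b): center and size of each 2-D frame."""
    with torch.no_grad():
        output = model([image_tensor])[0]
    frames = []
    for box, score in zip(output["boxes"], output["scores"]):
        if score < score_threshold:
            continue
        x1, y1, x2, y2 = box.tolist()
        frames.append(((x1 + x2) / 2.0, (y1 + y2) / 2.0, x2 - x1, y2 - y1))
    return frames
```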
Fig. 2 is a schematic diagram showing the relationship between a pedestrian and the image taken by the camera. In Fig. 2, O is the camera on the unmanned device; the camera photographs pedestrian A to obtain an image containing pedestrian A, and (C_x, C_y) is the center position of that image. The center position of the image is a relative position with respect to the image border; specifically, the center position (C_x, C_y) of the image may be obtained by establishing an image coordinate system with a corner of the image as the origin of coordinates.
In step S103, since the actual spatial position of the object to be detected is calculated, 3 position parameters of XYZ axes of the object to be detected need to be obtained. After the two-dimensional position information of the target to be detected in the image is obtained, the space position of the target to be detected is calculated based on the camera imaging principle through the obtained two-dimensional position information of the target to be detected in the image and the parameters of the camera.
As shown in Fig. 3, a schematic diagram of the coordinate system conversion of a point in space, O is the optical center of the camera and M is the target to be detected; the camera at O photographs the target M to obtain an image containing M, and m is the imaging point of the target M on that image. O(X, Y, Z) is the camera coordinate system and p(x, y) is the image coordinate system. Calculating the spatial position of the target to be detected specifically means calculating its actual 3-dimensional coordinate position; in Fig. 3, this is the 3-dimensional spatial position (X, Y, Z) of the target M to be detected.
In step S103, at least two distances between the target to be detected and the unmanned equipment in the at least two images are calculated based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit, that is, the Z-axis parameter is calculated; since several images are acquired, one Z-axis parameter is obtained for the target to be detected in each image. For example, if two images are captured while the unmanned device is driving and both contain the same target to be detected, say a pedestrian A, the distance Z between pedestrian A and the unmanned equipment must be calculated for each of the two images, so that two Z-axis parameters are obtained.
It should be noted that the two-dimensional position information of the target to be detected in the image is information such as a center position of a 2-D frame of the target to be detected, a height of the 2-D frame, and the like; the parameters of the acquisition unit are parameters such as the focal length of a camera on the unmanned equipment, the height of the camera from the ground and the like.
Then, the calculating at least two distances between the target to be detected and the unmanned equipment in the at least two images based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit includes: calculating at least two distances between the target to be detected and the unmanned equipment in the at least two images based on the central position of the 2-D frame, the height of the 2-D frame, the focal length of the acquisition unit, the central position of the images and the height of the unmanned equipment of the target to be detected in the at least two images, wherein the distance Z between the target to be detected and the unmanned equipment can be calculated through the following formula:
Z = f_y / (I_y + 0.5*H_b - C_y) * h
wherein f_y is the focal length of the camera on the unmanned equipment, I_y is the y-axis coordinate of the center position of the 2-D frame of the target to be detected, H_b is the height of the 2-D frame of the target to be detected, C_y is the y-axis coordinate of the center position of the image, and h is the distance from the optical axis of the camera to the ground; when the camera is on top of the unmanned equipment, h is the height of the unmanned equipment.
FIG. 4 is a schematic diagram of the coordinate system transformation of a target to be detected in space. In FIG. 4, O is the optical center of the camera, f_x and f_y are the focal lengths of the camera, (C_x, C_y) defines the center position of the image, (I_x, I_y) defines the center position of the 2-D frame of the target to be detected in the image, W_b is the width of the 2-D frame of the target to be detected, H_b is the height of the 2-D frame of the target to be detected, and h is the distance from the optical axis of the camera to the ground. The focal lengths of the camera satisfy f_x = f_y.
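The following sketch simply evaluates the depth formula above in code; the function name and the example numbers are illustrative assumptions, not values from the patent.

```python
def depth_from_frame(f_y, I_y, H_b, C_y, h):
    """Distance Z between the target to be detected and the unmanned equipment:
    Z = f_y / (I_y + 0.5 * H_b - C_y) * h
    f_y: camera focal length (pixels); I_y: y coordinate of the 2-D frame center;
    H_b: height of the 2-D frame; C_y: y coordinate of the image center;
    h: distance from the camera optical axis to the ground."""
    return f_y / (I_y + 0.5 * H_b - C_y) * h

# Hypothetical values, for illustration only:
# depth_from_frame(f_y=800.0, I_y=420.0, H_b=160.0, C_y=360.0, h=1.2)  # ~6.86
```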
In step S103, in order to obtain the actual spatial position of the target to be detected, a Y-axis position parameter and an X-axis position parameter of the target to be detected in the actual space need to be calculated; namely said calculating at least two spatial positions of the object to be detected relative to the unmanned aerial vehicle, further comprises:
calculating at least two vertical distances between the target to be detected in the at least two images and the positive direction of the unmanned equipment based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit;
and calculating at least two vertical distances between the target to be detected in the at least two images and the horizontal plane where the unmanned equipment is located based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit.
The vertical distance between the target to be detected and the positive direction of the unmanned equipment is a position parameter of an X axis in the actual spatial position of the target to be detected; and the vertical distance between the target to be detected and the horizontal plane where the unmanned equipment is located is a position parameter of the Y axis in the actual spatial position of the target to be detected.
The method for calculating at least two vertical distances (X-axis position parameters) between the target to be detected in the at least two images and the positive direction of the unmanned equipment and at least two vertical distances (Y-axis position parameters) between the target to be detected and the horizontal plane where the unmanned equipment is located based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit comprises the following steps:
calculating at least two vertical distances in the positive direction between the target to be detected in the at least two images and the unmanned equipment based on the central position of the 2-D frame of the target to be detected in the at least two images, the central position of the images, the focal length of the acquisition unit and the distance between the target to be detected and the unmanned equipment;
and calculating at least two vertical distances between the target to be detected in the at least two images and the horizontal plane where the unmanned equipment is located based on the central position of the 2-D frame of the target to be detected in the at least two images, the central position of the images, the focal length of the acquisition unit and the distance between the target to be detected and the unmanned equipment.
Here, it should be noted that when the unmanned equipment takes the image from directly behind the target to be detected, the X-axis position parameter obtained when the spatial position of the target is calculated from its position in the captured image and the camera parameters is 0.
Based on the calculated distance Z between the target to be detected and the unmanned equipment, the vertical distance X between the target to be detected and the positive direction of the unmanned equipment can be calculated through the following formula:
X = (I_x - C_x) / f_x * Z
wherein f_x is the focal length of the camera on the unmanned equipment, I_x is the x-axis coordinate of the center position of the 2-D frame of the target to be detected, C_x is the x-axis coordinate of the center position of the image, and Z is the distance between the target to be detected and the unmanned equipment.
FIG. 5 is a schematic diagram of the image plane containing the target to be detected. In this embodiment, an image coordinate system is established with the upper left corner p of the image as the coordinate origin, and the rectangle in the image, namely the 2-D frame of the target to be detected, represents the position of the target to be detected in the image. Here, (C_x, C_y) defines the center position of the image, (I_x, I_y) defines the center position of the 2-D frame of the target to be detected in the image, and H_b is the height of the 2-D frame of the target to be detected. By the corresponding operations, the distance from the lower edge of the 2-D frame of the target to be detected to the optical axis is calculated as I_y + 0.5*H_b - C_y, and the distance from the center of the 2-D frame of the target to be detected to the vertical plane through the optical axis is calculated as I_x - C_x.
Similarly, based on the two-dimensional position information of the target to be detected in the image and the parameters of the acquisition unit, the vertical distance Y between the target to be detected and the horizontal plane where the unmanned equipment is located can be calculated through the camera imaging principle.
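A sketch of the corresponding code, again illustrative only: X follows the formula above, while the Y expression is an assumption derived from the same pinhole-camera relationship, since the text only states that Y is obtained through the camera imaging principle.

```python
def lateral_offset_x(I_x, C_x, f_x, Z):
    """Vertical distance X between the target and the positive (forward) direction
    of the unmanned equipment: X = (I_x - C_x) / f_x * Z (formula above)."""
    return (I_x - C_x) / f_x * Z

def vertical_offset_y(I_y, C_y, f_y, Z):
    """Assumed analogue for the vertical distance Y between the target and the
    horizontal plane of the unmanned equipment; not spelled out in the text."""
    return (I_y - C_y) / f_y * Z
```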
Here, it should be noted that when the imaging plane of the camera is not perpendicular to the ground, a similar result can be obtained simply by applying a projective transformation to the image; specifically, when an angle exists between the target in the captured image and the ground, a corresponding angle also exists between the actual spatial position of the target and the actual ground, and the angle between the target in the image and the ground is the angle between the camera imaging plane and the ground.
After the spatial position of the target to be detected is obtained through calculation, the motion trajectory of the target to be detected needs to be restored according to the obtained spatial position, and then the motion trend of the target to be detected at the next moment is predicted according to the restored motion trajectory. Here, at least two spatial positions of the object to be detected need to be obtained to recover the motion trajectory of the object to be detected.
The predicting of the motion trail of the target to be detected based on at least two spatial positions of each target to be detected comprises the following steps: fitting the motion trail of the target to be detected based on a polynomial parameter model, and predicting the motion trail of the target to be detected; or predicting the motion trail of the target to be detected based on a machine learning method.
It should be noted that, in this embodiment, the trajectory prediction of the target to be detected may use a simple polynomial parameter model to fit the recovered motion trajectory of the target, and the complexity of the polynomial parameter model is adjusted according to the length spanned by the trajectory. For example, if the motion trajectory of a target obtained from 5 s of continuous shooting is a straight line, it can be fitted by a two-term (linear) polynomial model; if the motion trajectory of a target obtained from 10 s of continuous shooting is a curve, it can be fitted by a higher-order polynomial parameter model whose parameters are set according to the shape of the curve. The continuity of the spatial positions serves as a supervision item for the image detection and tracking method, which ensures the extensibility of the whole algorithm.
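As an illustration of the polynomial fitting described above (a sketch only; the degree and prediction horizon are assumptions to be tuned according to the trajectory length), one polynomial per coordinate can be fitted and extrapolated:

```python
import numpy as np

def fit_and_predict(timestamps, positions, degree=1, horizon=1.0):
    """positions: sequence of (X, Y, Z) spatial positions ordered by timestamp.
    Fits one polynomial of the given degree per coordinate (degree 1 for a
    straight segment, higher for curved tracks) and extrapolates the position
    'horizon' seconds after the last observation."""
    t = np.asarray(timestamps, dtype=float)
    pts = np.asarray(positions, dtype=float)
    coeffs = [np.polyfit(t, pts[:, k], degree) for k in range(pts.shape[1])]
    t_future = t[-1] + horizon
    return np.array([np.polyval(c, t_future) for c in coeffs])
```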
The motion trajectory of the target to be detected can also be predicted by a machine learning method: the trajectories and features of a large number of dynamic targets are collected, and a classification model is trained and tested on these data, so as to predict the trajectory of the target to be detected at the next moment or moments; here, the collected features of a dynamic target may be its category, speed, scale, or the like.
After predicting the motion trajectory of the target to be detected based on the at least two spatial positions of each target to be detected, the method further comprises: and adjusting the path planning of the unmanned equipment according to the predicted motion trail of at least one target to be detected.
In the embodiment, the camera is arranged on the unmanned equipment, the set camera is used for shooting a plurality of images in the driving process of the unmanned equipment, the obstacle track in the driving process of the unmanned equipment is predicted through a series of algorithms based on the shot image information and camera parameters, and then the unmanned equipment can adjust the driving path of the unmanned equipment according to the predicted movement tracks of a plurality of obstacles, so that a foundation is laid for autonomous navigation of the unmanned equipment.
In order to express the content of the embodiment more clearly, the following description is made with reference to specific application cases:
as shown in fig. 6, during the operation of the unmanned device (e.g. robot) from the a place to the B place, people/vehicles moving around need to be sensed, the moving trend of the people/vehicles around is predicted, and the operation of the robot from the a place to the B place is completed. The method comprises the steps that a camera is arranged on a robot body, one camera is adopted to continuously shoot a plurality of images in the operation process of the robot from an A place to a B place, each image obtained by shooting is detected and processed, when an object exists in the image, namely an obstacle (such as a pedestrian, a vehicle and the like), the image is processed, the obstacle in the image is identified, and the spatial position of the obstacle is recovered according to a spatial algorithm; when a plurality of spatial positions of the same obstacle such as the pedestrian A are obtained, the obtained spatial positions of the pedestrian A are sequenced based on the time stamps corresponding to the images, the action track of the pedestrian A is further determined, and then the action trend of the pedestrian A at the next stage is predicted according to the action track.
Here, it should be noted that, in the course of predicting the trajectory, the action trajectories of the other obstacles and their action trends in the next stage are also obtained, so that while travelling from place A to place B the robot can avoid obstacles according to the action trends predicted in real time on the basis of the planned route from A to B, preparing the robot to complete the navigation from place A to place B.
This embodiment can also be carried out off-line: when the robot predicts targets (obstacles: persons/vehicles), the images of all targets captured by the camera arranged on the robot body during travel are detected by an off-line method, and data such as the size and movement speed of the targets (obstacles: persons/vehicles) in 3-D space are recovered from the detected images.
In the trajectory prediction method provided by this embodiment, a neural network algorithm is used to identify and detect a target in at least two acquired images, a spatial position of the target is recovered based on a 2-D frame of the target obtained by detection and parameters of an acquisition unit, and after at least two spatial positions of the target are obtained, the target motion trajectory can be recovered based on the at least two spatial positions, so as to predict a next-moment action trajectory of the target; because a large number of images of the target are collected and a simple and effective spatial position estimation method is used, the feasibility of directly predicting the track by using a machine learning method is ensured.
Based on the same technical concept as the previous embodiment, an embodiment of the present invention provides a trajectory prediction apparatus 200, as shown in fig. 7, the apparatus including: an acquisition module 201, an identification module 202, a reconstruction module 203 and a prediction module 204; wherein:
the acquisition module 201 is configured to acquire at least two images, where each of the at least two images includes at least one target to be detected;
the identification module 202 is configured to identify a target to be detected in the at least two images, and acquire two-dimensional position information of the target to be detected in the at least two images;
the reconstruction module 203 is configured to calculate a distance between the target to be detected and the unmanned device in the at least two images based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit, and calculate at least two spatial positions of the target to be detected relative to the unmanned device based on at least two distances between the target to be detected and the unmanned device;
the predicting module 204 is configured to predict a motion trajectory of each target to be detected based on at least two spatial positions of each target to be detected.
Here, the identification module 202 includes: a detection module 2021 and a marking module 2022; the detection module 2021 is configured to detect a position of the target to be detected in the image based on a neural network algorithm; the marking module 2022 is configured to set a 2-D frame of the target to be detected based on a position of the target to be detected in the image, where a center position of the 2-D frame is a center position of the target to be detected.
It should be noted that the target detection in this embodiment may identify the target to be detected in the image based on neural network algorithms such as r-cnn, fast-rcnn, yolo, ssd, and the like, and set the 2-D frame to mark the target to be detected, so as to obtain the specific position of the object on the image and what the object is.
Further, since the actual spatial position of the target to be detected is calculated, 3 position parameters of XYZ axes of the target to be detected need to be obtained; the reconstruction module 203 in the trajectory prediction apparatus 200 includes a vertical distance module 2031, where the vertical distance module 2031 is configured to calculate at least two vertical distances between the target to be detected in the at least two images and the positive direction of the unmanned aerial vehicle, based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit; and calculating at least two vertical distances between the target to be detected in the at least two images and the horizontal plane where the unmanned equipment is located based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit.
The reconstruction module 203 further includes a depth distance module 2032; the depth distance module 2032 is configured to calculate at least two distances between the target to be detected and the unmanned device based on the center position of the 2-D frame, the height of the 2-D frame, the focal length of the acquisition unit, the center position of the image, and the height of the unmanned device in the at least two images.
The vertical distance module 2031 is further configured to calculate at least two vertical distances in the positive direction between the target to be detected and the unmanned aerial vehicle in the at least two images based on the center position of the 2-D frame of the target to be detected in the at least two images, the center position of the image, the focal length of the acquisition unit, and the distance between the target to be detected and the unmanned aerial vehicle; and calculating at least two vertical distances between the target to be detected and the horizontal plane where the unmanned equipment is located based on the central position of the 2-D frame of the target to be detected in the at least two images, the central position of the images, the focal length of the acquisition unit and the distance between the target to be detected and the unmanned equipment.
The prediction module 204 is configured to fit the motion trajectory of the target to be detected based on the polynomial parameter model and predict the motion trajectory of the target to be detected; or predicting the motion trail of the target to be detected based on a machine learning method.
Here, the trajectory prediction apparatus 200 of the target further includes: a path adjustment module 205; the path adjusting module 205 is configured to adjust a path plan of the unmanned device according to the predicted motion trajectories of the multiple targets to be detected.
It should be noted that, because the principle of the trajectory prediction apparatus 200 for solving the problem is similar to the trajectory prediction method applied to the unmanned aerial vehicle, the specific implementation process and implementation principle of the trajectory prediction apparatus 200 can be referred to the foregoing method and implementation process, and the description of the implementation principle, and repeated details are not repeated.
The trajectory prediction device provided by this embodiment can detect and recognize, through the identification module, the multiple images shot while the unmanned equipment is driving, reconstruct the spatial positions of targets in the shot images through the reconstruction module, predict obstacles in the driving process of the unmanned equipment through the prediction module, and adjust the driving path of the unmanned equipment according to the predicted movement trajectories of multiple targets to be detected, laying a foundation for autonomous driving of the unmanned equipment.
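For readability only, a skeleton of how the modules of apparatus 200 could be composed in code; the class and method names are assumptions and do not describe the claimed device.

```python
class TrajectoryPredictionApparatus:
    """Mirrors the acquisition / identification / reconstruction / prediction
    modules 201-204 and the optional path adjustment module 205."""
    def __init__(self, acquisition, identification, reconstruction, prediction, path_adjustment=None):
        self.acquisition = acquisition          # acquires at least two images with timestamps
        self.identification = identification    # detects targets and returns 2-D frames
        self.reconstruction = reconstruction    # recovers spatial positions (X, Y, Z)
        self.prediction = prediction            # fits and predicts motion trajectories
        self.path_adjustment = path_adjustment  # optionally replans the driving path

    def run_once(self):
        images, timestamps = self.acquisition.capture()
        frames = [self.identification.detect(img) for img in images]
        positions = self.reconstruction.recover(frames, timestamps)
        trajectory = self.prediction.predict(positions)
        if self.path_adjustment is not None:
            self.path_adjustment.replan(trajectory)
        return trajectory
```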
Embodiments of the present invention further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the foregoing method embodiments, and the foregoing storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
An embodiment of the present invention further provides a trajectory prediction apparatus for a moving object, including: a processor and a memory for storing a computer program capable of running on the processor, wherein the processor is configured to execute the steps of the above-described method embodiments stored in the memory when running the computer program.
Fig. 8 is a schematic hardware structure diagram of the trajectory prediction apparatus provided in an embodiment of the present invention. The trajectory prediction apparatus 300 includes at least one processor 301 and a memory 302, and the various components of the trajectory prediction apparatus 300 of the moving object are coupled together by a bus system 303; it can be understood that the bus system 303 is used to enable communications among these components. The bus system 303 includes a power bus, a control bus and a status signal bus in addition to a data bus; for clarity of illustration, however, the various buses are all labeled as the bus system 303 in Fig. 8.
The memory 302 in the embodiment of the present invention is used to store various types of data to support the operation of the trajectory prediction apparatus 300 for a moving object. Examples of such data include: any computer program for operating on the trajectory prediction device 300 of a moving object, such as stored acquired image information, spatial position coordinates of an object to be detected, etc., a program implementing the method of an embodiment of the present invention may be contained in the memory 302.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (16)

1. A trajectory prediction method is applied to unmanned equipment, and the unmanned equipment at least comprises an acquisition unit; characterized in that the method comprises:
acquiring at least two images, wherein each of the at least two images comprises at least one target to be detected;
identifying a target to be detected in the at least two images, and acquiring two-dimensional position information of the target to be detected in the at least two images;
calculating at least two distances between the target to be detected and the unmanned equipment in the at least two images based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit, and calculating at least two spatial positions of the target to be detected relative to the unmanned equipment based on the at least two distances between the target to be detected and the unmanned equipment;
and predicting the motion trail of the target to be detected based on at least two spatial positions of each target to be detected.
2. The method according to claim 1, wherein the acquiring two-dimensional position information of the object to be detected in the at least two images comprises:
detecting the position of a target to be detected in the image based on a neural network algorithm;
and setting a 2-D frame of the target to be detected based on the position of the target to be detected in the image, wherein the central position of the 2-D frame is the central position of the target to be detected.
3. The method of claim 1, wherein the calculating at least two spatial positions of the target to be detected relative to the unmanned equipment further comprises:
calculating at least two vertical distances between the target to be detected in the at least two images and the positive direction of the unmanned equipment based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit;
and calculating at least two vertical distances between the target to be detected in the at least two images and the horizontal plane where the unmanned equipment is located based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit.
4. The method according to claim 3, wherein the calculating of the at least two distances between the target to be detected and the unmanned equipment in the at least two images based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit comprises:
and calculating at least two distances between the target to be detected and the unmanned equipment in the at least two images based on the central position of the 2-D frame, the height of the 2-D frame, the focal length of the acquisition unit, the central position of the images and the height of the unmanned equipment of the target to be detected in the at least two images.
5. The method according to claim 4, wherein the calculating at least two vertical distances from the target to be detected in the at least two images to the positive direction of the unmanned equipment and at least two vertical distances from the target to be detected in the at least two images to the horizontal plane where the unmanned equipment is located, based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit, comprises:
calculating at least two vertical distances between the target to be detected in the at least two images and the forward direction of the unmanned equipment based on the central position of the 2-D box of the target to be detected in the at least two images, the central position of the images, the focal length of the acquisition unit and the distance between the target to be detected and the unmanned equipment;
and calculating at least two vertical distances between the target to be detected in the at least two images and the horizontal plane where the unmanned equipment is located based on the central position of the 2-D box of the target to be detected in the at least two images, the central position of the images, the focal length of the acquisition unit and the distance between the target to be detected and the unmanned equipment.
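With the depth from claim 4, the two perpendicular distances of claim 5 follow from ordinary pinhole back-projection. This is again an assumption consistent with the listed quantities; the claim itself does not fix the formula.

```python
def spatial_position(u_centre, v_centre, c_x, c_y, focal_px, depth_m):
    """Pinhole back-projection: lateral offset from the forward direction
    and height offset from the device's horizontal plane, in metres."""
    x_lateral = (u_centre - c_x) * depth_m / focal_px   # signed distance to the forward axis
    y_vertical = (v_centre - c_y) * depth_m / focal_px  # signed distance to the horizontal plane
    return x_lateral, y_vertical, depth_m               # (X, Y, Z) relative to the device
```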
6. The method according to claim 1, wherein the predicting the motion trail of the target to be detected based on at least two spatial positions of each target to be detected comprises:
fitting the motion trail of the target to be detected based on a polynomial parameter model, and predicting the motion trail of the target to be detected;
or predicting the motion trail of the target to be detected based on a machine learning method.
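The polynomial-parameter-model branch of claim 6 can be sketched as an independent least-squares fit per coordinate axis over time; the degree and the NumPy routines are illustrative choices, not prescribed by the claim.

```python
import numpy as np

def fit_and_predict(observations, future_times, degree=2):
    """observations: [(t, (X, Y, Z)), ...]; returns one predicted (X, Y, Z)
    row per requested future timestamp."""
    ts = np.array([t for t, _ in observations], dtype=float)
    xyz = np.array([p for _, p in observations], dtype=float)   # shape (N, 3)
    deg = min(degree, len(ts) - 1)                               # avoid an under-determined fit
    coeffs = [np.polyfit(ts, xyz[:, k], deg) for k in range(3)]  # one polynomial per axis
    future = np.asarray(future_times, dtype=float)
    return np.stack([np.polyval(c, future) for c in coeffs], axis=1)
```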
7. The method according to claim 1, wherein after predicting the motion trail of the target to be detected based on at least two spatial positions of each target to be detected, the method further comprises:
adjusting the path planning of the unmanned equipment according to the predicted motion trail of at least one target to be detected.
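Claim 7 leaves the re-planning policy open. A minimal sketch of one possible trigger, assuming the planned path and each predicted trail are sampled at the same future time steps and using a hypothetical safety radius:

```python
import numpy as np

def needs_replanning(planned_path, predicted_trails, safety_radius_m=1.0):
    """planned_path: [(X, Y, Z), ...] for the unmanned equipment;
    predicted_trails: {target_id: [(X, Y, Z), ...]} from the prediction step.
    Re-plan if any target is predicted within the safety radius at the same step."""
    path = np.asarray(planned_path, dtype=float)
    for trail in predicted_trails.values():
        trail = np.asarray(trail, dtype=float)
        n = min(len(path), len(trail))
        if n and (np.linalg.norm(path[:n] - trail[:n], axis=1) < safety_radius_m).any():
            return True
    return False
```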
8. A trajectory prediction device, characterized in that the device comprises: an acquisition module, an identification module, a reconstruction module and a prediction module; wherein:
the acquisition module is used for acquiring at least two images, and the at least two images respectively comprise at least one target to be detected;
the identification module is used for identifying the target to be detected in the at least two images and acquiring two-dimensional position information of the target to be detected in the at least two images;
the reconstruction module is used for calculating at least two distances between the target to be detected and the unmanned equipment in the at least two images based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit, and calculating at least two spatial positions of the target to be detected relative to the unmanned equipment based on the at least two distances between the target to be detected and the unmanned equipment;
the prediction module is used for predicting the motion trail of the target to be detected based on at least two spatial positions of each target to be detected.
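The apparatus claims mirror the method claims module by module; a brief structural sketch of that decomposition (module and method names are assumptions, not from the claims) could look like:

```python
class TrajectoryPredictionDevice:
    """Composition of the four modules named in claim 8."""
    def __init__(self, acquisition, identification, reconstruction, prediction):
        self.acquisition = acquisition        # yields one image per call
        self.identification = identification  # image -> [(target_id, 2-D box)]
        self.reconstruction = reconstruction  # 2-D box -> (X, Y, Z) vs. the device
        self.prediction = prediction          # [(t, (X, Y, Z))] -> predicted trail

    def run(self, num_frames, future_times):
        tracks = {}
        for t in range(num_frames):
            image = self.acquisition()
            for target_id, box in self.identification(image):
                tracks.setdefault(target_id, []).append((t, self.reconstruction(box)))
        return {tid: self.prediction(obs, future_times)
                for tid, obs in tracks.items() if len(obs) >= 2}
```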
9. The apparatus of claim 8, wherein the identification module comprises: a detection module and a marking module; wherein:
the detection module is used for detecting the position of the target to be detected in the image based on a neural network algorithm;
the marking module is used for setting a 2-D box of the target to be detected based on the position of the target to be detected in the image, and the central position of the 2-D box is the central position of the target to be detected.
10. The apparatus of claim 8, wherein the reconstruction module comprises a vertical distance module,
the vertical distance module is used for calculating at least two vertical distances between the target to be detected in the at least two images and the forward direction of the unmanned equipment based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit;
and calculating at least two vertical distances between the target to be detected in the at least two images and the horizontal plane where the unmanned equipment is located based on the two-dimensional position information of the target to be detected in the at least two images and the parameters of the acquisition unit.
11. The apparatus of claim 10, wherein the reconstruction module further comprises a depth distance module; wherein:
the depth distance module is used for calculating at least two distances between the target to be detected and the unmanned equipment in the at least two images based on the central position of the 2-D box of the target to be detected in the at least two images, the height of the 2-D box, the focal length of the acquisition unit, the central position of the images and the height of the unmanned equipment.
12. The apparatus of claim 11,
the vertical distance module is specifically used for calculating at least two vertical distances between the target to be detected in the at least two images and the forward direction of the unmanned equipment based on the central position of the 2-D box of the target to be detected in the at least two images, the central position of the images, the focal length of the acquisition unit and the distance between the target to be detected and the unmanned equipment;
and calculating at least two vertical distances between the target to be detected in the at least two images and the horizontal plane where the unmanned equipment is located based on the central position of the 2-D box of the target to be detected in the at least two images, the central position of the images, the focal length of the acquisition unit and the distance between the target to be detected and the unmanned equipment.
13. The apparatus of claim 8,
the prediction module is used for fitting the motion trail of the target to be detected based on a polynomial parameter model and predicting the motion trail of the target to be detected;
or predicting the motion trail of the target to be detected based on a machine learning method.
14. The apparatus of claim 8, further comprising: a path adjustment module; wherein:
the path adjustment module is used for adjusting the path planning of the unmanned equipment according to the predicted motion trail of the at least one target to be detected.
15. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
16. A trajectory prediction device, comprising: a processor and a memory for storing a computer program operable on the processor, wherein the processor is operable to perform the steps of the method of any of claims 1 to 7 when the computer program is executed.
CN201910127531.XA 2019-02-20 2019-02-20 Track prediction method, device and storage medium Pending CN111598920A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910127531.XA CN111598920A (en) 2019-02-20 2019-02-20 Track prediction method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910127531.XA CN111598920A (en) 2019-02-20 2019-02-20 Track prediction method, device and storage medium

Publications (1)

Publication Number Publication Date
CN111598920A true CN111598920A (en) 2020-08-28

Family

ID=72192071

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910127531.XA Pending CN111598920A (en) 2019-02-20 2019-02-20 Track prediction method, device and storage medium

Country Status (1)

Country Link
CN (1) CN111598920A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104200689A (en) * 2014-08-28 2014-12-10 长城汽车股份有限公司 Road early warning method and device
CN104700414A (en) * 2015-03-23 2015-06-10 华中科技大学 Rapid distance-measuring method for pedestrian on road ahead on the basis of on-board binocular camera
US10031526B1 (en) * 2017-07-03 2018-07-24 Baidu Usa Llc Vision-based driving scenario generator for autonomous driving simulation
DE102018105014A1 (en) * 2017-03-06 2018-09-06 GM Global Technology Operations LLC FORECAST ALGORITHM FOR A VEHICLE CRASH USING A RADAR SENSOR AND A UPA SENSOR
US20190049970A1 (en) * 2017-08-08 2019-02-14 Uber Technologies, Inc. Object Motion Prediction and Autonomous Vehicle Control


Similar Documents

Publication Publication Date Title
Li et al. End-to-end contextual perception and prediction with interaction transformer
CN108051002B (en) Transport vehicle space positioning method and system based on inertial measurement auxiliary vision
Asadi et al. Vision-based integrated mobile robotic system for real-time applications in construction
US10878288B2 (en) Database construction system for machine-learning
Schneider et al. Pedestrian path prediction with recursive bayesian filters: A comparative study
Fernández-Caballero et al. Optical flow or image subtraction in human detection from infrared camera on mobile robot
CN114474061B (en) Cloud service-based multi-sensor fusion positioning navigation system and method for robot
JP5782088B2 (en) System and method for correcting distorted camera images
CN107491071B (en) Intelligent multi-robot cooperative mapping system and method thereof
CN112740274A (en) System and method for VSLAM scale estimation on robotic devices using optical flow sensors
Zou et al. Real-time full-stack traffic scene perception for autonomous driving with roadside cameras
JP2016009487A (en) Sensor system for determining distance information on the basis of stereoscopic image
CN114723955A (en) Image processing method, device, equipment and computer readable storage medium
CN114419098A (en) Moving target trajectory prediction method and device based on visual transformation
Cardarelli et al. Multisensor data fusion for obstacle detection in automated factory logistics
EP3555854B1 (en) A method of tracking objects in a scene
US20210382495A1 (en) Method for representing an environment of a mobile platform
Hirose et al. ExAug: Robot-conditioned navigation policies via geometric experience augmentation
Golovnin et al. Video processing method for high-definition maps generation
Gayanov et al. Estimating the trajectory of a thrown object from video signal with use of genetic programming
Vatavu et al. Modeling and tracking of dynamic obstacles for logistic plants using omnidirectional stereo vision
Pathirana et al. Robust video/ultrasonic fusion-based estimation for automotive applications
Rabie et al. Mobile active‐vision traffic surveillance system for urban networks
CN111598920A (en) Track prediction method, device and storage medium
CN115222815A (en) Obstacle distance detection method, obstacle distance detection device, computer device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination