CN113112524A - Method and device for predicting track of moving object in automatic driving and computing equipment

Method and device for predicting track of moving object in automatic driving and computing equipment

Info

Publication number
CN113112524A
Authority
CN
China
Prior art keywords
data
moving object
current image
image frame
obtaining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110429261.5A
Other languages
Chinese (zh)
Other versions
CN113112524B (en)
Inventor
郭波
黄硕
朱磊
贾双成
李成军
李倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhidao Network Technology Beijing Co Ltd
Original Assignee
Zhidao Network Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhidao Network Technology Beijing Co Ltd
Priority to CN202110429261.5A
Publication of CN113112524A
Application granted
Publication of CN113112524B
Status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/277Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a method, an apparatus, and a computing device for predicting the trajectory of a moving object in automatic driving. The prediction method comprises the following steps: obtaining a video stream of the environment surrounding an autonomous vehicle; obtaining detection data of a moving object from a current image frame in the video stream through a preset target detector; obtaining state estimation data of the moving object in the next frame according to the detection data of the moving object and a preset Kalman filter; obtaining road-related feature data corresponding to the current image frame; and obtaining movement trajectory prediction data of the moving object within a preset time length after the current image frame according to the state estimation data of the moving object in the next frame, the road-related feature data, and a pre-stored trajectory prediction model. The scheme provided by the application improves the accuracy of predicting the movement trajectory of a moving object, which is conducive to improving the driving safety of the autonomous vehicle.

Description

Method and device for predicting track of moving object in automatic driving and computing equipment
Technical Field
The present application relates to the field of automatic driving technologies, and in particular, to a method and an apparatus for predicting a trajectory of a moving object in automatic driving, and a computing device.
Background
Currently, in the field of automatic driving, research on moving objects mostly focuses on their detection, identification, and tracking, that is, on the current position of a moving object, rather than on predicting its future position. When an autonomous vehicle detects a moving object passing in front of it, it stops and waits for the object to clear the safety envelope before continuing to drive.
In the related art, the main problem with the intelligent decision-making behavior of an autonomous vehicle facing a moving object is that the moving object is treated as a generic obstacle for prediction. As a result, trajectory prediction accuracy for moving objects in real scenes is low and cannot meet practical requirements.
Disclosure of Invention
In order to solve the problems in the related art, the present application provides a method, an apparatus, and a computing device for predicting the movement trajectory of a moving object in automatic driving, which can improve the accuracy of trajectory prediction and thereby improve the driving safety of an autonomous vehicle.
The first aspect of the present application provides a trajectory prediction method for a moving object in automatic driving, including:
S11: obtaining a video stream of the environment surrounding an autonomous vehicle;
S12: obtaining detection data of a moving object from a current image frame in the video stream through a preset target detector;
S13: obtaining state estimation data of the moving object in the next frame according to the detection data of the moving object and a preset Kalman filter;
S14: obtaining road-related feature data corresponding to the current image frame;
S15: obtaining movement trajectory prediction data of the moving object within a preset time length after the current image frame according to the state estimation data, the road-related feature data, and a pre-stored trajectory prediction model.
According to an embodiment of the present application, obtaining the state estimation data of the moving object in the next frame according to the detection data of the moving object and a preset Kalman filter specifically includes: obtaining Kalman filter state estimation data of the moving object in the previous frame; obtaining state observation data of the moving object in the current image frame; and obtaining the state estimation data of the moving object in the next frame according to the previous-frame Kalman filter state estimation data and the current-frame state observation data.
In some embodiments, obtaining the road-related feature data corresponding to the current image frame specifically includes: obtaining position data and heading angle data of the autonomous vehicle at the same moment as the current image frame; and obtaining the road-related feature data corresponding to the current image frame according to the position data, the heading angle data, pre-stored high-precision map data, and the current image frame.
In some embodiments, obtaining detection data of a moving object from the current image frame in the video stream through a preset target detector specifically includes: obtaining a plurality of pieces of detection data corresponding to a plurality of moving objects. Obtaining the state estimation data of the moving object in the next frame according to the detection data and a preset Kalman filter then specifically includes: obtaining, for the current image frame and the previous frame in the video stream, matching data between the plurality of pieces of detection data in the current image frame and the plurality of pieces of detection data in the previous frame through a preset multi-target tracking algorithm; and, for each moving object, according to the matching data: obtaining Kalman filter state estimation data of the moving object in the previous frame; obtaining state observation data of the moving object in the current image frame; and obtaining the state estimation data of the moving object in the next frame according to the previous-frame Kalman filter state estimation data and the current-frame state observation data.
In some embodiments, after the video stream of the environment surrounding the autonomous vehicle is obtained, a plurality of image frames are obtained from the video stream, S12 to S15 are performed with each image frame in turn as the current image frame to obtain a plurality of pieces of trajectory prediction data for the same moving object, and a single piece of trajectory prediction data is then obtained from the plurality of pieces.
In some embodiments, detecting the moving object through a preset target detector specifically includes: detecting the moving object through a YOLOv5 detection network.
In some embodiments, the state estimation data comprises at least part of direction data, velocity data and position data of the moving object; and/or the road-related feature data comprises at least part of road route data, marking data and intersection data around the location of the moving object.
A second aspect of the present application provides an apparatus for predicting the trajectory of a moving object in automatic driving, including: a first acquisition module for obtaining a video stream of the environment surrounding an autonomous vehicle; a target detection module for obtaining detection data of a moving object from a current image frame in the video stream through a preset target detector; a state estimation module for obtaining state estimation data of the moving object in the next frame according to the detection data of the moving object and a preset Kalman filter; a second acquisition module for obtaining road-related feature data corresponding to the current image frame; and a trajectory prediction module for obtaining movement trajectory prediction data of the moving object within a preset time length after the current image frame according to the state estimation data, the road-related feature data, and a pre-stored trajectory prediction model.
A third aspect of the present application provides a computing device comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method as described above.
According to the method for predicting the trajectory of a moving object in automatic driving provided by the embodiments of the present application, the state estimation data of the moving object in the next frame is obtained from the detection data of the moving object in the current image frame and a preset Kalman filter; movement trajectory prediction data of the moving object within a preset time length after the current image frame is then obtained from that state estimation data, the road-related feature data corresponding to the current image frame, and a pre-stored trajectory prediction model. Because the Kalman filter yields comparatively accurate state estimation data for the next frame, and the road-related feature data is additionally used in predicting the movement trajectory, the trajectory prediction of the moving object is more accurate, which improves the safety of the autonomous vehicle.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The foregoing and other objects, features and advantages of the application will be apparent from the following more particular descriptions of exemplary embodiments of the application, as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the application.
Fig. 1 is a schematic flowchart of a trajectory prediction method for a moving object in automatic driving according to an embodiment of the present application;
Fig. 2 is a schematic flowchart of a trajectory prediction method for a moving object in automatic driving according to another embodiment of the present application;
Fig. 3 is a schematic block diagram of a trajectory prediction device according to an embodiment of the present application;
Fig. 4 is a schematic block diagram of a computing device according to an embodiment of the present application.
Description of reference numerals:
100. a trajectory prediction device; 310. a first acquisition module; 320. a target detection module; 330. a state estimation module; 340. a second acquisition module; 350. a trajectory prediction module;
400. a computing device; 410. a memory; 420. a processor.
Detailed Description
Preferred embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first," "second," "third," etc. may be used herein to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
In the related art, the main problem of intelligent decision-making behavior of an automatic driving automobile when facing a moving object is that the moving object is used as a general obstacle for prediction, so that the track prediction accuracy of the moving object in an actual scene is low, and the actual requirement cannot be met.
In view of the foregoing problems, embodiments of the present application provide a method and an apparatus for predicting a moving trajectory of a moving object during autonomous driving, which can improve the driving safety of an autonomous vehicle by improving the accuracy of predicting the moving trajectory of the moving object.
The technical solutions of the embodiments of the present application are described in detail below with reference to the accompanying drawings. It is to be appreciated that aspects of the present disclosure may be performed by a computing device in an autonomous vehicle, but are not limited to such, and may also be performed by a cloud server, for example.
Fig. 1 is a schematic flowchart of a trajectory prediction method of a moving object in automatic driving according to an embodiment of the present application. Referring to fig. 1, the trajectory prediction method of the present embodiment includes:
s11: a video stream of an environment surrounding an autonomous vehicle is obtained.
One or more capture devices may be provided on the autonomous vehicle to capture a video stream of the surrounding environment, particularly in front of the vehicle, while the vehicle travels. The capture device transmits the captured video stream to a computing device of the autonomous vehicle, or to a cloud server over a mobile network.
S12: and obtaining the detection data of the moving object through a preset target detector for the current image frame in the video stream.
In some embodiments, the video stream of the vehicle's surroundings is captured by an on-board camera of the autonomous vehicle. The on-board camera may introduce lens distortion when acquiring images, so before moving-object detection is performed, distortion correction may be applied to the current image frame so that the corrected image is as close to reality as possible. This improves the accuracy of moving-object trajectory prediction based on images collected by the camera; a minimal sketch follows.
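The sketch below illustrates such a correction with OpenCV, assuming the camera intrinsics and distortion coefficients were obtained in advance (e.g. via cv2.calibrateCamera); the numeric values and the frame path are placeholders, not values from this application.

```python
import cv2
import numpy as np

# Placeholder intrinsics and distortion coefficients (k1, k2, p1, p2, k3);
# real values come from a prior camera calibration.
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])
dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])

frame = cv2.imread("current_frame.jpg")      # hypothetical frame path
undistorted = cv2.undistort(frame, K, dist)  # corrected image used downstream
```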
Trajectory prediction for a moving object begins with detecting the moving object in the image. In the present specification, one type of moving object is described as an example, but it is to be understood that the moving object is not limited thereto and may also be, for example, another vehicle or the like.
In some embodiments, moving-object detection may be performed, for example, through a YOLO (You Only Look Once) v5 detection network. The YOLOv5 detection network can quickly and accurately identify a target moving object in an environment image. It is understood that the detection of the moving object may instead be performed by other target detection methods, such as a region-based convolutional neural network (R-CNN), SSD (Single Shot MultiBox Detector), or other detectors in the YOLO series; a hedged sketch follows.
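A minimal sketch of running the publicly released YOLOv5 detector; the specific weights (yolov5s) and the frame path are illustrative assumptions, as the application does not specify them.

```python
import torch

# Load the public YOLOv5 small model from the ultralytics hub repository.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

results = model("current_frame.jpg")  # hypothetical frame path
# Each detection row: x1, y1, x2, y2, confidence, class index.
detections = results.xyxy[0]
print(detections)
```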
The detection data of the moving object output by the target detector may be position data indicating a bounding box of the moving object, for example the center coordinates and size of the bounding box, or alternatively the vertex coordinates of the bounding box. The two encodings carry the same information and convert freely, as sketched below.
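A small sketch of converting between the two bounding-box encodings mentioned above; the function names are illustrative.

```python
def center_to_corners(cx, cy, w, h):
    """Center coordinates plus size -> top-left and bottom-right vertices."""
    return cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2

def corners_to_center(x1, y1, x2, y2):
    """Vertex coordinates -> center coordinates plus size."""
    return (x1 + x2) / 2, (y1 + y2) / 2, x2 - x1, y2 - y1
```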
It is understood that if a plurality of moving objects exist in the current image frame, respective detection data of each moving object is obtained.
S13: and obtaining the state estimation data of the moving object in the next frame according to the detection data of the moving object and a preset Kalman filter.
In the embodiments of the present application, a Kalman filter may be used to predict the state of the moving object at the next time step. In a dynamic system containing uncertain information, Kalman filtering can predict the system's next evolution. The actual motion of a moving object deviates from ideal motion; the Kalman filter incorporates this error into its computation, so comparatively accurate state estimation data for the moving object in the next frame can be obtained.
In one implementation, obtaining the state estimation data of the moving object in the next frame according to the detection data of the moving object and a preset Kalman filter may include: obtaining Kalman filter state estimation data of the moving object in the previous frame; obtaining state observation data of the moving object in the current image frame; and then obtaining the state estimation data of the moving object in the next frame according to the previous-frame Kalman filter state estimation data and the current-frame state observation data. The state estimation data may include velocity estimation data and pose (position and/or orientation) estimation data of the moving object. The state observation data may include velocity observation data and pose observation data of the moving object, and may be obtained by known methods, for example by combining computer vision and image processing techniques with satellite positioning data of the autonomous vehicle and/or measurement data of an inertial measurement unit, which are not described in detail in this application. A minimal filter sketch follows.
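The sketch below shows a constant-velocity Kalman filter in NumPy. The application does not disclose the state vector or the noise covariances, so the state [x, y, vx, vy], the frame interval, and the matrices below are illustrative assumptions.

```python
import numpy as np

dt = 1.0 / 30.0                       # assumed frame interval (30 fps)
F = np.array([[1, 0, dt, 0],          # state transition: constant velocity
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],           # only position is observed
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-2                  # assumed process noise covariance
R = np.eye(2) * 1.0                   # assumed observation noise covariance

def predict(x, P):
    """Propagate the previous-frame estimate (x, P) to the next frame."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Fuse the propagated estimate with the current-frame observation z."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```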
S14: and acquiring road related feature data corresponding to the current image frame.
In the present application, the movement trajectory of a moving object is predicted in combination with the road-related feature data around the moving object. The road-related feature data may be recognized and extracted from pre-stored high-precision map data and the current image frame.
In one implementation, obtaining the road-related feature data corresponding to the current image frame specifically includes: obtaining position data and heading angle data of the autonomous vehicle at the same moment as the current image frame, and obtaining road-related map data corresponding to the current image frame according to the position data, the heading angle data, and pre-stored high-precision map data; and recognizing and obtaining real-time road feature data from the current image frame.
The position data and heading angle data of the autonomous vehicle can be obtained, for example, from satellite positioning data of the autonomous vehicle and/or measurement data of an inertial measurement unit.
The road-related feature data comprises road-related map data as well as real-time road feature data (e.g. real-time traffic signal indications, and the poses and speeds of other moving objects). The road-related map data includes, for example, at least part of the road route data around the position of the moving object, marking data (e.g. lane data such as motor lanes, bicycle lanes, and sidewalks), and intersection data (e.g. intersections and crosswalks). A sketch of selecting such features around an object follows.
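An illustrative sketch of gathering road-related map features around a moving object; the element schema (dictionaries with a kind label and a polyline) and the 30 m radius are hypothetical, since the application does not fix a map format.

```python
import math

def nearby_road_features(map_elements, obj_xy, radius=30.0):
    """Return map elements with any vertex within `radius` meters of obj_xy.

    map_elements: e.g. [{"kind": "crosswalk", "points": [(x, y), ...]}, ...]
    """
    ox, oy = obj_xy
    return [elem for elem in map_elements
            if any(math.hypot(px - ox, py - oy) <= radius
                   for px, py in elem["points"])]
```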
S15: and inputting the state estimation data of the moving object in the next frame and the road related characteristic data into a prestored track prediction model to obtain the moving track prediction data of the moving object in the preset time length after the current image frame.
The trajectory prediction model may be stored in advance. The state estimation data of the moving object in the next frame and the road-related feature data are taken as input data, and the pre-stored trajectory prediction model performs model inference on them to obtain the movement trajectory prediction data of the moving object within the preset time length after the current image frame.
It is understood that the pre-stored trajectory prediction model may be obtained from a network or a server. Alternatively, a trajectory prediction model can be constructed with a deep learning algorithm and trained on a pre-obtained training data set, iterating continuously until convergence to finally obtain the trained trajectory prediction model, as sketched below.
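One possible form of such a model is sketched below. The application only states that a deep-learning model is trained to convergence; this small PyTorch regressor over the concatenated state estimate and a road-feature vector, its dimensions, and the MSE objective are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    """Maps (state estimate, road features) to a future (x, y) trajectory."""

    def __init__(self, state_dim=4, road_dim=16, horizon=30):
        super().__init__()
        self.horizon = horizon
        self.net = nn.Sequential(
            nn.Linear(state_dim + road_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, horizon * 2),   # (x, y) per future step
        )

    def forward(self, state, road_feat):
        out = self.net(torch.cat([state, road_feat], dim=-1))
        return out.view(-1, self.horizon, 2)

model = TrajectoryPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
# Training loop skeleton over a dataset of (state, road_feat, future_xy):
# for state, road_feat, future_xy in loader:
#     loss = loss_fn(model(state, road_feat), future_xy)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```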
In some embodiments, after the video stream of the environment surrounding the autonomous vehicle is obtained, a plurality of image frames are obtained from the video stream, S12 to S15 are performed with each image frame in turn as the current image frame to obtain a plurality of pieces of trajectory prediction data for the same moving object, and a single piece of trajectory prediction data is obtained from them. Specifically, for example, one of the plurality of trajectories may be selected according to a preset rule, or the plurality of trajectories may be merged into one trajectory according to a preset method, such as the point-wise averaging sketched below.
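A simple sketch of one such merging rule; point-wise averaging over the common horizon is an illustrative choice, not mandated by the application.

```python
import numpy as np

def merge_trajectories(trajectories):
    """Average per-frame predictions for one object.

    trajectories: list of (T_i, 2) arrays predicted from successive frames,
    already aligned to a common start time.
    """
    t_min = min(len(t) for t in trajectories)
    stacked = np.stack([t[:t_min] for t in trajectories])  # (N, t_min, 2)
    return stacked.mean(axis=0)
```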
According to the trajectory prediction method for a moving object in automatic driving provided by the embodiments of the present application, the state estimation data of the moving object in the next frame is obtained from the detection data of the moving object in the current image frame and a preset Kalman filter; movement trajectory prediction data of the moving object within a preset time length after the current image frame is then obtained from that state estimation data, the road-related feature data corresponding to the current image frame, and a pre-stored trajectory prediction model. Because the Kalman filter yields comparatively accurate state estimation data for the next frame, and the road-related feature data is additionally used in predicting the movement trajectory, the trajectory prediction of the moving object is more accurate, which improves the safety of the autonomous vehicle.
Fig. 2 illustrates a trajectory prediction method of a moving object in autonomous driving according to another embodiment of the present application. Referring to fig. 2, the method of the present embodiment includes:
s21: a video stream of an environment surrounding an autonomous vehicle is obtained.
S22: and obtaining a plurality of detection data corresponding to a plurality of moving objects through a preset target detector for a current image frame in the video stream.
The plurality of pieces of detection data corresponding to the plurality of obtained moving objects may be, for example, a plurality of sets of position data indicating a plurality of bounding boxes of the plurality of moving objects. Each set of position data may be, for example, the center coordinates and the scale size of the corresponding bounding box, or may also be the vertex coordinates of the bounding box, or the like.
S23: the method comprises the steps of obtaining a plurality of detection data of a plurality of moving objects in a current image frame and a plurality of detection data of a plurality of moving objects in a previous image frame in a video stream through a preset multi-target tracking algorithm.
When there are multiple moving objects in an image, the same moving object must be tracked and matched across different frames according to the detection results. In this embodiment, this may be implemented, for example but not limited to, through the well-known Hungarian algorithm, which is not described in detail in this application; a hedged sketch follows.
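The sketch below pairs boxes across frames with the Hungarian algorithm via SciPy. The application names the algorithm but not the affinity, so the IoU cost and the 0.3 acceptance threshold are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection-over-union of boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def match(prev_boxes, curr_boxes, min_iou=0.3):
    """Return (previous index, current index) pairs for the same object."""
    cost = np.array([[1.0 - iou(p, c) for c in curr_boxes]
                     for p in prev_boxes])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols)
            if 1.0 - cost[r, c] >= min_iou]
```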
S24: and according to the matching data, for each moving object, obtaining state estimation data of the moving object in the next frame according to the detection data of the moving object and a preset Kalman filter.
A kalman filter may be employed to predict the state of the moving object at the next time.
In one implementation, obtaining the state estimation data of the moving object in the next frame according to the detection data of the moving object and a preset Kalman filter may include: obtaining Kalman filter state estimation data of the same moving object in the previous frame according to the matching data; obtaining state observation data of the moving object in the current image frame; and obtaining the state estimation data of the moving object in the next frame according to the previous-frame Kalman filter state estimation data and the current-frame state observation data.
S25: acquiring road related feature data corresponding to a current image frame;
the road-related feature data around the moving object may be obtained by recognition and extraction from pre-stored high-precision map data and the current image frame.
In one implementation, obtaining the road-related feature data corresponding to the current image frame specifically includes: obtaining position data and heading angle data of the autonomous vehicle at the same moment as the current image frame, and obtaining road-related map data corresponding to the current image frame according to the position data, the heading angle data, and pre-stored high-precision map data; and recognizing and obtaining real-time road feature data from the current image frame.
S26: and respectively inputting the state estimation data and road related characteristic data of each moving object in the next frame into a prestored track prediction model to obtain the moving track prediction data of each moving object in a preset time length after the current image frame.
The trajectory prediction model may be stored in advance. The state estimation data of each moving object in the next frame and the road-related feature data are taken as input data, and the pre-stored trajectory prediction model performs model inference on them to obtain the movement trajectory prediction data of the moving object within the preset time length after the current image frame.
After the movement trajectory prediction data of the plurality of moving objects are obtained, a driving decision for the autonomous vehicle may be made based on them; one simple check is sketched below.
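An illustrative safety check built on the predicted trajectories: flag any moving object whose predicted path comes within a clearance threshold of the ego vehicle's planned path at the same time step. The 2 m clearance and the rule itself are assumptions, not part of the application.

```python
import numpy as np

def requires_yield(ego_path, predicted_trajs, clearance=2.0):
    """ego_path: (T, 2) planned ego positions; predicted_trajs: list of
    (T_i, 2) predicted object positions; all coordinates in meters."""
    for traj in predicted_trajs:
        t = min(len(ego_path), len(traj))
        dists = np.linalg.norm(ego_path[:t] - traj[:t], axis=1)
        if np.any(dists < clearance):
            return True   # stop or slow down and wait for the object
    return False
```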
Corresponding to the embodiment of the method for predicting the track of the moving object in automatic driving, the application also provides a track prediction device.
Fig. 3 is a schematic structural diagram of a trajectory prediction device for a moving object in automatic driving according to an embodiment of the present application.
Referring to fig. 3, the trajectory prediction apparatus 100 provided in this embodiment includes:
a first obtaining module 310 for obtaining a video stream of an environment surrounding an autonomous vehicle;
the target detection module 320 is configured to obtain detection data of a moving object through a preset target detector for a current image frame in the video stream;
the state estimation module 330 is configured to obtain state estimation data of the moving object in a next frame according to the detection data of the moving object and a preset kalman filter;
a second obtaining module 340, configured to obtain road-related feature data corresponding to the current image frame;
and a trajectory prediction module 350, configured to obtain movement trajectory prediction data of the moving object within a preset time length after the current image frame according to the state estimation data of the moving object in the next frame, the road-related feature data, and a pre-stored trajectory prediction model.
In some embodiments, the state estimation module 330 obtains the state estimation data of the moving object in the next frame according to the moving object detection data and a preset kalman filter, and specifically includes:
acquiring Kalman filtering state estimation data of the moving object in a previous frame;
obtaining state observation data of the moving object in a current image frame;
and obtaining the state estimation data of the moving object in the next frame according to the Kalman filtering state estimation data of the moving object in the previous frame and the state observation data in the current image frame.
In some embodiments, the second obtaining module 340 obtains the road-related feature data corresponding to the current image frame, specifically including:
obtaining position data and course angle data of the automatic driving vehicle and the current image frame at the same time;
and obtaining road related characteristic data corresponding to the current image frame according to the position data, the course angle data, and pre-stored high-precision map data and the current image frame.
Fig. 4 is a block diagram of a computing device 400 according to an embodiment of the present application. The computing device of this embodiment may be, for example, a device mounted on an autonomous vehicle, or a cloud server.
Referring to fig. 4, computing device 400 includes memory 410 and processor 420.
The processor 420 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 410 may include various types of storage units, such as system memory, read-only memory (ROM), and permanent storage. The ROM may store static data or instructions required by the processor 420 or other modules of the computer. The permanent storage may be a readable and writable storage device, and may be a non-volatile device that does not lose stored instructions and data even after the computer is powered off. In some embodiments, the permanent storage is a mass storage device (e.g. a magnetic or optical disk, or flash memory). In other embodiments, the permanent storage may be a removable storage device (e.g. a floppy disk or an optical drive). The system memory may be a readable and writable memory device or a volatile readable and writable memory device, such as dynamic random access memory, and may store instructions and data that some or all of the processors require at runtime. Further, the memory 410 may include any combination of computer-readable storage media, including various types of semiconductor memory chips (DRAM, SRAM, SDRAM, flash memory, programmable read-only memory) and magnetic and/or optical disks. In some embodiments, the memory 410 may include a readable and/or writable removable storage device, such as a compact disc (CD), a read-only digital versatile disc (e.g. DVD-ROM or dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density optical disc, a flash memory card (e.g. SD card, mini SD card, Micro-SD card), or a magnetic floppy disk. Computer-readable storage media do not contain carrier waves or transitory electronic signals transmitted by wireless or wired means.
The memory 410 has stored thereon executable code that, when processed by the processor 420, may cause the processor 420 to perform some or all of the methods described above.
The aspects of the present application have been described in detail hereinabove with reference to the accompanying drawings. In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments. Those skilled in the art should also appreciate that the acts and modules referred to in the specification are not necessarily required in the present application. In addition, it can be understood that the steps in the method of the embodiment of the present application may be sequentially adjusted, combined, and deleted according to actual needs, and the modules in the device of the embodiment of the present application may be combined, divided, and deleted according to actual needs.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present application, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A trajectory prediction method for a moving object in automatic driving, comprising:
S11: obtaining a video stream of an environment surrounding an autonomous vehicle;
S12: obtaining detection data of a moving object through a preset target detector for a current image frame in the video stream;
S13: obtaining state estimation data of the moving object in the next frame according to the detection data of the moving object and a preset Kalman filter;
S14: obtaining road-related feature data corresponding to the current image frame;
S15: obtaining movement trajectory prediction data of the moving object within a preset time length after the current image frame according to the state estimation data, the road-related feature data, and a pre-stored trajectory prediction model.
2. The method according to claim 1, wherein obtaining the state estimation data of the moving object in the next frame according to the detection data of the moving object and a preset Kalman filter specifically comprises:
acquiring Kalman filtering state estimation data of the moving object in a previous frame;
obtaining state observation data of the moving object in the current image frame;
and obtaining the state estimation data of the moving object in the next frame according to the Kalman filtering state estimation data of the previous frame and the state observation data in the current image frame.
3. The method of claim 2, wherein obtaining the road-related feature data corresponding to the current image frame specifically comprises:
obtaining position data and heading angle data of the autonomous vehicle at the same moment as the current image frame, and obtaining road-related map data corresponding to the current image frame according to the position data, the heading angle data, and pre-stored high-precision map data; and
identifying and obtaining preset real-time road feature data from the current image frame.
4. The method of claim 1, wherein:
obtaining detection data of a moving object for a current image frame in the video stream through a preset target detector, specifically comprising: obtaining a plurality of detection data corresponding to a plurality of moving objects through a preset target detector for a current image frame in the video stream;
obtaining state estimation data of the moving object in the next frame according to the detection data of the moving object and a preset Kalman filter, specifically comprising:
obtaining, for the current image frame and the previous frame in the video stream, matching data between the plurality of pieces of detection data of the plurality of moving objects in the current image frame and the plurality of pieces of detection data in the previous frame through a preset multi-target tracking algorithm; and,
for each moving object, according to the matching data:
acquiring Kalman filtering state estimation data of the moving object in a previous frame;
obtaining state observation data of the moving object in a current image frame;
and obtaining the state estimation data of the moving object in the next frame according to the Kalman filtering state estimation data of the previous frame and the state observation data in the current image frame.
5. The method of claim 1, wherein: after acquiring a video stream of the surrounding environment of the autonomous vehicle, acquiring a plurality of image frames from the video stream, performing S12 to S15 with each image frame as a current image frame, respectively, and acquiring a plurality of pieces of trajectory prediction data of the same moving object; and obtaining a piece of track prediction data according to the plurality of pieces of track prediction data.
6. The method according to any one of claims 1 to 5, wherein detecting the moving object through the preset target detector specifically comprises:
and detecting the moving object through a YOLOv5 detection network.
7. The method according to any one of claims 1 to 5, wherein:
the state estimation data comprises at least part of direction data, velocity data, and position data of the moving object; and/or
the road-related feature data comprises at least part of road route data, marking data, and intersection data around the position where the moving object is located.
8. An apparatus for predicting a trajectory of a moving object in automatic driving, comprising:
a first acquisition module for acquiring a video stream of an environment surrounding an autonomous vehicle;
the target detection module is used for acquiring detection data of a moving object for a current image frame in the video stream through a preset target detector;
the state estimation module is used for acquiring state estimation data of the moving object in the next frame according to the detection data of the moving object and a preset Kalman filter;
the second acquisition module is used for acquiring road related characteristic data corresponding to the current image frame;
and the track prediction module is used for obtaining the moving track prediction data of the moving object in a preset time length after the current image frame according to the state estimation data, the road related characteristic data and a prestored track prediction model.
9. The trajectory prediction device according to claim 8, wherein the state estimation module obtains state estimation data of the moving object in a next frame according to the moving object detection data and a preset kalman filter, and specifically includes:
acquiring Kalman filtering state estimation data of the moving object in a previous frame;
obtaining state observation data of the moving object in the current image frame;
and obtaining the state estimation data of the moving object in the next frame according to the Kalman filtering state estimation data of the previous frame and the state observation data in the current image frame.
10. A computing device, comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method of any one of claims 1-7.
CN202110429261.5A 2021-04-21 2021-04-21 Track prediction method and device for moving object in automatic driving and computing equipment Active CN113112524B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110429261.5A CN113112524B (en) 2021-04-21 2021-04-21 Track prediction method and device for moving object in automatic driving and computing equipment

Publications (2)

Publication Number Publication Date
CN113112524A 2021-07-13
CN113112524B CN113112524B (en) 2024-02-20

Family

ID=76719026

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110429261.5A Active CN113112524B (en) 2021-04-21 2021-04-21 Track prediction method and device for moving object in automatic driving and computing equipment

Country Status (1)

Country Link
CN (1) CN113112524B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106023244A (en) * 2016-04-13 2016-10-12 南京邮电大学 Pedestrian tracking method based on least square locus prediction and intelligent obstacle avoidance model
US9552648B1 (en) * 2012-01-23 2017-01-24 Hrl Laboratories, Llc Object tracking with integrated motion-based object detection (MogS) and enhanced kalman-type filtering
US20190049970A1 (en) * 2017-08-08 2019-02-14 Uber Technologies, Inc. Object Motion Prediction and Autonomous Vehicle Control
CN111091591A (en) * 2019-12-23 2020-05-01 百度国际科技(深圳)有限公司 Collision detection method and device, electronic equipment and storage medium
CN111292352A (en) * 2020-01-20 2020-06-16 杭州电子科技大学 Multi-target tracking method, device, equipment and storage medium
CN111340855A (en) * 2020-03-06 2020-06-26 电子科技大学 Road moving target detection method based on track prediction
US20200218913A1 (en) * 2019-01-04 2020-07-09 Qualcomm Incorporated Determining a motion state of a target object
CN111402293A (en) * 2020-03-10 2020-07-10 北京邮电大学 Vehicle tracking method and device for intelligent traffic
CN111476817A (en) * 2020-02-27 2020-07-31 浙江工业大学 Multi-target pedestrian detection tracking method based on yolov3
WO2020164089A1 (en) * 2019-02-15 2020-08-20 Bayerische Motoren Werke Aktiengesellschaft Trajectory prediction using deep learning multiple predictor fusion and bayesian optimization
CN111666891A (en) * 2020-06-08 2020-09-15 北京百度网讯科技有限公司 Method and apparatus for estimating obstacle motion state
CN111693972A (en) * 2020-05-29 2020-09-22 东南大学 Vehicle position and speed estimation method based on binocular sequence images
CN112118537A (en) * 2020-11-19 2020-12-22 蘑菇车联信息科技有限公司 Method and related device for estimating movement track by using picture

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113570595A (en) * 2021-08-12 2021-10-29 上汽大众汽车有限公司 Vehicle track prediction method and optimization method of vehicle track prediction model
CN114245177A (en) * 2021-12-17 2022-03-25 智道网联科技(北京)有限公司 Smooth display method and device of high-precision map, electronic equipment and storage medium
CN114245177B (en) * 2021-12-17 2024-01-23 智道网联科技(北京)有限公司 Smooth display method and device of high-precision map, electronic equipment and storage medium
CN116883915A (en) * 2023-09-06 2023-10-13 常州星宇车灯股份有限公司 Target detection method and system based on front and rear frame image association
CN116883915B (en) * 2023-09-06 2023-11-21 常州星宇车灯股份有限公司 Target detection method and system based on front and rear frame image association

Also Published As

Publication number Publication date
CN113112524B (en) 2024-02-20

Similar Documents

Publication Publication Date Title
Datondji et al. A survey of vision-based traffic monitoring of road intersections
CN113112524B (en) Track prediction method and device for moving object in automatic driving and computing equipment
JP7040374B2 (en) Object detection device, vehicle control system, object detection method and computer program for object detection
US7970529B2 (en) Vehicle and lane recognizing device
CN111311902B (en) Data processing method, device, equipment and machine readable medium
TW202020811A (en) Systems and methods for correcting a high -definition map based on detection of obstructing objects
JP2022535839A (en) Broken lane detection method, device and electronic device
US11436815B2 (en) Method for limiting object detection area in a mobile system equipped with a rotation sensor or a position sensor with an image sensor, and apparatus for performing the same
JP2018081545A (en) Image data extraction device and image data extraction method
CN110967018B (en) Parking lot positioning method and device, electronic equipment and computer readable medium
CN111091037A (en) Method and device for determining driving information
KR20180067199A (en) Apparatus and method for recognizing object
CN112598743B (en) Pose estimation method and related device for monocular vision image
CN111539305B (en) Map construction method and system, vehicle and storage medium
CN113465615A (en) Lane line generation method and related device
JP2019203750A (en) Vehicle position correcting device, navigation system, and vehicle position correcting program
Zhao et al. An ISVD and SFFSD-based vehicle ego-positioning method and its application on indoor parking guidance
CN114264310A (en) Positioning and navigation method, device, electronic equipment and computer storage medium
JP2023116424A (en) Method and device for determining position of pedestrian
CN115355919A (en) Precision detection method and device of vehicle positioning algorithm, computing equipment and medium
CN113724390A (en) Ramp generation method and device
JP2020067818A (en) Image selection device and image selection method
US11645838B2 (en) Object detection system, object detection method, and program
US20230024799A1 (en) Method, system and computer program product for the automated locating of a vehicle
CN116433771A (en) Visual SLAM positioning method and device, electronic equipment and storable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant