CN114563007A - Obstacle motion state prediction method, obstacle motion state prediction device, electronic device, and storage medium - Google Patents


Info

Publication number
CN114563007A
CN114563007A
Authority
CN
China
Prior art keywords
data
result
frame
motion state
target obstacle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210455799.8A
Other languages
Chinese (zh)
Other versions
CN114563007B (en)
Inventor
张馨元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neolithic Unmanned Vehicle Songyang Co ltd
Original Assignee
Neolix Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neolix Technologies Co Ltd filed Critical Neolix Technologies Co Ltd
Priority to CN202210455799.8A priority Critical patent/CN114563007B/en
Publication of CN114563007A publication Critical patent/CN114563007A/en
Application granted granted Critical
Publication of CN114563007B publication Critical patent/CN114563007B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G01S13/93 Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931 Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931 Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The disclosure relates to the technical field of automatic driving and provides a method and an apparatus for predicting the motion state of an obstacle, an electronic device, and a storage medium. The method comprises the following steps: acquiring perception time-series data of a target obstacle, the perception time-series data comprising multiple frames of perception data with consecutive output times; performing data analysis on each frame of perception data to obtain a corresponding preliminary motion state prediction result; if a dynamic jump result appears among the preliminary motion state prediction results, extracting from the multiple frames of perception data multiple frames of data to be verified that are consecutive in output time and whose output times precede the output time of the dynamic jump result; and verifying the multiple frames of data to be verified, and determining a final motion state prediction result of the target obstacle according to the verification result. The method and apparatus can effectively reduce cases in which an autonomous vehicle mistakenly predicts a static target obstacle as a dynamic one while driving and brakes in error, improving the safety and intelligence of subsequent planning decisions and motion control.

Description

Obstacle motion state prediction method, obstacle motion state prediction device, electronic device, and storage medium
Technical Field
The present disclosure relates to the field of automatic driving technologies, and in particular, to a method and an apparatus for predicting a motion state of an obstacle, an electronic device, and a storage medium.
Background
Existing automatic driving systems generally use a deep learning network model to predict, for an obstacle (such as another traffic participant), its relative speed, the positions it may reach, its relative distance, and the like at future times, together with the probability of each predicted item, and then decide the next driving action (e.g., whether braking is required) according to these predicted values.
However, because mainstream prediction models suffer from output errors of the deep learning network, caused by the model's own recall and precision limitations or by speed-measurement errors that stem from ranging errors, current prediction methods still tend to mistakenly predict static obstacles as dynamic ones. This easily leads to wrong decisions and erroneous braking when the autonomous vehicle drives near a static obstacle.
Therefore, existing obstacle prediction methods still cannot accurately predict the motion state of an obstacle at future times, which leaves the subsequent planning decisions and motion control of the autonomous vehicle insufficiently safe and intelligent.
Disclosure of Invention
In view of this, embodiments of the present disclosure provide a method and an apparatus for predicting the motion state of an obstacle, an electronic device, and a storage medium, so as to solve the problem that existing obstacle prediction methods cannot accurately predict the motion state of an obstacle at future times, leaving the subsequent planning decisions and motion control of the autonomous vehicle insufficiently safe and intelligent.
In a first aspect of the embodiments of the present disclosure, a method for predicting a motion state of an obstacle is provided, including:
acquiring perception time-series data of a target obstacle, the perception time-series data comprising multiple frames of perception data with consecutive output times;
performing data analysis on each frame of perception data to obtain a corresponding preliminary motion state prediction result, the preliminary motion state prediction result comprising at least the size and type of the target obstacle and the predicted speed and predicted motion trajectory within a preset duration;
if a dynamic jump result appears among the preliminary motion state prediction results, extracting, from the multiple frames of perception data according to the dynamic jump result, multiple frames of data to be verified that are consecutive in output time and whose output times precede the output time of the dynamic jump result;
verifying the preliminary motion state prediction results of the multiple frames of data to be verified to obtain a verification result;
and determining a final motion state prediction result of the target obstacle according to the verification result.
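As a non-authoritative illustration, the five steps of the first aspect can be sketched in Python. Here `analyze_frame` and `verify` are hypothetical stand-ins for the per-frame analysis and the verification described in the claims, and the five-frame look-back window is an assumption:

```python
def predict_motion_state(frames, analyze_frame, verify, n_verify=5):
    """Sketch of the claimed method: per-frame analysis, dynamic-jump
    detection, extraction of earlier frames, verification, final result."""
    # Steps 1-2: preliminary per-frame prediction ("static" / "dynamic")
    prelim = [analyze_frame(f) for f in frames]
    # Step 3: a "dynamic jump" is a dynamic result right after static ones
    for i, state in enumerate(prelim):
        if state == "dynamic" and i > 0 and prelim[i - 1] == "static":
            # extract up to n_verify consecutive frames output before the jump
            to_verify = frames[max(0, i - n_verify):i]
            # Steps 4-5: verify; a failed check means the jump was noise
            return "dynamic" if verify(to_verify) else "static"
    return prelim[-1] if prelim else None
```

With a `verify` that rejects the jump, a lone dynamic frame in an otherwise static sequence is suppressed and the final result stays static.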
In a second aspect of the embodiments of the present disclosure, there is provided an obstacle motion state prediction apparatus, including:
an acquisition module configured to acquire perception time-series data of a target obstacle, the perception time-series data comprising multiple frames of perception data with consecutive output times;
an analysis module configured to perform data analysis on each frame of perception data to obtain a corresponding preliminary motion state prediction result, the preliminary motion state prediction result comprising at least the size and type of the target obstacle and the predicted speed and predicted motion trajectory within a preset duration;
an extraction module configured to, if a dynamic jump result appears among the preliminary motion state prediction results, extract, from the multiple frames of perception data according to the dynamic jump result, multiple frames of data to be verified that are consecutive in output time and whose output times precede the output time of the dynamic jump result;
a verification module configured to verify the preliminary motion state prediction results of the multiple frames of data to be verified to obtain a verification result;
and a determination module configured to determine a final motion state prediction result of the target obstacle according to the verification result.
In a third aspect of the embodiments of the present disclosure, an electronic device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the above method when executing the computer program.
In a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, which stores a computer program, which when executed by a processor, implements the steps of the above-mentioned method.
Compared with the prior art, the beneficial effects of the embodiments of the present disclosure at least include the following. Perception time-series data of a target obstacle are collected, the perception time-series data comprising multiple frames of perception data with consecutive output times; data analysis is performed on each frame of perception data to obtain a corresponding preliminary motion state prediction result, which comprises at least the size and type of the target obstacle and the predicted speed and predicted motion trajectory within a preset duration; if a dynamic jump result appears among the preliminary motion state prediction results, multiple frames of data to be verified, consecutive in output time and output before the output time of the dynamic jump result, are extracted from the multiple frames of perception data according to the dynamic jump result; the preliminary motion state prediction results of these frames are verified to obtain a verification result; and a final motion state prediction result of the target obstacle is determined according to the verification result. In this way, when the prediction for a target obstacle that has consistently been judged static suddenly jumps to dynamic at some moment, the earlier frames are re-examined before the jump is accepted. This effectively prevents a static target obstacle from being predicted as a dynamic one, avoids erroneous braking when the autonomous vehicle subsequently drives near the static target obstacle, and improves the safety and intelligence of the vehicle's subsequent planning decisions and motion control.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present disclosure, and those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an application scenario of an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of a method for predicting the motion state of an obstacle according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of perception time-series data in obstacle motion state prediction according to an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of an apparatus for predicting the motion state of an obstacle according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the disclosed embodiments. However, it will be apparent to one skilled in the art that the present disclosure may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present disclosure with unnecessary detail.
A method and an apparatus for predicting the motion state of an obstacle according to embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
FIG. 1 is a schematic diagram of an application scenario according to an embodiment of the present disclosure. The application scenario may include an autonomous vehicle 101 and a target obstacle 102.
In general, the core systems of autonomous vehicle 101 include a perception system, a prediction system, a decision system, and a motion control system. The sensing system may include various sensors (such as laser radar, camera, millimeter wave radar, etc.), a vehicle positioning system (such as an integrated navigation system, global positioning system, etc.), and an Inertial Measurement Unit (IMU). The prediction system may include various algorithmic models for predicting the motion trajectory, speed, etc. of the vehicle. The autonomous vehicle 101 may be a car, truck, bus, logistics car, low speed (15 km/h) unmanned vehicle for end delivery scenarios, etc. that integrates the above described core system and algorithm model, etc.
The target obstacle 102 generally refers to an object that is not present in a high-precision map of the travel area of the autonomous vehicle 101, for example, a vehicle, a pedestrian, or another target obstacle (such as a logistics box) that stops at the road side (or on a non-motor lane) of a travel road in the travel area.
In some embodiments, when the autonomous vehicle 101 travels on a road in the driving area, perception time-series data of a target obstacle (e.g., a vehicle stopped at the roadside or a pedestrian walking on the road) may be collected by the various sensors of the on-board perception system, the perception time-series data comprising multiple frames of perception data with consecutive output times. The collected frames are input into the prediction system, which performs data analysis on each frame to obtain a corresponding preliminary motion state prediction result comprising at least the size and type of the target obstacle and the predicted speed and predicted motion trajectory within a preset duration. If a dynamic jump result appears among the preliminary motion state prediction results, multiple frames of data to be verified, consecutive in output time and output before the output time of the dynamic jump result, are extracted from the multiple frames of perception data according to the dynamic jump result. The preliminary motion state prediction results of these frames are verified to obtain a verification result, and the final motion state prediction result of the target obstacle is determined accordingly. The final result is sent to the decision system, which makes a corresponding driving decision (such as braking or continuing to drive); the decision result is then sent to the motion control system, which performs the corresponding motion control.
In this way, for a target obstacle whose predicted motion state has consistently been static, when the prediction suddenly jumps to dynamic at some moment, frames to be verified that are consecutive in output time and output before the jump are extracted from the multi-frame perception data, their preliminary motion state prediction results are verified, and the final motion state prediction result is determined from the verification result. This effectively prevents a static target obstacle from being predicted as dynamic, avoids erroneous braking when the autonomous vehicle later drives near the static target obstacle, and improves the safety and intelligence of the vehicle's subsequent planning decisions and motion control.
It should be noted that the specific type, number and combination of the autonomous vehicles 101 and the target obstacles 102 may be adjusted according to the actual requirements of the application scenario, and the embodiment of the present disclosure does not limit this. For example, the autonomous vehicle may transmit the acquired sensing time series data of the target obstacle to the remote server, so as to perform analysis and subsequent verification steps on the sensing time series data through the remote server, obtain a final motion state prediction result of the target obstacle, and then feed the final motion state prediction result back to the autonomous vehicle, so that the autonomous vehicle performs subsequent planning decision and motion control according to the final motion state prediction result.
FIG. 2 is a schematic flowchart of a method for predicting the motion state of an obstacle according to an embodiment of the present disclosure. The method of FIG. 2 may be performed by the autonomous vehicle 101 of FIG. 1. As shown in FIG. 2, the method includes:
Step S201, collecting perception time-series data of the target obstacle, the perception time-series data comprising multiple frames of perception data with consecutive output times.
Perception time-series data refer to a sequence of perception data collected by the autonomous vehicle and recorded under the same unified index in chronological order. The sequence comprises multiple frames of perception data with consecutive output times. Perception data may be data such as images and positions of the target obstacle acquired by the autonomous vehicle in the driving area.
For example, assume that within 1 second the autonomous vehicle can acquire 10 frames of perception data of the target obstacle; the sequence formed by these 10 frames is the perception time-series data of the target obstacle. With reference to FIG. 3, the autonomous vehicle may acquire and record perception data A1 at 0.1 s, A2 at 0.2 s, A3 at 0.3 s, and so on, up to A10 at 1 s, thereby obtaining perception time-series data of 10 frames with consecutive output times.
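The example above (10 frames collected within 1 second) can be modeled as a fixed-length buffer of timestamped frames. The `Frame` record, the 0.1 s output period, and the buffer size are illustrative choices, not taken from the patent:

```python
from collections import deque
from typing import NamedTuple


class Frame(NamedTuple):      # illustrative frame record
    t: float                  # output time in seconds
    data: object              # raw sensor payload for the target obstacle


def collect_sequence(payloads, period=0.1, maxlen=10):
    """Build perception time-series data: consecutive frames with
    continuous output times; only the newest `maxlen` frames are kept."""
    buf = deque(maxlen=maxlen)
    for i, p in enumerate(payloads, start=1):
        buf.append(Frame(t=round(i * period, 3), data=p))
    return list(buf)


# 10 frames A1..A10, output at 0.1 s intervals as in the example
seq = collect_sequence([f"A{i}" for i in range(1, 11)])
```

Using `deque(maxlen=...)` keeps the buffer bounded, so the vehicle always holds the most recent window of consecutive frames.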
Step S202, performing data analysis on each frame of perception data to obtain a corresponding preliminary motion state prediction result, the preliminary motion state prediction result comprising at least the size and type of the target obstacle and the predicted speed and predicted motion trajectory within a preset duration.
Continuing the above example, assuming that 10 frames of perception data A1 to A10 are collected, they can be input into a preset deep learning neural network prediction model in the prediction system of the autonomous vehicle for data analysis, so as to obtain a preliminary motion state prediction result corresponding to each frame. The deep learning prediction model may be obtained by training in advance on perception data sets of various target obstacles in the driving area acquired by the autonomous vehicle. The prediction model may be a YOLO-series model (e.g., YOLOv3, YOLOv4, YOLOv5, or YOLOX) or another prediction algorithm model.
The size of the target obstacle can be characterized by the length, width and height of the target obstacle.
The type of the target obstacle indicates whether it is a vehicle, a pedestrian, or another kind of obstacle (such as an animal).
The preset duration generally refers to the time span over which the motion state of the target obstacle is to be predicted into the future. Its value may be set according to actual conditions, for example, 8 seconds or 10 seconds.
The predicted speed generally refers to the possible movement speed of the target obstacle at each moment within the preset duration, together with its probability value, as inferred by the autonomous vehicle according to the preset model.
The predicted motion trajectory generally refers to the route along which the target obstacle may move from its position at one moment to its position at the next, at each moment within the preset duration, as estimated by the autonomous vehicle according to the preset model.
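The fields enumerated in step S202 can be gathered into a small record. All field names, the 8-second default horizon, and the 0.1 m/s static/dynamic threshold below are assumptions for illustration; the patent does not specify them:

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class PreliminaryPrediction:
    """One per-frame preliminary motion state prediction result."""
    length_m: float                 # size of the target obstacle
    width_m: float
    height_m: float
    obstacle_type: str              # "vehicle", "pedestrian", ...
    horizon_s: float = 8.0          # preset duration (e.g., 8 s)
    # (speed in m/s, probability) per predicted time step within the horizon
    speeds: List[Tuple[float, float]] = field(default_factory=list)
    # predicted trajectory: (x, y) waypoints within the horizon
    trajectory: List[Tuple[float, float]] = field(default_factory=list)

    @property
    def motion_state(self) -> str:
        """Derive static/dynamic from the most probable speed (assumed rule)."""
        best = max(self.speeds, key=lambda sp: sp[1], default=(0.0, 1.0))
        return "dynamic" if best[0] > 0.1 else "static"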
Step S203, if a dynamic jump result appears among the preliminary motion state prediction results, extracting from the multiple frames of perception data, according to the dynamic jump result, multiple frames of data to be verified that are consecutive in output time and whose output times precede the output time of the dynamic jump result.
As an example, assume the target obstacle is a vehicle P parked on the right side of a road (such as a non-motorized lane) in the driving area. When the autonomous vehicle drives on that road, it collects 10 frames of perception data of vehicle P, A1 to A10, where the prediction results of A1 to A6 all indicate that the motion state of vehicle P is static, while the prediction result of A7 indicates that it is dynamic. At this point, the autonomous vehicle may consider that a dynamic jump result occurred at A7.
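The A1–A10 example can be mechanized as follows; the function names and the five-frame extraction window (which yields A2–A6, matching the example used in the next step) are hypothetical:

```python
def find_dynamic_jump(states):
    """Index of the first 'dynamic' result that follows a 'static' run."""
    for i in range(1, len(states)):
        if states[i] == "dynamic" and states[i - 1] == "static":
            return i
    return None


def frames_to_verify(frames, jump_idx, n=5):
    """Up to n consecutive frames whose output time precedes the jump."""
    return frames[max(0, jump_idx - n):jump_idx]


# Per-frame preliminary results for A1..A10: A7 (index 6) jumps to dynamic
states = ["static"] * 6 + ["dynamic"] + ["static"] * 3
j = find_dynamic_jump(states)
names = [f"A{k + 1}" for k in range(10)]
picked = frames_to_verify(names, j)   # the five frames before A7
```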
Step S204, verifying the preliminary motion state prediction results of the multiple frames of data to be verified to obtain a verification result.
Continuing the above example, assuming that the dynamic jump result occurs at A7 and the extracted data to be verified are A2 to A6, the preliminary motion state prediction results of A2 to A6 can be verified to obtain a verification result.
Step S205, determining a final motion state prediction result of the target obstacle according to the verification result.
Whether the preliminary motion state prediction result at the output time of A7 is erroneous is then determined according to the verification result obtained in step S204, so as to determine the final motion state prediction result of the target obstacle. If the verification result shows that the preliminary prediction at the output time of A7 is erroneous, the output final motion state prediction result still indicates that the motion state of the target obstacle is static; otherwise, it indicates that the motion state is dynamic.
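The patent does not spell out the verification criterion at this point, so the sketch below substitutes a plausible displacement-threshold check: the dynamic jump is accepted only if the obstacle's position actually drifted across the frames to be verified. The 0.3 m noise threshold and the (x, y) position format are assumptions:

```python
import math


def verify_jump(positions, noise_threshold_m=0.3):
    """Assumed check: accept the dynamic jump only if the obstacle's
    position across the frames to verify drifted beyond sensor noise."""
    if len(positions) < 2:
        return False
    x0, y0 = positions[0]
    x1, y1 = positions[-1]
    return math.hypot(x1 - x0, y1 - y0) > noise_threshold_m


def final_motion_state(jump_verified):
    # A failed check means the A7-style result was an error: stay static.
    return "dynamic" if jump_verified else "static"


# Positions of the parked vehicle P in frames A2..A6: essentially unchanged
stationary = [(10.0, 2.0), (10.02, 2.01), (9.99, 2.0), (10.01, 1.99), (10.0, 2.0)]
state = final_motion_state(verify_jump(stationary))
```

For the parked vehicle the displacement stays within noise, so the jump at A7 is rejected and the final result remains static.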
According to the technical solution provided by the embodiments of the present disclosure, perception time-series data of a target obstacle are collected, comprising multiple frames of perception data with consecutive output times; data analysis is performed on each frame to obtain a corresponding preliminary motion state prediction result comprising at least the size and type of the target obstacle and the predicted speed and predicted motion trajectory within a preset duration; if a dynamic jump result appears among the preliminary results, multiple frames of data to be verified, consecutive in output time and output before the output time of the dynamic jump result, are extracted from the perception data according to the dynamic jump result; their preliminary motion state prediction results are verified; and the final motion state prediction result is determined from the verification result. Thus, for a target obstacle whose predicted motion state has consistently been static, a sudden jump to dynamic at some moment triggers re-examination of the preceding frames before the jump is accepted. This effectively prevents a static target obstacle from being predicted as dynamic, avoids erroneous braking when the autonomous vehicle later drives near the static target obstacle, and improves the safety and intelligence of the vehicle's subsequent planning decisions and motion control.
In some embodiments, step S201 includes:
collecting surrounding environment data of the driving area during automatic driving;
determining the target obstacle according to the surrounding environment data and a preset driving area map;
and screening out perception time-series data containing the target obstacle from the surrounding environment data.
As an example, during autonomous driving over the driving area, surrounding environment data of the area may be collected by the various sensors mounted on the vehicle (e.g., laser radar, camera, inertial navigation, infrared sensor). The surrounding environment data include image data, position data, and the like of target obstacles (e.g., a vehicle parked on the side of the non-motorized lane, or moving obstacles such as pedestrians and animals on the non-motorized lane) and of non-target obstacles (e.g., goods temporarily stacked on the non-motorized lane).
The autonomous vehicle can obtain the corresponding driving area map by calling a driving area map (a high-precision map) stored in its database or by sending a request to a remote server. The driving area map contains detailed information such as traffic lights, lane markings (e.g., white lines, yellow lines, double or single lanes, solid and dashed lines), curbs, obstacles, telegraph poles, overpasses, and underground passages, each with a corresponding geocode. An object not present on the driving area map may be regarded as an obstacle. A target obstacle is an obstacle detected and tracked by the autonomous vehicle during driving, for example through a computer vision system, which may require subsequent motion control such as avoidance.
After the target obstacle is determined, perceptual time series data including the target obstacle may be filtered out from the collected ambient data.
The perception time series data comprises target obstacle image data and three-dimensional space perception data containing target obstacles.
The target obstacle image data may be images containing the target obstacle captured by a camera on the autonomous vehicle. The three-dimensional spatial perception data containing the target obstacle may include the lidar's three-dimensional perception of the surroundings (spatial perception information such as the three-dimensional size and distance of the target obstacle), target distance information of the target obstacle acquired by millimeter-wave radar, and various information about the target obstacle acquired by other photoelectric sensors such as ultrasonic sensors and infrared sensors.
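To make the data layout above concrete, the following is a minimal sketch of one frame of perception data as a container. The field names (`frame_id`, `output_time`, `image`, `points`, `pred_speed`) are illustrative assumptions, not terms from the patent:

```python
from dataclasses import dataclass, field

# Hypothetical container for one frame of perception data; field names are
# illustrative only and not taken from the patent.
@dataclass
class PerceptionFrame:
    frame_id: str                               # e.g. "01" ... "10"
    output_time: float                          # output time in seconds, e.g. 0.1
    image: list = field(default_factory=list)   # target-obstacle image data
    points: list = field(default_factory=list)  # 3-D spatial perception data (e.g. lidar points)
    pred_speed: float = 0.0                     # predicted speed from the preliminary result

frame = PerceptionFrame(frame_id="07", output_time=0.7, pred_speed=1.2)
```

Each frame then carries both the raw sensor inputs and the preliminary prediction output used in the later verification steps.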
In some embodiments, performing data analysis on each frame of perceptual time series data to obtain a corresponding preliminary motion state prediction result includes:
collecting position data and posture data of a vehicle;
and carrying out feature extraction and feature fusion on the image data of the target obstacle, the three-dimensional space perception data, the self-vehicle position data and the self-posture data to obtain a preliminary motion state prediction result corresponding to each frame of perception data.
The autonomous vehicle can acquire its own position data through an on-board positioning device (such as integrated navigation), and its own attitude data through an inertial measurement unit or the like. The self-attitude data mainly includes the three-axis attitude angles (or angular velocities) and the acceleration of the ego vehicle.
In an embodiment, the image data of the target obstacle, the three-dimensional spatial perception data, the self-vehicle position data and the self-posture data can be input into a preset deep learning neural network model for feature extraction and feature fusion, so as to obtain a preliminary motion state prediction result corresponding to each frame of perception data.
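The data flow of the feature extraction and fusion step can be sketched as follows. This is an illustrative placeholder only: the patent uses a preset deep-learning neural network, whereas here each "extractor" is a trivial mean and fusion is plain concatenation, and all function names are assumptions:

```python
# Illustrative sketch of the extract-then-fuse data flow; NOT the patent's
# actual deep-learning model. Each extractor is a trivial placeholder.

def extract_features(values):
    """Placeholder feature extractor: reduce a raw input to a tiny vector."""
    if not values:
        return [0.0]
    return [sum(values) / len(values)]

def fuse_and_predict(image_data, spatial_data, ego_position, ego_attitude):
    """Concatenate per-source features and map them to a speed estimate."""
    fused = (extract_features(image_data) + extract_features(spatial_data)
             + extract_features(ego_position) + extract_features(ego_attitude))
    # Stand-in for the network head: average of the fused features.
    return sum(fused) / len(fused)

speed = fuse_and_predict([0.2, 0.4], [1.0, 3.0], [10.0, 20.0], [0.1, 0.1])
```

In a real system the two placeholder functions would be replaced by the trained network's feature-extraction and fusion layers; the sketch only shows that all four inputs feed one prediction per frame.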
In some embodiments, extracting data to be verified from multi-frame sensing data according to a dynamic jump result, where the multi-frame output time is continuous and the output time is before the output time of the dynamic jump result, includes:
finding out a result hopping frame corresponding to the dynamic hopping result, and determining the output time of the result hopping frame;
and extracting the data to be checked continuously at the multi-frame output time before the output time of the result jump frame from the multi-frame sensing data.
As an example, continuing the example of step S203 above, when the autonomous vehicle determines that a dynamic jump result occurs at A7, the data ID of the result jump frame corresponding to A7 and its output time may be looked up first. In this example, the perception data A1-A10 may each be assigned a data ID in advance, e.g., 01-10: the data ID of A1 is 01, that of A2 is 02, and so on up to A10, whose data ID is 10. The output times of the frames are 0.1 s, 0.2 s, ..., 1.0 s, i.e., A1 is output at 0.1 s, A2 at 0.2 s, and A10 at 1.0 s. It follows that the data ID corresponding to A7 is 07 and its output time is 0.7 s.
Next, data to be verified with consecutive output times may be extracted from the 10 frames of perception data according to the perception data frame corresponding to the dynamic jump result (i.e., A7). For example, A2-A6, A3-A6, or A4-A6 before A7 may be extracted as the data to be verified. Preferably, 2-5 frames of data to be verified are extracted. More frames may certainly be extracted; the number should be increased or decreased according to how many frames of perception data precede the output time of the frame with the dynamic jump result, but must not be fewer than 2. The closer the extracted perception data is to the output time of the dynamic jump result, the more accurate the subsequent verification, which helps eliminate "false dynamic" jump results and improves the accuracy of the autonomous vehicle's prediction results.
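Under the A1-A10 example, the extraction step can be sketched as taking the k frames whose output times immediately precede the result jump frame. The function and variable names are illustrative assumptions:

```python
# Sketch of the extraction step: given the index of the result jump frame,
# take the k consecutive frames that immediately precede it.

def extract_frames_to_verify(frames, jump_index, k):
    """Return k consecutive frames ending just before the jump frame (k >= 2)."""
    if k < 2:
        raise ValueError("at least 2 frames are required for verification")
    start = max(0, jump_index - k)
    return frames[start:jump_index]

frames = [f"A{i}" for i in range(1, 11)]   # A1 ... A10
jump_index = 6                             # A7, where the dynamic jump occurred
to_verify = extract_frames_to_verify(frames, jump_index, k=5)   # A2 ... A6
```

With k=5 and the jump at A7, the extracted frames are A2-A6, matching the example above.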
In some embodiments, the step S204 includes:
determining a frame with the earliest output moment in the multi-frame data to be verified as prior data, and determining other data to be verified except the prior data as comparison data;
determining the corresponding prior prediction speed of the target barrier at the output time of the prior data and the corresponding comparison prediction speed at the output time of each frame of comparison data;
and verifying the compared predicted speed according to the prior predicted speed to obtain a verification result.
As an example, assume that the autonomous vehicle determines that a dynamic jump result occurs at A7, and extracts the perception data A2-A6 before the output time of A7 from the 10 frames of perception data A1-A10 as the data to be verified. The output times of A2-A6 are 0.2 s, 0.3 s, 0.4 s, 0.5 s and 0.6 s respectively, so the frame with the earliest output time is A2. A2 is therefore determined as the prior data, and A3-A6 as the comparison data.
Then, the prior predicted speed (i.e., the predicted speed in the prediction result) can be determined from the preliminary motion state prediction result corresponding to A2. Similarly, the comparison predicted speeds are determined from the preliminary motion state prediction results corresponding to A3-A6. The comparison predicted speeds of A3-A6 are then checked one by one against the prior predicted speed of A2 to obtain the verification result.
In some embodiments, the predicted speed variation value between the comparison predicted speed and the prior predicted speed corresponding to each frame of comparison data can be calculated respectively; and determining a final motion state prediction result of the target obstacle according to the predicted speed change value.
Specifically, predicted speed change value 01 is calculated as the speed difference between the comparison predicted speed of A3 and the prior predicted speed of A2; predicted speed change value 02 between A4 and A2; predicted speed change value 03 between A5 and A2; and predicted speed change value 04 between A6 and A2. The final motion state prediction result of the target obstacle is then determined according to the predicted speed change values 01, 02, 03 and 04.
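The prior/comparison split and the change-value computation above can be sketched as follows. This is a minimal illustration; the tuple representation and function names are assumptions, not from the patent:

```python
# Sketch of the verification step: the frame with the earliest output time is
# the prior data, the rest are comparison data, and each comparison predicted
# speed is compared against the prior predicted speed.
# Each frame is represented as a (output_time, predicted_speed) tuple.

def split_prior_and_comparison(frames):
    """frames: list of (output_time, pred_speed); returns (prior, comparisons)."""
    ordered = sorted(frames, key=lambda f: f[0])
    return ordered[0], ordered[1:]

def speed_change_values(prior, comparisons):
    """Predicted speed change value of each comparison frame vs. the prior."""
    _, prior_speed = prior
    return [abs(speed - prior_speed) for _, speed in comparisons]

frames = [(0.2, 0.0), (0.3, 0.4), (0.4, 0.5), (0.5, 0.6), (0.6, 0.7)]  # A2-A6
prior, comparisons = split_prior_and_comparison(frames)
changes = speed_change_values(prior, comparisons)
```

Here A2 (output time 0.2 s) becomes the prior data and the four change values correspond to the values 01-04 in the example.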
In some embodiments, determining a final motion state prediction result of the target obstacle according to the predicted speed change value specifically includes:
if the predicted speed change value corresponding to each frame of comparison data is larger than the allowable change threshold, determining that the motion state of the target obstacle within the preset time length is dynamic, and outputting a dynamic prediction result;
and if the predicted speed change value corresponding to at least one frame of comparison data is less than or equal to the allowable change threshold, determining that the motion state of the target obstacle within the preset time length is static, and outputting a static prediction result.
The allowable variation threshold may be set according to actual conditions, and may be set to 0 in general.
With reference to the above example, if the predicted speed change values 01, 02, 03 and 04 are all greater than the allowable change threshold 0, i.e., the predicted speed of the target obstacle changed at every output time from 0.2 s to 0.6 s (the autonomous vehicle perceived a lateral/longitudinal speed of the target obstacle in every output frame), the motion state of the target obstacle within the preset time length can be determined to be dynamic, and a dynamic prediction result is output. The dynamic prediction result includes the size and type of the target obstacle, and the predicted speed and predicted motion trajectory within the preset time length. If any one or more of the predicted speed change values 01, 02, 03 and 04 is less than or equal to the allowable change threshold 0, i.e., the predicted speed of the target obstacle did not change at some output times between 0.2 s and 0.6 s (the autonomous vehicle perceived a lateral/longitudinal speed of the target obstacle in only some of the output frames), the motion state of the target obstacle within the preset time length can be determined to still be static, and a static prediction result is output. The static prediction result likewise includes the size and type of the target obstacle, and the predicted speed and predicted motion trajectory within the preset time length.
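The decision rule just described can be sketched directly. Only the change values and the threshold are needed; the function name is an illustrative assumption:

```python
# Sketch of the final decision rule: the state is judged dynamic only if
# every predicted speed change value exceeds the allowable change threshold;
# if any change value is less than or equal to the threshold, the target
# obstacle is still treated as static.

def final_motion_state(change_values, threshold=0.0):
    """Return 'dynamic' or 'static' from the predicted speed change values."""
    if change_values and all(c > threshold for c in change_values):
        return "dynamic"
    return "static"

state = final_motion_state([0.4, 0.5, 0.6, 0.7])   # all values exceed 0
```

This conservative all-frames rule is what filters out "false dynamic" jumps: a single unchanged frame is enough to keep the static classification.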
According to the technical solution provided by this embodiment of the disclosure, when the preliminary motion state prediction result of a target obstacle has always been a static prediction result and suddenly jumps to a dynamic prediction result at some moment, multiple frames of data to be verified with consecutive output times before the output time of the dynamic jump result can be extracted, and misdetected dynamic jump results are eliminated by checking whether the predicted speed of the data to be verified changes. This effectively prevents a static target obstacle from being predicted as a dynamic one, which would otherwise cause the autonomous vehicle to brake erroneously when driving near the static target obstacle, and thus improves the safety and intelligence of the vehicle's subsequent planning decisions and motion control.
All the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present application, and are not described in detail herein.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
Fig. 4 is a schematic diagram of a device for predicting a motion state of an obstacle according to an embodiment of the present disclosure. As shown in fig. 4, the obstacle motion state prediction device includes:
the acquisition module 401 is configured to acquire multi-frame sensing data of a target obstacle, and output time of the multi-frame sensing data is continuous;
an analysis module 402, configured to perform data analysis on each frame of sensing data to obtain a corresponding preliminary motion state prediction result, where the preliminary motion state prediction result at least includes the size and type of the target obstacle, and a predicted speed and a predicted motion trajectory within a preset time length;
an extracting module 403, configured to extract, if a dynamic jump result occurs in the preliminary motion state prediction result, data to be verified, where the multi-frame output time is continuous and the output time is before the output time of the dynamic jump result, from the multi-frame sensing data according to the dynamic jump result;
the checking module 404 is configured to check the preliminary motion state prediction result of the multiple frames of data to be checked to obtain a check result;
a determination module 405 configured to determine a final motion state prediction result of the target obstacle according to the verification result.
According to the technical solution provided by this embodiment of the disclosure, the acquisition module 401 acquires perception time series data of a target obstacle, the perception time series data comprising multiple frames of perception data with consecutive output times; the analysis module 402 performs data analysis on each frame of perception data to obtain a corresponding preliminary motion state prediction result, which at least includes the size and type of the target obstacle and the predicted speed and predicted motion trajectory within a preset time length; if a dynamic jump result occurs in the preliminary motion state prediction results, the extraction module 403 extracts, from the multiple frames of perception data according to the dynamic jump result, data to be verified with consecutive output times located before the output time of the dynamic jump result; the verification module 404 verifies the preliminary motion state prediction results of the multiple frames of data to be verified to obtain a verification result; and the determination module 405 determines the final motion state prediction result of the target obstacle according to the verification result. In other words, when the motion state prediction result of a target obstacle that has always been static suddenly changes to dynamic at some moment, the data to be verified can be extracted from the multi-frame perception time series data, their preliminary motion state prediction results further verified, and the final motion state prediction result determined from the verification result. This effectively prevents a static target obstacle from being predicted as a dynamic one, which would otherwise cause the autonomous vehicle to brake erroneously when driving near the static target obstacle, and improves the safety and intelligence of the vehicle's subsequent planning decisions and motion control.
In some embodiments, the above-mentioned acquisition module 401 includes:
an environment data acquisition unit configured to acquire surrounding environment data of a driving area during automatic driving;
an obstacle determination unit configured to determine a target obstacle based on the surrounding environment data and a preset travel area map;
a data screening unit configured to screen perception time series data including the target obstacle from the surrounding environment data.
In some embodiments, the perception data includes target obstacle image data, three-dimensional spatial perception data including a target obstacle. The analysis module 402 includes:
a data acquisition unit configured to acquire position data and posture data of a host vehicle;
and the prediction unit is configured to perform feature extraction and feature fusion on the target obstacle image data, the three-dimensional space perception data, the self-vehicle position data and the self-posture data to obtain a preliminary motion state prediction result corresponding to each frame of perception data.
In some embodiments, the extraction module 403 extracts, from the multi-frame perception data according to the dynamic jump result, data to be verified with consecutive output times located before the output time of the dynamic jump result, by performing the following:
finding out a result hopping frame corresponding to the dynamic hopping result, and determining the output time of the result hopping frame;
and extracting the data to be checked continuously at the multi-frame output time before the output time of the result jump frame from the multi-frame sensing data.
In some embodiments, the verification module 404 includes:
the data determining unit is configured to determine one frame with the earliest output time in the multiple frames of data to be verified as prior data, and determine other data to be verified except the prior data as comparison data;
a speed determination unit configured to determine a prior prediction speed corresponding to the target obstacle at an output time of the prior data and a comparison prediction speed corresponding to the output time of the comparison data for each frame;
and the verification unit is configured to verify the compared predicted speed according to the prior predicted speed to obtain a verification result.
In some embodiments, the determining module 405 includes:
the calculation unit is configured to calculate a predicted speed change value between a comparison predicted speed corresponding to each frame of comparison data and a priori predicted speed respectively;
a final result determination unit configured to determine a final movement state prediction result of the target obstacle according to the predicted speed change value.
In some embodiments, the final result determining unit may be specifically configured to:
if the predicted speed change value corresponding to each frame of comparison data is larger than the allowable change threshold, determining that the motion state of the target obstacle in the preset time length is dynamic, and outputting a dynamic prediction result;
and if the predicted speed change value corresponding to at least one frame of comparison data is less than or equal to the allowable change threshold, determining that the motion state of the target obstacle in the preset time length is static, and outputting a static prediction result.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present disclosure.
Fig. 5 is a schematic diagram of an electronic device 5 provided by the embodiment of the present disclosure. As shown in fig. 5, the electronic apparatus 5 of this embodiment includes: a processor 501, a memory 502, and a computer program 503 stored in the memory 502 and operable on the processor 501. The steps in the various method embodiments described above are implemented when the processor 501 executes the computer program 503. Alternatively, the processor 501 implements the functions of the respective modules/units in the above-described respective apparatus embodiments when executing the computer program 503.
The electronic device 5 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another electronic device. The electronic device 5 may include, but is not limited to, the processor 501 and the memory 502. Those skilled in the art will appreciate that fig. 5 is merely an example of the electronic device 5 and does not constitute a limitation of it; the device may include more or fewer components than shown, or different components.
The Processor 501 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, or the like.
The memory 502 may be an internal storage unit of the electronic device 5, for example, a hard disk or memory of the electronic device 5. The memory 502 may also be an external storage device of the electronic device 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, or a Flash memory Card (Flash Card) provided on the electronic device 5. The memory 502 may also include both internal and external storage units of the electronic device 5. The memory 502 is used to store the computer program as well as other programs and data required by the electronic device.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the methods of the above embodiments may be implemented by a computer program instructing related hardware; the computer program may be stored in a computer readable storage medium, and when executed by a processor, the steps of the above method embodiments are implemented. The computer program may comprise computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, computer readable media may not include electrical carrier signals or telecommunications signals.
The above examples are only intended to illustrate the technical solutions of the present disclosure, not to limit them; although the present disclosure has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present disclosure, and are intended to be included within the scope of the present disclosure.

Claims (10)

1. A method for predicting a motion state of an obstacle, comprising:
acquiring perception time sequence data of a target obstacle, wherein the perception time sequence data comprise multi-frame perception data with continuous output time;
performing data analysis on each frame of the perception data to obtain a corresponding preliminary motion state prediction result, wherein the preliminary motion state prediction result at least comprises the size and the type of the target obstacle, and the prediction speed and the prediction motion trail within a preset time length;
if a dynamic hopping result appears in the preliminary motion state prediction result, extracting data to be verified, of which the multi-frame output time is continuous and the output time is before the output time of the dynamic hopping result, from the multi-frame sensing data according to the dynamic hopping result;
checking the preliminary motion state prediction result of the multi-frame data to be checked to obtain a checking result;
and determining a final motion state prediction result of the target obstacle according to the verification result.
2. The method according to claim 1, wherein extracting data to be verified that the multi-frame output time is continuous and the output time is before the output time of the dynamic hopping result from the multi-frame perceptual data according to the dynamic hopping result comprises:
finding out a result hopping frame corresponding to the dynamic hopping result, and determining the output time of the result hopping frame;
and extracting the data to be checked which are continuous at the multi-frame output time before the output time of the result jump frame from the multi-frame sensing data.
3. The method according to claim 1, wherein verifying the preliminary motion state prediction result of the plurality of frames of data to be verified to obtain a verification result comprises:
determining a frame with the earliest output moment in the multiple frames of data to be verified as prior data, and determining other data to be verified except the prior data as comparison data;
determining a prior prediction speed corresponding to the target obstacle at the output time of the prior data and a comparison prediction speed corresponding to the output time of the comparison data of each frame;
and verifying the compared predicted speed according to the prior predicted speed to obtain a verification result.
4. The method of claim 3, wherein determining a final motion state prediction result for the target obstacle based on the verification result comprises:
respectively calculating a predicted speed change value between a comparison predicted speed corresponding to the comparison data of each frame and the prior predicted speed;
and determining a final motion state prediction result of the target obstacle according to the predicted speed change value.
5. The method of claim 4, wherein determining a final motion state prediction of the target obstacle based on the predicted speed change value comprises:
if the predicted speed change value corresponding to each frame of comparison data is larger than the allowable change threshold, determining that the motion state of the target obstacle in the preset time length is dynamic, and outputting a dynamic prediction result;
and if the predicted speed change value corresponding to at least one frame of comparison data is smaller than or equal to the allowable change threshold, determining that the motion state of the target obstacle in the preset time length is static, and outputting a static prediction result.
6. The method of claim 1, wherein collecting perceptual temporal data of a target obstacle comprises:
collecting surrounding environment data of a driving area in an automatic driving process;
determining a target obstacle according to the surrounding environment data and a preset driving area map;
and screening out perception time sequence data containing the target obstacle from the surrounding environment data.
7. The method of claim 1, wherein the perception data comprises target obstacle image data, three-dimensional spatial perception data including the target obstacle;
performing data analysis on each frame of the perception data to obtain a corresponding preliminary motion state prediction result, wherein the preliminary motion state prediction result comprises the following steps:
collecting position data and posture data of a vehicle;
and carrying out feature extraction and feature fusion on the target obstacle image data, the three-dimensional space perception data, the self-vehicle position data and the self-posture data to obtain an initial motion state prediction result corresponding to each frame of perception data.
8. An obstacle motion state prediction device, comprising:
the acquisition module is configured to acquire multi-frame sensing data of a target obstacle, and the output time of the multi-frame sensing data is continuous;
the analysis module is configured to perform data analysis on each frame of the perception data to obtain a corresponding preliminary motion state prediction result, and the preliminary motion state prediction result at least comprises the size and the type of the target obstacle, and a prediction speed and a prediction motion track within a preset time length;
the extraction module is configured to extract data to be verified, which has continuous multi-frame output time and output time before the output time of the dynamic jump result, from the multi-frame sensing data according to the dynamic jump result if the dynamic jump result occurs in the preliminary motion state prediction result;
the checking module is configured to check the preliminary motion state prediction result of the multi-frame data to be checked to obtain a checking result;
a determination module configured to determine a final motion state prediction result of the target obstacle according to the verification result.
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202210455799.8A 2022-04-28 2022-04-28 Obstacle motion state prediction method, obstacle motion state prediction device, electronic device, and storage medium Active CN114563007B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210455799.8A CN114563007B (en) 2022-04-28 2022-04-28 Obstacle motion state prediction method, obstacle motion state prediction device, electronic device, and storage medium

Publications (2)

Publication Number Publication Date
CN114563007A true CN114563007A (en) 2022-05-31
CN114563007B CN114563007B (en) 2022-07-29

Family

ID=81721247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210455799.8A Active CN114563007B (en) 2022-04-28 2022-04-28 Obstacle motion state prediction method, obstacle motion state prediction device, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN114563007B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109255341A (en) * 2018-10-30 2019-01-22 百度在线网络技术(北京)有限公司 Extracting method, device, equipment and the medium of barrier perception wrong data
CN109829386A (en) * 2019-01-04 2019-05-31 清华大学 Intelligent vehicle based on Multi-source Information Fusion can traffic areas detection method
CN110018496A (en) * 2018-01-10 2019-07-16 北京京东尚科信息技术有限公司 Obstacle recognition method and device, electronic equipment, storage medium
CN111091591A (en) * 2019-12-23 2020-05-01 百度国际科技(深圳)有限公司 Collision detection method and device, electronic equipment and storage medium
CN111361570A (en) * 2020-03-09 2020-07-03 福建汉特云智能科技有限公司 Multi-target tracking reverse verification method and storage medium
CN113050122A (en) * 2021-03-24 2021-06-29 的卢技术有限公司 Method and system for sensing speed of dynamic obstacle based on convolutional neural network
CN113239719A (en) * 2021-03-29 2021-08-10 深圳元戎启行科技有限公司 Track prediction method and device based on abnormal information identification and computer equipment
CN113264066A (en) * 2021-06-03 2021-08-17 阿波罗智能技术(北京)有限公司 Obstacle trajectory prediction method and device, automatic driving vehicle and road side equipment
US20210350715A1 (en) * 2020-05-11 2021-11-11 Honeywell International Inc. System and method for database augmented ground collision avoidance
CN114104006A (en) * 2022-01-28 2022-03-01 阿里巴巴达摩院(杭州)科技有限公司 Method and device for an autonomous vehicle to pass oncoming vehicles on narrow roads


Also Published As

Publication number Publication date
CN114563007B (en) 2022-07-29

Similar Documents

Publication Publication Date Title
CN109927719B (en) Auxiliary driving method and system based on obstacle trajectory prediction
CN112700470B (en) Target detection and track extraction method based on traffic video stream
US11460851B2 (en) Eccentricity image fusion
RU2742213C1 (en) Method to control information on lanes, method of traffic control and device for control of information on lanes
CN113561963B (en) Parking method and device and vehicle
CN109871787B (en) Obstacle detection method and device
RU2744012C1 (en) Methods and systems for automated determination of objects presence
RU2750243C2 (en) Method and system for generating a trajectory for a self-driving car (sdc)
RU2757234C2 (en) Method and system for calculating data for controlling the operation of a self-driving car
CN113034970A (en) Safety system, automated driving system and method thereof
CN112660128A (en) Apparatus for determining lane change path of autonomous vehicle and method thereof
CN111914691A (en) Rail transit vehicle positioning method and system
Virdi Using deep learning to predict obstacle trajectories for collision avoidance in autonomous vehicles
CN116703966A (en) Multi-object tracking
US11531349B2 (en) Corner case detection and collection for a path planning system
CN114475656A (en) Travel track prediction method, travel track prediction device, electronic device, and storage medium
CN110497906B (en) Vehicle control method, apparatus, device, and medium
CN114563007B (en) Obstacle motion state prediction method, obstacle motion state prediction device, electronic device, and storage medium
CN114730492A (en) Assertion vehicle detection model generation and implementation
CN110696828A (en) Forward target selection method and device and vehicle-mounted equipment
US11983918B2 (en) Platform for perception system development for automated driving system
Franke et al. Towards holistic autonomous obstacle detection in railways by complementing of on-board vision with UAV-based object localization
CN114426030B (en) Pedestrian passing intention estimation method, device, equipment and automobile
Guo et al. Understanding surrounding vehicles in urban traffic scenarios based on a low-cost lane graph
EP4141482A1 (en) Systems and methods for validating camera calibration in real-time

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230824

Address after: Building 2, No. 209 Changxing Third Road, Xiping Street, Songyang County, Lishui City, Zhejiang Province, 323000

Patentee after: Neolithic Unmanned Vehicle (Songyang) Co.,Ltd.

Address before: 100176 room 613, 6 / F, area 2, building a, 12 Hongda North Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Patentee before: NEOLIX TECHNOLOGIES Co.,Ltd.
