CN112541371A - Target object intention prediction method and system - Google Patents

Target object intention prediction method and system

Info

Publication number
CN112541371A
CN112541371A (application CN201910880200.3A)
Authority
CN
China
Prior art keywords
target object
vehicle
future
information
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910880200.3A
Other languages
Chinese (zh)
Inventor
王威仁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Automotive Research and Testing Center
Original Assignee
Automotive Research and Testing Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Automotive Research and Testing Center filed Critical Automotive Research and Testing Center
Priority to CN201910880200.3A priority Critical patent/CN112541371A/en
Publication of CN112541371A publication Critical patent/CN112541371A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a target object intention prediction method, which comprises an information obtaining step and a calculating and map data mapping step. In the information obtaining step, vehicle positioning information of a host vehicle and a plurality of target object information of a target object are obtained, the target object information respectively corresponding to a plurality of time points on a time axis, and each target object information comprising a target object position and a target object speed. The calculating and map data mapping step comprises: mapping the vehicle positioning information onto map data, mapping the target object position corresponding to the last time point onto the map data, calculating an update speed of the target object according to the target object speeds in the target object information, mapping the update speed onto the map data, and predicting a target object future position of the target object on the map data at a future time point according to the update speed. The intention prediction of the target object is thereby assisted.

Description

Target object intention prediction method and system
Technical Field
The present invention relates to a prediction method and a system thereof, and more particularly, to a prediction method and a system thereof for analyzing the intention of a target object during driving.
Background
Advances in technology have pushed vehicle development into an automation stage of intelligent driver assistance or fully automatic driving. Since a vehicle moves dynamically, it must avoid collisions with obstacles such as other vehicles or pedestrians on its route; manufacturers therefore develop methods for predicting the position or trajectory of an object in response to such obstacles, so as to improve driving safety.
However, urban environments are complex, with mixed flows of cars and motorcycles. Especially at intersections with multiple signed directions, the number of obstacles is large and their trajectories are variable, which makes analysis and judgment more difficult. In addition, most obstacle analyses on the market predict the situation at the next time point from the situation at a single time point only; they lack temporal continuity and are prone to misjudging obstacles.
Therefore, how to improve the analysis accuracy of the target object such as the obstacle is an object of the related art.
Disclosure of Invention
The invention provides a target object intention prediction method and a system thereof, which obtain vehicle positioning information and a plurality of target object information corresponding to a plurality of time points on a time axis and map the vehicle positioning information and the target object information onto map data, thereby assisting the intention prediction of a target object and improving the analysis accuracy of the target object.
According to an embodiment of one aspect of the present invention, a target object intention prediction method is provided, which includes an information obtaining step and a calculating and map data mapping step. In the information obtaining step, vehicle positioning information of a host vehicle and a plurality of target object information of at least one target object are obtained, the target object information respectively corresponding to a plurality of time points on a time axis, and each target object information comprising a target object position and a target object speed. The calculating and map data mapping step comprises: mapping the vehicle positioning information onto map data, mapping the target object position at the last time point onto the map data, calculating an update speed of the at least one target object according to the target object speeds in the target object information, mapping the update speed onto the map data, and predicting a target object future position of the at least one target object on the map data at a future time point according to the update speed.
Therefore, the position of the target object can be clearly known by corresponding the vehicle and the target object to the map data, and the intention of the target object can be judged. In addition, the target object speed at different time points is calculated to obtain the update speed of the target object, so that the future position of the target object is predicted, the prediction of the target object has time continuity, and the accuracy of target object analysis can be improved.
The target object intention prediction method may further include a future trajectory selection step, wherein the host vehicle has a plurality of future trajectories located on the map data, and one of the future trajectories is selected according to the target object future position of the at least one target object so as to avoid the at least one target object.
According to the method for predicting the intention of the target object, a region of interest on the map data can be adjusted according to the future trajectory in the calculating and map data corresponding steps, and the updating speed of the at least one target object is calculated when the at least one target object is located within the region of interest.
According to the aforementioned method for predicting the intention of the target object, a future position of the vehicle at the last future time point in the future trajectory can be found, the future position of the vehicle is added with a first set distance to form a front boundary of the region of interest, the rear end of a vehicle body of the vehicle is added with a second set distance to form a rear boundary of the region of interest, the left side of the vehicle body is added with a lane width to form a left boundary of the region of interest, and the right side of the vehicle body is added with a lane width to form a right boundary of the region of interest.
According to the aforementioned target object intention prediction method, the target object speeds within a time threshold are taken to calculate the update speed, the time threshold being defined as the target object speed corresponding to the last time point multiplied by an estimated value, and the estimated value being between 0 and 1.
According to the aforementioned target object intention prediction method, the at least one target object can be classified into a vehicle category, a pedestrian-like category or a pedestrian category, and different estimated values are set for the vehicle category, the pedestrian-like category and the pedestrian category.
According to the method for predicting the intention of the target object, the target object can be classified according to the position of the at least one target object on the map, the length and the width of the at least one target object or the length-width ratio of the at least one target object.
According to the aforementioned target object intention prediction method, the future time point may be set to be equal to the time threshold.
According to the aforementioned target object intention prediction method, in the calculation and map data mapping step, the update speed is calculated by a weighted moving average method.
According to the aforementioned target object intention prediction method, in the calculating and map data mapping step, the vehicle positioning information can be obtained by a real-time kinematic (RTK) positioning technique.
According to another aspect of the present invention, an object intention prediction system is provided, which is applied to the object intention prediction method, and includes a host vehicle, at least one sensor and a processor. The at least one sensor is arranged on the vehicle and used for detecting the at least one target object, and the processor is in signal connection with the at least one sensor to obtain information of the target object.
According to the object intention prediction system, the at least one sensor may have an optical sensor structure.
Drawings
FIG. 1 is a block diagram illustrating a method for predicting an intention of a target object according to an embodiment of the invention;
FIG. 2 is a flow chart illustrating steps of the method for predicting the intention of an object in FIG. 1;
FIG. 3 is a schematic diagram illustrating the target object intention prediction method of FIG. 1 mapped onto the map data; and
FIG. 4 is a schematic diagram illustrating an architecture of a target object intent prediction system according to another embodiment of the invention.
[Description of reference numerals]
100 method for predicting intention of target object
110 information acquisition step
120 calculation and mapping step
130 future trajectory selection step
K1 map data
H1 host vehicle
R1 road
S01, S02, S03 steps
S04, S05, S06 steps
S07, S08, S09 steps
S10, S11, S12 steps
G1, G2, G3 targets
G4 and G5 target
Future trajectories of D1, D2 and D3
W1 region of interest range
P1 target position
P2 future position of target
200 target intention prediction system
210 host vehicle
220 sensor
230 processor
Detailed Description
Embodiments of the present invention will be described below with reference to the accompanying drawings. For the purpose of clarity, numerous implementation details are set forth in the following description. However, the reader should understand that these implementation details should not be used to limit the invention. That is, in some embodiments of the invention, these implementation details are not necessary. In addition, for the sake of simplicity, some conventional structures and elements are shown in the drawings in a simplified schematic manner; and repeated elements will likely be referred to using the same reference number or similar reference numbers.
In addition, when an element (or a mechanism or module, etc.) is "connected," "disposed" or "coupled" to another element, it can be directly connected, disposed or coupled to the other element, or it can be indirectly connected, disposed or coupled to the other element, that is, other elements exist between the element and the other element. When an element is "directly connected," "directly disposed" or "directly coupled" to another element, no other element is interposed between the element and the other element. The terms first, second, third, etc. are used merely to describe various elements or components and do not limit the elements or components themselves, so that a first element/component may also be referred to as a second element/component. Moreover, the combinations of elements/components/mechanisms/modules described herein are not combinations that are commonly known, conventional or customary in the art, and whether such a combination can be easily accomplished cannot be judged by a person of ordinary skill in the art merely from whether the individual elements/components/mechanisms/modules themselves are well known.
Referring to fig. 1, fig. 1 is a block diagram illustrating a method 100 for predicting the intention of a target object according to an embodiment of the invention. The object intent prediction method 100 includes an information acquisition step 110 and a calculation and map data mapping step 120.
In the information obtaining step 110, a vehicle positioning information of a vehicle and a plurality of target object information of a target object are obtained, the target object information respectively corresponds to a plurality of time points on a time axis, and each target object information includes a target object position and a target object speed.
The calculating and mapping step 120 includes: the method comprises the steps of corresponding the vehicle positioning information to a map data, corresponding the target position at the last time point to the map data, calculating an updating speed of the target according to the speed of the target in the target information, corresponding the updating speed to the map data, and predicting the future position of the target on the map data at least one future time point according to the updating speed.
Therefore, the position of the target object can be clearly known by corresponding the vehicle and the target object to the map data, and the intention of the target object can be judged. In addition, the target object speed at different time points is calculated to obtain the update speed of the target object, so that the future position of the target object is predicted, the prediction of the target object has time continuity, and the accuracy of target object analysis can be improved. The details of the target object intent prediction method 100 will be described in more detail later.
The vehicle travels on a road, and the vehicle positioning information may be obtained by Real Time Kinematic (RTK) technique, so the vehicle positioning information may include a longitude, a latitude, and a heading angle of the vehicle.
The target object information may be obtained by at least one sensor disposed on the host vehicle. In other embodiments, the target object information may be obtained from environmental messages transmitted over vehicle-to-everything (V2X) communication, but the invention is not limited thereto.
Therefore, in the information obtaining step 110, while the host vehicle is moving, the sensor continuously detects the target object, obtains the target object information corresponding to the target object at each time point, and stores the target object information in matrix form in a processor disposed on the host vehicle. It should be noted that the target object position contained in each target object information obtained by the sensor is the position of the target object relative to the host vehicle, and the target object speed is the speed of the target object relative to the host vehicle.
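As an illustrative sketch only (the field names, container types and window length below are assumptions, not from the patent), the per-time-point target object information stored by the processor might be organized as follows:

```python
from collections import defaultdict, deque
from dataclasses import dataclass

@dataclass
class TargetInfo:
    """One stored record: position and speed are relative to the host vehicle."""
    t: float    # time point (s)
    x: float    # relative position on the X axis (m)
    y: float    # relative position on the Y axis (m)
    vx: float   # relative speed on the X axis (m/s)
    vy: float   # relative speed on the Y axis (m/s)

# one history per detected target object, newest record last
history = defaultdict(lambda: deque(maxlen=50))
history["G1"].append(TargetInfo(t=0.02, x=8.5, y=3.2, vx=1.1, vy=0.0))
```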
In the calculating and map data mapping step 120, the main purpose is to map all information onto the map data. More specifically, the map data contains rich map information, preferably high-precision map information such as road markings, lane numbers, sidewalks and buildings, so that when all the information is mapped onto the map data, the positions of the host vehicle and the target object on the map data can be clearly known, which further helps improve the accuracy of the intention prediction. For example, when a target object is located on a sidewalk, its intention is predicted to be moving along the sidewalk; when the target object is located at an intersection, its intention is presumed to be crossing the road. In addition, if the target object is on a sidewalk, it can be judged to be a pedestrian or a pedestrian-like object, whose speed changes little but whose direction can change with high mobility. The map data can therefore effectively assist the intention prediction.
In an embodiment, the map data may be stored in the processor of the host vehicle in advance, or obtained in real time through the internet of vehicles, but the invention is not limited thereto. The vehicle positioning information obtained by the real-time kinematic positioning technique can include the longitude, latitude and heading angle, and the map data can be rotated by the heading angle and translated by the longitude and latitude so that the vehicle positioning information corresponds to the map data.
Since the target object position contained in each target object information is the position of the target object relative to the host vehicle, a coordinate transfer formula, as shown in formula (1), is used to transfer the target object position onto the map data, and the origin after the transfer is the position of the host vehicle.
x' = x × cos θ - y × sin θ, y' = x × sin θ + y × cos θ (1)
where x and y denote the original target object position, x' and y' denote the target object position transferred onto the map data (corresponding to the X axis and Y axis), and θ is the heading angle, with the X axis taken as 0 degrees.
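A minimal Python sketch of this coordinate transfer, assuming the standard 2-D rotation form of formula (1) (the function name and sample values are illustrative only):

```python
import math

def to_map_frame(x, y, heading_deg):
    """Formula (1): rotate a target object position given relative to the host
    vehicle by the heading angle theta (X axis = 0 degrees); the origin after
    the transfer is the host vehicle position on the map data."""
    theta = math.radians(heading_deg)
    x_map = x * math.cos(theta) - y * math.sin(theta)
    y_map = x * math.sin(theta) + y * math.cos(theta)
    return x_map, y_map

# e.g. a target 10 m ahead and 2 m to the left of a vehicle heading 30 degrees
print(to_map_frame(10.0, 2.0, 30.0))
```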
In the calculating and map data mapping step 120, the target object position at the last time point may first be mapped onto the map data, the update speed of the target object calculated, and the update speed then mapped onto the map data; alternatively, the update speed of the target object may be calculated first, and the target object position at the last time point and the update speed then mapped onto the map data simultaneously. The invention is not limited thereto.
Preferably, the update speed is calculated by a Weighted Moving Average (WMA) method, as shown in formula (2).
Vwma_x = ( Σ i × Vx_i ) / ( Σ i ), Vwma_y = ( Σ i × Vy_i ) / ( Σ i ), with the sums taken over i = 1 to n (2)
where Vwma_x denotes the component of the update speed on the X axis, Vwma_y denotes the component of the update speed on the Y axis, i is a positive integer, n is the number of time points, Vx_i denotes the component of the target object speed on the X axis at each time point, and Vy_i denotes the component of the target object speed on the Y axis at each time point. For example, the target object speeds from the 1st time point to the 10th time point may be substituted into formula (2), so that n is 10; when the target object speed at the 11th time point is obtained, n is kept at 10, the target object speed data of the 1st time point is deleted, the target object speed of the 2nd time point becomes that of the 1st time point, and so on, so that the target object speed of the 11th time point becomes that of the 10th time point. In other words, the oldest target object speed is deleted and the newest target object speed is added so that n remains the same.
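A minimal sketch of the weighted moving average with the sliding window described above, assuming linear weights 1..n with the newest sample weighted most (the window length and sample values are illustrative):

```python
from collections import deque

def weighted_moving_average(samples):
    """Weighted moving average: the i-th sample (oldest first) gets weight i,
    so the most recent target object speed contributes the most."""
    weights = range(1, len(samples) + 1)
    return sum(w * v for w, v in zip(weights, samples)) / sum(weights)

# sliding window: when the 11th sample arrives the oldest is dropped, keeping n = 10
vx_window = deque(maxlen=10)
vy_window = deque(maxlen=10)
for vx, vy in [(1.0, 0.1), (1.1, 0.0), (1.2, -0.1)]:   # per-time-point speeds (m/s)
    vx_window.append(vx)
    vy_window.append(vy)
update_vx = weighted_moving_average(vx_window)          # Vwma_x
update_vy = weighted_moving_average(vy_window)          # Vwma_y
```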
After the update speed is calculated, the update speed can be substituted into formula (3) to be mapped onto the map data.
Vwma_x' = Vwma_x × cos θ - Vwma_y × sin θ, Vwma_y' = Vwma_x × sin θ + Vwma_y × cos θ (3)
where Vwma_x' denotes the component on the X axis of the update speed transferred onto the map data, and Vwma_y' denotes the component on the Y axis of the update speed transferred onto the map data.
Therefore, multiplying the update speed transferred onto the map data by the future time point, for example 1 second, gives the target object future position on the map data 1 second later.
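A sketch combining formula (3) and this extrapolation, assuming the update speed is rotated by the same heading angle as the position (function name and sample values are illustrative):

```python
import math

def predict_future_position(x_map, y_map, vwma_x, vwma_y, heading_deg, dt):
    """Rotate the update speed into the map frame (formula (3)) and extrapolate
    the last mapped target object position by dt seconds."""
    theta = math.radians(heading_deg)
    vx_map = vwma_x * math.cos(theta) - vwma_y * math.sin(theta)   # Vwma_x'
    vy_map = vwma_x * math.sin(theta) + vwma_y * math.cos(theta)   # Vwma_y'
    return x_map + vx_map * dt, y_map + vy_map * dt

# target object future position 1 second after the last time point
print(predict_future_position(8.5, 3.2, 1.1, 0.0, 30.0, dt=1.0))
```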
In the calculating and map data mapping step 120, the update speed is calculated from the target object speeds within a time threshold, where the time threshold is defined as the target object speed at the last time point multiplied by an estimated value, the estimated value being between 0 and 1. Through the setting of the time threshold, the time threshold of the target object can be adjusted according to the target object speed at the last time point (i.e. the current speed), so that the update speed better matches the actual situation and the accuracy of the target object intention prediction method 100 is improved.
In one embodiment, the target object intention prediction method 100 may, in the calculating and map data mapping step 120, classify the target object into a pedestrian category, a pedestrian-like category or a vehicle category, and set different estimated values for the vehicle category, the pedestrian-like category and the pedestrian category. If the target object is classified as the vehicle category, the estimated value is set to 1/5 second; if the target object is classified as the pedestrian-like category, the estimated value is set to 1/8 second; if the target object is classified as the pedestrian category, the estimated value is set to 1/10 second.
For the classification of the target object, the target object may be classified according to its position on the map data, its length and width, or its length-width ratio. If the target object is not located on a sidewalk, it is further judged whether its length and width are greater than 4 meters and 1.5 meters, respectively; if so, the target object is judged to be of the vehicle category. Otherwise, it is further judged whether the length-width ratio of the target object is greater than 2:1; if so, the target object is judged to be of the pedestrian-like category, such as a motorcycle or a bicycle; otherwise, the target object is judged to be of the pedestrian category. The length and width of the target object can be detected by the host vehicle and then transformed by formulas (4) and (5) to correspond to the map data, so as to classify the target object.
Hr = Wo × cos θ2 + Ho × sin θ2 (4)
Wr = Wo × sin θ2 + Ho × cos θ2 (5)
where Hr denotes the converted length, Wr denotes the converted width, Ho denotes the original length, and Wo denotes the original width.
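A sketch of the classification rules and the dimension conversion of formulas (4) and (5), using the 4 m / 1.5 m and 2:1 thresholds from the description (the function names and the interpretation of θ2 as the target orientation angle are assumptions):

```python
import math

def converted_dimensions(length_o, width_o, theta2_deg):
    """Formulas (4) and (5) as written above: convert the detected length Ho and
    width Wo using the angle theta2 before comparing against the map data."""
    t = math.radians(theta2_deg)
    h_r = width_o * math.cos(t) + length_o * math.sin(t)   # Hr, formula (4)
    w_r = width_o * math.sin(t) + length_o * math.cos(t)   # Wr, formula (5)
    return h_r, w_r

def classify_target(on_sidewalk, length_m, width_m):
    """Classification rules from the description: sidewalk implies pedestrian or
    pedestrian-like; otherwise size, then aspect ratio, decides the category."""
    if on_sidewalk:
        return "pedestrian"                      # or "pedestrian-like"; see text above
    if length_m > 4.0 and width_m > 1.5:
        return "vehicle"
    if width_m > 0 and length_m / width_m > 2.0:
        return "pedestrian-like"                 # e.g. motorcycle or bicycle
    return "pedestrian"
```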
After classifying the target object, different estimated values can be set for the different target object categories in consideration of factors such as flexibility, degree of freedom and how easily the path is changed. Therefore, when the target object is classified as the vehicle category and the target object speed at the last time point is 40 km/h (11.1 m/s), the estimated value is 1/5 second, so the time threshold can be set to 2.2 seconds, and all the target object speeds at all time points within 2.2 seconds before the last time point are taken and substituted into formula (2). When the target object is classified as the pedestrian-like category and the target object speed at the last time point is 40 km/h (11.1 m/s), the estimated value is 1/8 second, so the time threshold can be set to 1.3 seconds, and all the target object speeds at all time points within 1.3 seconds before the last time point are taken and substituted into formula (2). When the target object is classified as the pedestrian category and the target object speed at the last time point is 16 km/h (4.4 m/s), the estimated value is 1/10 second, so the time threshold can be set to 0.44 seconds, and all the target object speeds at all time points within 0.44 seconds before the last time point are taken and substituted into formula (2); however, the invention is not limited thereto.
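A sketch of the time-threshold computation using the estimated values above (the category names are illustrative, and the rounding in the worked examples above may differ slightly):

```python
# estimated value per category (seconds of history per m/s of current speed)
ESTIMATED_VALUE = {"vehicle": 1 / 5, "pedestrian-like": 1 / 8, "pedestrian": 1 / 10}

def time_threshold(category, last_speed_mps):
    """Time threshold = target object speed at the last time point x estimated value."""
    return last_speed_mps * ESTIMATED_VALUE[category]

print(round(time_threshold("vehicle", 11.1), 2))      # 40 km/h vehicle -> about 2.2 s of history
print(round(time_threshold("pedestrian", 4.4), 2))    # 16 km/h pedestrian -> about 0.44 s of history
```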
Further, the future time point may be set equal to the time threshold. Thus, when the target object is classified as the vehicle category, the future time point may be set to 2.2 seconds, i.e. the coordinate of the target object future position on the X axis equals x' + Vwma_x' × 2.2 and the coordinate on the Y axis equals y' + Vwma_y' × 2.2. By analogy, when the target object is classified as the pedestrian-like category, the future time point may be set to 1.3 seconds, i.e. the coordinate of the future position on the X axis equals x' + Vwma_x' × 1.3 and the coordinate on the Y axis equals y' + Vwma_y' × 1.3; when the target object is classified as the pedestrian category, the future time point may be set to 0.44 seconds, i.e. the coordinate of the future position on the X axis equals x' + Vwma_x' × 0.44 and the coordinate on the Y axis equals y' + Vwma_y' × 0.44.
In addition, in the embodiment of fig. 1, the target object intent prediction method 100 may further include a future trajectory selection step 130. The vehicle comprises a plurality of future tracks positioned on the map data, and one of the future tracks can be selected according to the future position of the target object so as to avoid the target object.
From the maximum and minimum vehicle speeds, the heading angle, the path curvature and the like, a plurality of future trajectories that the host vehicle can travel can be obtained; that is, each future trajectory includes the vehicle future positions of the host vehicle corresponding to a plurality of future time points. Since the target object future position at the future time point has been predicted, a suitable future trajectory can be selected such that the vehicle future position differs from the target object future position at the same future time point, so as to avoid the target object.
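A sketch of one possible selection rule, assuming each future trajectory and the target object are each represented by a list of positions at the same future time points (the safety radius and function name are assumptions, not from the patent):

```python
def select_trajectory(future_trajectories, target_future_positions, safety_radius=1.0):
    """Return the first future trajectory that never comes within safety_radius
    of the predicted target object position at the same future time index."""
    for trajectory in future_trajectories:              # assumed ordered by preference
        safe = all(
            (px - tx) ** 2 + (py - ty) ** 2 > safety_radius ** 2
            for (px, py), (tx, ty) in zip(trajectory, target_future_positions)
        )
        if safe:
            return trajectory
    return None   # no collision-free trajectory; the caller must brake or replan
```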
In the step 120 of calculating and mapping data, a Region of interest (ROI) on the map data can be adjusted according to the future trajectory, and when the target is located within the Region of interest, the update speed of the target is calculated. In the case where the number of target objects is too large, since the efficiency of the processor in the host vehicle is likely to be reduced by analyzing or calculating the target objects one by one, it is possible to set an area within a certain distance from the host vehicle as the region-of-interest range, where the target objects have a possibility of colliding with the host vehicle, and where the target objects outside the region-of-interest range have no possibility of colliding with the host vehicle.
The region-of-interest range may cover the ranges traveled by the plurality of future trajectories. For example, the vehicle future position (the farthest position) at the last future time point among the future trajectories may be found, and a first set distance, for example 3 meters, is added forward of that vehicle future position to serve as the front boundary of the region-of-interest range; a second set distance, for example 3 meters, is added behind the rear end of the vehicle body to serve as the rear boundary of the region-of-interest range; the left side of the vehicle body plus one lane width serves as the left boundary of the region-of-interest range, and the right side of the vehicle body plus one lane width serves as the right boundary of the region-of-interest range, but the invention is not limited thereto. Therefore, the region-of-interest range can be adjusted according to the dynamics and future trajectories of the host vehicle, providing greater flexibility.
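A sketch of these boundaries, assuming a map frame whose X axis points along the host vehicle's direction of travel (the function names are illustrative; the 3-meter defaults follow the example above):

```python
def region_of_interest(ego_x, ego_y, trajectory_end_x, body_length, lane_width,
                       front_margin=3.0, rear_margin=3.0):
    """Front boundary: farthest vehicle future position plus a set distance.
    Rear boundary: rear end of the vehicle body plus a set distance.
    Left/right boundaries: one lane width beyond each side of the vehicle body."""
    return {
        "front": trajectory_end_x + front_margin,
        "rear": ego_x - body_length - rear_margin,
        "left": ego_y + lane_width,
        "right": ego_y - lane_width,
    }

def in_roi(x, y, roi):
    """True if a mapped target object position falls inside the ROI range."""
    return roi["rear"] <= x <= roi["front"] and roi["right"] <= y <= roi["left"]
```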
Referring to fig. 2 to fig. 3 in conjunction with fig. 1, fig. 2 is a flowchart illustrating steps of the target object intent prediction method 100 of fig. 1, and fig. 3 is a schematic diagram illustrating the target object intent prediction method 100 of fig. 1 corresponding to the graph K1.
The host vehicle H1 travels on a road R1 and can be continuously dynamically positioned, so step S02 can be performed to obtain the vehicle positioning information, step S03 can be performed to detect a plurality of target objects G1, G2, G3, G4 and G5, and step S01 can be performed to obtain the map data K1 in real time; step S04 can then be performed to map the vehicle positioning information and the target object positions of the target objects G1, G2, G3, G4 and G5 (only the target object position P1 of target object G1 is indicated in FIG. 3) onto the map data K1.
The target information detected in step S03 can be stored and processed for subsequent use. In one embodiment, the plurality of target objects G1, G2, G3, G4, G5 are numbered from near to far, i.e., target object G1 is closest to the host vehicle H1 and target object G5 is farthest from the host vehicle H1. If the same target (e.g., target G1) is detected at a plurality of consecutive time points, an accumulated number of times of target G1 is accumulated, whereas if the target (e.g., target G2) disappears, the accumulated number of times of target G2 is zeroed and recalculated, as shown in table 1.
TABLE 1: Accumulated count of each target object at different time points

               G1    G2    G3    G4    G5
Time point 1    1     1     1     1     -
Time point 2    2     2     2     -     -
Time point 3    3     -     3     1     -
Time point 4    4     1     -     -     1
Specifically, as shown in table 1, when the target objects G1, G2, G3, and G4 are detected at time point 1, the target object information of the target objects G1, G2, G3, and G4 is stored in the processor, and the cumulative number of times corresponding to the target objects G1, G2, G3, and G4 is 1. When the target objects G1, G2 and G3 are detected at the time point 2, the target object information of the target objects G1, G2 and G3 is stored in the processor, and all the target object information corresponding to the undetected target object G4 before the time point 2 is cleared. When the target objects G1, G3 and G4 are detected at the time point 3, the target object information of the target objects G1, G3 and G4 is stored in the processor, and all the target object information corresponding to the undetected target object G2 before the time point 3 is emptied; since the target G4 disappeared at time 2, the cumulative count is reset to zero and recalculated, and the cumulative count of the target G4 corresponding to time 3 is 1. It should be noted that object G4 at time 1 and object G4 at time 3 may be different objects and are similarly numbered only by distance. In addition, when the target objects G1, G2 and G5 are detected at the time point 4, the target object information of the target objects G1, G2 and G5 is stored in the processor, and all the target object information of the corresponding undetected target objects G3 and G4 before the time point 4 is emptied; since the target G2 disappeared at time point 3, the cumulative count is reset to zero and recalculated, and the cumulative count of the target G2 corresponding to time point 4 is 1. It should be noted that the data processing may be completed before step S09, and is not necessarily completed in step S03.
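A sketch of the accumulated-count bookkeeping described in Table 1 (the dictionary-based representation is an assumption; the patent only specifies that a disappearing target's count and stored information are reset):

```python
def update_counts(prev_counts, detected_ids):
    """One time point of bookkeeping: increment the count of every target object
    detected now; targets not detected are dropped (count and stored info reset)."""
    return {tid: prev_counts.get(tid, 0) + 1 for tid in detected_ids}

counts = {}
for detections in [["G1", "G2", "G3", "G4"],    # time point 1
                   ["G1", "G2", "G3"],          # time point 2 (G4 disappears)
                   ["G1", "G3", "G4"],          # time point 3 (G2 disappears, G4 restarts)
                   ["G1", "G2", "G5"]]:         # time point 4
    counts = update_counts(counts, detections)
print(counts)   # {'G1': 4, 'G2': 1, 'G5': 1}, matching Table 1
```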
Step S05 may be executed in advance to obtain, from the maximum and minimum vehicle speeds, the heading angle, the path curvature and the like, a plurality of future trajectories D1, D2 and D3 that the host vehicle H1 may travel. After step S04 is completed and the information of step S05 is obtained, step S06 can be executed to adjust the region-of-interest range W1 according to the future trajectories D1, D2 and D3.
Thereafter, step S07 is performed to confirm whether the target objects G1, G2, G3, G4 and G5 are located within the region-of-interest range W1. As shown in FIG. 3, since only target object G1 is located within the region-of-interest range W1, the process proceeds to step S08 to classify target object G1.
Target object G1 can be classified into the pedestrian category by the above classification method, and the time threshold can then be set to 0.44 seconds according to the above setting.
Therefore, assuming that the host vehicle H1 performs a detection every 0.02 second, if the accumulated count of the data-processed target object G1 equals 22, it means that target object G1 has been continuously detected within 0.44 seconds, so the 22 target object speeds are substituted into formula (2) with n set to 22, and the update speed is obtained in step S09.
Then, step S10 is performed to convert the update speed by formula (3), and step S11 is performed to predict the target object future position P2 at the future time point of 0.44 seconds according to the update speed on the map data K1 and the target object position P1 at the last time point.
As shown in FIG. 3, step S12 is performed to select one of the future trajectories D1, D2 and D3 according to the future time point and the target object future position P2. It should be noted that neither of the future trajectories D2 and D3 intersects target object G1, but future trajectory D2 requires no change of speed or direction and is therefore the best choice and may be preferred.
Referring to fig. 4, fig. 4 is a schematic diagram illustrating an architecture of a target object intent prediction system 200 according to another embodiment of the invention. The object intent prediction system 200 is applied to the object intent prediction method 100 of fig. 1. The target intention prediction system 200 includes a host vehicle 210, at least one sensor 220, and a processor 230. The at least one sensor 220 is disposed on the vehicle 210 and configured to detect a target object, and the processor 230 is in signal connection with the at least one sensor 220 to obtain information of the target object.
In the embodiment of FIG. 4, the number of sensors 220 may be two, and preferably the sensors 220 have an optical sensor structure, such as a lidar (optical radar) structure, but in other embodiments the invention is not limited thereto.
As can be seen from the above embodiments, the present invention has the following advantages.
First, by mapping the host vehicle and the target objects onto the map data, the future behavior of static or dynamic target objects around the vehicle can be effectively predicted, addressing the motion prediction of target objects in complex environments such as urban areas.
Second, the optimal future trajectory is selected through the future trajectory selection step so as to avoid the target object.
Although the present invention has been described with reference to the above embodiments, it should be understood that various changes and modifications can be made therein by those skilled in the art without departing from the spirit and scope of the invention.

Claims (12)

1. A method for predicting an intention of a target object, comprising:
an information obtaining step, obtaining a vehicle positioning information of a vehicle and a plurality of target object information of at least one target object, wherein the target object information respectively corresponds to a plurality of time points on a time axis, and each target object information comprises a target object position and a target object speed; and
a step of calculating and mapping data, comprising:
mapping the vehicle positioning information to a map data;
mapping the target object position at the last time point onto the map data; and
calculating an updating speed of the at least one object according to the speed of the objects in the information of the objects, corresponding the updating speed to the map data, and predicting a future position of the at least one object on the map data at a future time point according to the updating speed.
2. The method of predicting the intention of a target object according to claim 1, further comprising:
a future track selection step, wherein the vehicle comprises a plurality of future tracks positioned on the map data, and one of the future tracks is selected according to the future position of the at least one target object so as to avoid the at least one target object.
3. The method of claim 2, wherein in the calculating and mapping steps, a region of interest range on the map is adjusted according to the future trajectories, and the update speed of the at least one object is calculated when the at least one object is within the region of interest range.
4. The method as claimed in claim 3, wherein the future position of the vehicle at the last future time point in the future trajectory is found, the future position of the vehicle plus a first predetermined distance is taken as a front boundary of the ROI, the rear end of a vehicle body of the vehicle plus a second predetermined distance is taken as a rear boundary of the ROI, the left side of the vehicle plus a lane width is taken as a left boundary of the ROI, and the right side of the vehicle plus a lane width is taken as a right boundary of the ROI.
5. The method as claimed in claim 1, wherein the update speed is calculated by taking the target object speeds within a time threshold, and the time threshold is defined as the target object speed at the last time point multiplied by an estimated value, wherein the estimated value is between 0 and 1.
6. The method as claimed in claim 5, wherein the at least one target object is classified as a vehicle type, a pedestrian-like type or a pedestrian type, and the estimated value is set differently for the vehicle type, the pedestrian-like type and the pedestrian type.
7. The method of claim 6, wherein the at least one object is classified according to a position of the at least one object on the map, a length and a width of the at least one object, or a length-width ratio of the at least one object.
8. The method of claim 7, wherein the future time point is set equal to the time threshold.
9. The method of claim 1, wherein in the calculating and mapping steps, the update rate is calculated by a weighted moving average method.
10. The method as claimed in claim 1, wherein in the step of calculating and mapping, the vehicle location information is obtained by a real-time dynamic location technique.
11. An object intention prediction system applied to the object intention prediction method according to claim 1, the object intention prediction system comprising:
the host vehicle;
at least one sensor disposed on the vehicle for detecting the at least one target object; and
a processor in signal connection with the at least one sensor for obtaining the information of the objects.
12. The system of claim 11, wherein the at least one sensor has an optical sensor structure.
CN201910880200.3A 2019-09-18 2019-09-18 Target object intention prediction method and system Pending CN112541371A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910880200.3A CN112541371A (en) 2019-09-18 2019-09-18 Target object intention prediction method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910880200.3A CN112541371A (en) 2019-09-18 2019-09-18 Target object intention prediction method and system

Publications (1)

Publication Number Publication Date
CN112541371A true CN112541371A (en) 2021-03-23

Family

ID=75012171

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910880200.3A Pending CN112541371A (en) 2019-09-18 2019-09-18 Target object intention prediction method and system

Country Status (1)

Country Link
CN (1) CN112541371A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103879404A (en) * 2012-12-19 2014-06-25 财团法人车辆研究测试中心 Moving-object-traceable anti-collision warning method and device thereof
CN105730330A (en) * 2014-12-11 2016-07-06 财团法人车辆研究测试中心 Traffic safety system and barrier screening method thereof
CN106864454A (en) * 2015-11-06 2017-06-20 福特全球技术公司 For the method and apparatus of the manipulation process of auxiliary maneuvering vehicle
CN109960261A (en) * 2019-03-22 2019-07-02 北京理工大学 A kind of dynamic barrier preventing collision method based on collision detection
CN110197027A (en) * 2019-05-28 2019-09-03 百度在线网络技术(北京)有限公司 A kind of automatic Pilot test method, device, smart machine and server


Similar Documents

Publication Publication Date Title
US12050259B2 (en) Extended object tracking using RADAR and recursive least squares
US11161525B2 (en) Foreground extraction using surface fitting
CN108628300B (en) Route determination device, vehicle control device, route determination method, and storage medium
US11648939B2 (en) Collision monitoring using system data
JP2022514975A (en) Multi-sensor data fusion method and equipment
CN112041633A (en) Data segmentation using masks
US11697412B2 (en) Collision monitoring using statistic models
US10845813B2 (en) Route setting method and route setting device
WO2023070258A1 (en) Trajectory planning method and apparatus for vehicle, and vehicle
CN113228040A (en) Multi-level object heading estimation
WO2023213018A1 (en) Car following control method and system
JP6171499B2 (en) Risk determination device and risk determination method
JP2020060369A (en) Map information system
JP2023548879A (en) Methods, devices, electronic devices and storage media for determining traffic flow information
WO2019172104A1 (en) Moving body behavior prediction device
CN112673230A (en) Driving assistance method and driving assistance device
TWI728470B (en) Target intention predicting method and system thereof
US11922701B2 (en) Method and system for creating a semantic representation of the environment of a vehicle
CN112541371A (en) Target object intention prediction method and system
US12061255B2 (en) Scan matching and radar pose estimator for an autonomous vehicle based on hyper-local submaps
US20230342954A1 (en) Method for Estimating an Ego Motion of a Vehicle on the Basis of Measurements of a Lidar Sensor and Computing Device
CN114817765A (en) Map-based target course disambiguation
CN114730495A (en) Method for operating an environment detection device with grid-based evaluation and with fusion, and environment detection device
CN114670851A (en) Driving assistance system, method, terminal and medium based on optimizing tracking algorithm
JP2018185156A (en) Target position estimation method and target position estimation device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination