CN116300848A - Track prediction method, track prediction device, terminal equipment and readable storage medium - Google Patents


Info

Publication number
CN116300848A
CN116300848A (application number CN202211093943.4A)
Authority
CN
China
Prior art keywords
track
moment
motion
target
avoidance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211093943.4A
Other languages
Chinese (zh)
Inventor
罗沛
曹晟
范忠银
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Uditech Co Ltd
Original Assignee
Uditech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Uditech Co Ltd filed Critical Uditech Co Ltd
Priority to CN202211093943.4A priority Critical patent/CN116300848A/en
Publication of CN116300848A publication Critical patent/CN116300848A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application belongs to the technical field of intelligent control, and provides a track prediction method, a track prediction device, terminal equipment and a readable storage medium. The track prediction method comprises: acquiring environment data of the environment in which a target object is located, wherein the environment comprises at least one avoidance target related to the target object; determining, according to the environment data, at least one motion track of each avoidance target from the current moment to a future moment, and a track confidence coefficient of each motion track; and adjusting the track confidence coefficient of each motion track according to map information of the current area to obtain an adjusted track confidence coefficient, wherein the current area comprises the areas through which the motion tracks pass, and the adjusted track confidence coefficient is used by the target object to avoid each avoidance target as an obstacle. The embodiments of the application can improve the safety of the target object during automatic driving.

Description

Track prediction method, track prediction device, terminal equipment and readable storage medium
Technical Field
The application belongs to the technical field of intelligent control, and particularly relates to a track prediction method, a track prediction device, terminal equipment and a readable storage medium.
Background
The track prediction algorithm plays an important role in automatic driving technology; an excellent track prediction algorithm can reduce the load on the motion planning module, making the driving process more stable and smooth. In some specific environments, the motion intent of an avoidance target is highly subjective. For example, unmanned delivery vehicles in a closed park usually travel on narrow roads with many pedestrians and bicycles; the motion intentions of pedestrians and bicycles are highly subjective, and vehicles in the closed park do not necessarily travel according to predetermined rules. Current track prediction approaches cannot cope well with sudden behavior of an avoidance target, so misjudgments occur when an avoidance target moves abnormally, which may lead to traffic accidents.
Disclosure of Invention
The embodiment of the application provides a track prediction method, a track prediction device, a terminal device and a readable storage medium, which can improve the safety of automatic driving.
A first aspect of the embodiments of the present application provides a track prediction method, including: acquiring environment data of the environment in which a target object is located, wherein the environment comprises at least one avoidance target related to the target object; determining, according to the environment data, at least one motion track of each avoidance target from the current moment to a future moment, and a track confidence coefficient of each motion track; and adjusting the track confidence coefficient of each motion track according to map information of the current area to obtain an adjusted track confidence coefficient, wherein the current area comprises the areas through which the motion tracks pass, and the adjusted track confidence coefficient is used by the target object to avoid each avoidance target as an obstacle.
In some embodiments of the present application, determining, according to the environment data, at least one motion track of each avoidance target from the current moment to the future moment includes: determining, according to the environment data, positions to be matched at which each avoidance target may appear at a plurality of motion moments within a preset duration, wherein the preset duration is the time period from the current moment to the future moment; acquiring motion data of each avoidance target; matching the positions to be matched to each avoidance target according to the motion data, to obtain one or more predicted positions of the avoidance target at each motion moment; and connecting the predicted positions of each avoidance target in time order to obtain at least one motion track within the preset duration.
In some embodiments of the present application, matching the positions to be matched to each avoidance target according to the motion data, to obtain the predicted positions of the avoidance target at each motion moment, includes: determining, according to the motion data, the position offset of each avoidance target at each motion moment relative to the previous motion moment; and determining, among the positions to be matched, the position whose offset from the first position at the first moment satisfies the position offset associated with the second moment as the predicted position at the second moment, wherein the second moment is any motion moment within the preset duration, and the first moment is the motion moment immediately preceding the second moment.
In some embodiments of the present application, determining, according to the environment data, the positions to be matched at which each avoidance target may appear at a plurality of motion moments within the preset duration includes: determining a plurality of motion moments within the preset duration, and generating the positions to be matched of the avoidance target at each motion moment together with a reference type of the avoidance target at each position to be matched. Accordingly, determining the predicted position at the second moment includes: taking, as the predicted position at the second moment, the position to be matched whose distance from the first position satisfies the position offset associated with the second moment and whose reference type is the same as the type of the avoidance target.
In some embodiments of the present application, determining the track confidence coefficient of each motion track includes: determining a position confidence of the avoidance target at each predicted position; and determining the track confidence coefficient of the motion track according to the position confidences of the predicted positions on the same motion track.
In some embodiments of the present application, the map information records the impassable area of the avoidance target; and adjusting the track confidence coefficient of each motion track according to the map information of the current area to obtain the adjusted track confidence coefficient includes: adjusting the position confidence of any predicted position located within the impassable area; and determining the adjusted track confidence coefficient of the motion track according to the adjusted position confidences.
In some embodiments of the present application, determining, according to the environment data, at least one motion track of each avoidance target from the current moment to the future moment and the track confidence coefficient of each motion track includes: inputting the environment data into a target neural network, and acquiring at least one motion track of each avoidance target from the current moment to the future moment and the track confidence coefficient of each motion track output by the target neural network, wherein the target neural network is trained on sample environment data comprising a plurality of types of avoidance targets, at least some of which differ in size.
A second aspect of the embodiments of the present application provides a track prediction apparatus, including: an acquisition unit, configured to acquire environment data of the environment in which a target object is located, wherein the environment comprises at least one avoidance target related to the target object; a determining unit, configured to determine, according to the environment data, at least one motion track of each avoidance target from the current moment to a future moment and a track confidence coefficient of each motion track; and an adjusting unit, configured to adjust the track confidence coefficient of each motion track according to map information of the current area to obtain an adjusted track confidence coefficient, wherein the current area comprises the areas through which the motion tracks pass, and the adjusted track confidence coefficient is used by the target object to avoid each avoidance target as an obstacle.
A third aspect of the embodiments of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the track prediction method described above when executing the computer program.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the track prediction method described above.
A fifth aspect of the embodiments of the present application provides a computer program product which, when run on a terminal device, causes the terminal device to perform the track prediction method according to the first aspect above.
In the embodiments of the present application, the environment data of the environment in which the target object is located is acquired; at least one motion track of each avoidance target in the environment from the current moment to the future moment, and the track confidence coefficient of each motion track, are determined according to the environment data; and the track confidence coefficient of each motion track is then adjusted according to the map information of the current area. The target object can avoid each avoidance target based on the adjusted track confidence coefficients, so that when avoiding obstacles it effectively refers to every possible motion track and its probability of occurrence, rather than directly deriving a unique travel track for each avoidance target from the map. This avoids traffic accidents caused by missing a motion track when an avoidance target moves abnormally, and can thus improve the safety of automatic driving.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic implementation flow chart of a track prediction method provided in an embodiment of the present application;
Fig. 2 is a schematic flow chart of a specific implementation of track prediction provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a track prediction apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a terminal device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the present application, not to limit it. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application without inventive effort shall fall within the protection scope of the present application.
In order to illustrate the technical solution of the present application, the following description is made by specific examples.
Fig. 1 shows a schematic implementation flow chart of a track prediction method provided in an embodiment of the present application. The method may be applied to a terminal device and is suitable for situations where the safety of automatic driving needs to be improved. The terminal device may be an electronic device such as a computer, a mobile phone, an autonomous vehicle, or a robot.
Specifically, the track prediction method may include the following steps S101 to S103.
Step S101, acquiring environment data of an environment where a target object is located.
In an embodiment of the present application, the environment in which the target object is located may include at least one avoidance target related to the target object. The target object may refer to a robot, an automobile, an unmanned vehicle, or other equipment with automatic driving capability. Avoidance targets, i.e., obstacles that a target object needs to avoid during travel, include, but are not limited to, pedestrians, vehicles, pets, robots, and the like.
To implement track prediction, the terminal device needs to acquire environment data of the environment in which the target object is located; the environment data may include environment images and/or environment point cloud data. In some embodiments, one or more sensors may be disposed on the target object, and the terminal device may acquire the environment data collected by these sensors. In other embodiments, if the target object travels in a preset area, the terminal device may acquire environment data collected by sensors disposed in that area; for example, if the target object travels in a closed park, the terminal device may acquire environment images captured by cameras in the park. In still other embodiments, the terminal device may obtain the environment data from motion information reported by the avoidance targets and the target object. Specifically, the avoidance targets and the target object may each be provided with a motion monitoring module for detecting their motion information; for example, a speed sensor and an acceleration sensor may respectively acquire the travel speed and travel acceleration of the avoidance targets and the target object, and the current position may be acquired through BeiDou or GPS positioning.
It should be noted that the environment data may include environment data of the current moment and of historical moments, where the current moment is the starting moment of track prediction and a historical moment is a sampling moment before the current moment.
Step S102, determining at least one motion track of each avoidance target from the current moment to the future moment and track confidence of each motion track according to the environmental data.
In the embodiments of the present application, by analyzing the environment data, the motion information of each avoidance target and the road information around its position can be obtained. The road information includes, but is not limited to, lane markings, rail locations, step locations, and the like. According to the motion information and the surrounding road information, the terminal device can perform track prediction for each avoidance target and determine at least one motion track of each avoidance target from the current moment to the future moment, together with the track confidence coefficient of each motion track. The track confidence coefficient is the probability that the avoidance target travels along the motion track.
And step S103, adjusting the track confidence coefficient of each motion track according to the map information of the current area to obtain the adjusted track confidence coefficient.
The current area comprises the areas through which the motion tracks pass, and the corresponding map information may record the passable area and the impassable area of the current area for the avoidance targets.
It will be appreciated that the passable and impassable areas differ for different types of avoidance targets; for example, for vehicles, the sidewalk belongs to the impassable area. Likewise, if the permitted direction of travel of a lane is opposite to the vehicle's direction of travel (i.e., the vehicle is driving against the direction of traffic), that lane also belongs to the impassable area. However, in some special scenarios, such as a closed park, vehicles do not always travel according to established rules: a vehicle may drive onto a sidewalk to park, or travel in the wrong direction. Conventional trajectory prediction typically determines only a single path within the passable area and copes poorly with such sudden behaviors. In the embodiments of the present application, every possible motion track is retained, and the track confidence coefficient of each motion track is adjusted according to the map information of the current area, i.e., the probability of each track is adjusted.
In the embodiments of the present application, the adjusted track confidence coefficients can be used by the target object for obstacle avoidance, so that the target object can avoid every avoidance target during automatic driving.
Specifically, based on the adjusted track confidence coefficients, the terminal device can rank all motion tracks of the same avoidance target to obtain a track ranking result for each avoidance target, and then make obstacle avoidance decisions using the track ranking results.
In some embodiments, the terminal device may refer to the track ranking result of each avoidance target and use each avoidance target's top N tracks by track confidence coefficient as references for obstacle avoidance. N is a positive integer greater than or equal to 1, and its value can be adjusted according to the total number of avoidance targets. If the total number of avoidance targets is less than a preset number, there are few traffic participants, and N may take a larger value so that more possible motion tracks are referenced. If the total number is greater than or equal to the preset number, there are many traffic participants, and N may take a smaller value to avoid overly low decision efficiency. In other embodiments, the terminal device may instead retain, for each avoidance target, at least one motion track whose track confidence coefficient is greater than or equal to a confidence threshold as an obstacle avoidance reference. Both the preset number and the confidence threshold can be adjusted according to actual conditions.
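As an illustrative sketch of the ranking strategy above (not code from the patent; names such as `select_reference_trajectories` and `preset_count` are assumptions), the top-N selection with an N that shrinks as the number of traffic participants grows might look like:

```python
def select_reference_trajectories(targets, preset_count=5, n_few=3, n_many=1):
    """Pick, per avoidance target, the top-N motion tracks by adjusted track
    confidence; N is larger when there are few traffic participants."""
    n = n_few if len(targets) < preset_count else n_many
    references = {}
    for target_id, tracks in targets.items():
        # rank this target's tracks by descending adjusted confidence
        ranked = sorted(tracks, key=lambda t: t["confidence"], reverse=True)
        references[target_id] = ranked[:n]
    return references
```

The alternative scheme described above would instead keep every track whose adjusted confidence clears a threshold, which avoids picking a fixed N.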
In the embodiments of the present application, the environment data of the environment in which the target object is located is acquired; at least one motion track of each avoidance target in the environment from the current moment to the future moment, and the track confidence coefficient of each motion track, are determined according to the environment data; and the track confidence coefficient of each motion track is then adjusted according to the map information of the current area. The target object can avoid each avoidance target based on the adjusted track confidence coefficients, so that when avoiding obstacles it effectively refers to every possible motion track and its probability of occurrence, rather than directly deriving a unique travel track for each avoidance target from the map. This avoids traffic accidents caused by missing a motion track when an avoidance target moves abnormally, and can thus improve the safety of automatic driving.
The following describes the track prediction process in detail.
Referring to fig. 2, in some embodiments of the present application, the terminal device may determine at least one motion trajectory through the following steps S201 to S204.
Step S201, determining positions to be matched, where avoidance targets appear at a plurality of motion moments within a preset time period, according to environmental data.
The preset duration is the time period in which the motion track occurs, i.e., the period starting at the current moment and ending at the future moment; its specific value can be selected according to actual conditions, for example 3 s or 5 s. The preset duration may include a plurality of motion moments, for example one motion moment every 1 s; by predicting the position at each motion moment within the preset duration, the motion track of the avoidance target within the preset duration can be obtained.
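A minimal illustration of discretizing the preset duration into motion moments; the 3 s horizon and 1 s step are simply the example values mentioned above:

```python
def motion_moments(duration_s=3.0, step_s=1.0):
    """Motion moments within the preset duration, measured from the current
    moment (t = 0 s) up to the future moment (t = duration_s)."""
    count = int(round(duration_s / step_s))
    return [round((i + 1) * step_s, 6) for i in range(count)]
```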
Specifically, the terminal device may use the environment information of the current moment and of historical moments as input to a neural network, whose output is the positions at which avoidance targets may appear at a plurality of motion moments within the preset duration (i.e., the positions to be matched) together with the reference type corresponding to each position to be matched. The numbers of positions to be matched at different moments may be equal or unequal, and all positions to be matched are retained, so they may include positions inside an avoidance target's impassable area.
Step S202, motion data of each avoidance target is acquired.
Step S203, matching the positions to be matched for each avoidance target according to the motion data, and obtaining the predicted positions of the avoidance targets at each motion moment.
In the embodiments of the present application, the positions to be matched are positions at which avoidance targets may appear; however, it is not yet clear to which avoidance target each position to be matched belongs. Therefore, the motion data of each avoidance target needs to be acquired, and the positions to be matched are then matched to each avoidance target according to the motion data.
The motion data may be data such as the speed and position at historical moments and at the current moment reported by the avoidance target, or may be motion data determined from the environment data.
In the embodiments of the present application, the terminal device may determine, based on the motion data, the position offset of each avoidance target at each motion moment relative to the previous moment. Specifically, the position offset between the position of each avoidance target at a motion moment and its position at the previous moment may be predicted by the neural network described in step S201. To simplify the prediction, the position offset may be taken as the offset between two positions in the bird's eye view (BEV); that is, only the x and y offsets of the two positions in a target coordinate system need to be predicted, and the z offset need not be. The target coordinate system may be a radar coordinate system or a camera coordinate system, whose z axis is generally perpendicular to the ground.
Accordingly, among the plurality of positions to be matched, the position whose offset from the first position at the first moment satisfies the position offset associated with the second moment may be determined as the predicted position at the second moment. The second moment is any motion moment within the preset duration, and the first moment is the motion moment immediately preceding the second moment. When the second moment is the current moment, the first moment is a historical moment and the first position is the historical position at that moment. When the second moment is a future moment, the first moment is the preceding motion moment within the preset duration and the first position is the predicted position at that preceding moment. That is, if the position to be matched (second position) at moment t+1 is A, the position offset is (x1, y1), and the displacement between the first position B of the avoidance target at moment t and the position to be matched A satisfies the position offset (x1, y1), then the two positions are considered matched; in other words, position A is the position reached at moment t+1 after the avoidance target was at the first position B at moment t. Then, starting from the historical moment, the positions to be matched at each motion moment within the preset duration can be matched in sequence, and each successfully matched position is a predicted position.
It should be noted that during matching, the same position to be matched may be contested, i.e., matched by different avoidance targets at the same moment; meanwhile, the same avoidance target may match multiple positions to be matched at the same moment.
Moreover, the neural network can output the reference type corresponding to each position to be matched at each motion moment, i.e., whether the avoidance target appearing at that position is a pedestrian, a bicycle, an automobile, or another type. To ensure the reliability of the motion tracks, the terminal device may take as the predicted position at the second moment only a position to be matched whose distance from the first position satisfies the position offset associated with the second moment and whose reference type is the same as the type of the avoidance target.
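The matching rule described above, where a candidate at moment t+1 matches when its BEV (x, y) displacement from the position at moment t agrees with the predicted offset and its reference type agrees with the avoidance target's type, can be sketched as follows. The tolerance `tol` is an assumption, since the text only says the displacement must satisfy the offset:

```python
def match_positions(first_pos, target_type, candidates, offset, tol=0.5):
    """Return the candidate positions at moment t+1 that match the avoidance
    target located at first_pos at moment t.

    candidates: list of ((x, y), reference_type) pairs output by the network.
    offset:     predicted BEV position offset (dx, dy) for this target.
    """
    dx, dy = offset
    matched = []
    for (x, y), ref_type in candidates:
        if ref_type != target_type:  # reference type must agree with the target's type
            continue
        if abs((x - first_pos[0]) - dx) <= tol and abs((y - first_pos[1]) - dy) <= tol:
            matched.append((x, y))   # one target may match several candidates
    return matched
```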
Step S204, connecting the predicted positions of each avoidance target according to the time sequence to obtain at least one motion track within a preset duration.
Through this position matching, the terminal device obtains the predicted positions at which each avoidance target may appear at each motion moment; connecting the successively matched predicted positions in time order then yields the motion tracks of each avoidance target within the preset duration. It should be understood that, since there may be one or more predicted positions at each motion moment, for a given avoidance target the connected motion track is unique if there is exactly one predicted position at every motion moment; if there are multiple predicted positions at any motion moment, multiple motion tracks may be obtained.
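Connecting successively matched predicted positions in time order, with a branch wherever a moment has several matched positions, might be sketched as follows; the `successors` map encoding which positions were matched from which is an illustrative assumption:

```python
def build_motion_tracks(start, successors, n_moments):
    """Expand every chain of matched predicted positions into a candidate
    motion track; branching at any motion moment yields multiple tracks."""
    tracks = [[start]]
    for _ in range(n_moments):
        expanded = []
        for track in tracks:
            nxts = successors.get(track[-1], [])
            if not nxts:
                expanded.append(track)  # keep a branch with no further match
            else:
                for nxt in nxts:
                    expanded.append(track + [nxt])
        tracks = expanded
    return tracks
```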
Correspondingly, for the track confidence coefficient, the terminal equipment can determine the position confidence coefficient of the avoidance target on each predicted position, and determine the track confidence coefficient of the corresponding motion track according to the position confidence coefficient of the predicted position on the same motion track.
For example, if motion track C is formed by connecting predicted positions D, E, and F in sequence, the track confidence coefficient of motion track C may be determined from the position confidences of D, E, and F, for example by summing them or by a weighted sum. The position confidence of each predicted position may be a preset confidence value or may be output by the neural network, which can assign a position confidence to each predicted position with reference to the environment information.
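The combination of position confidences into a track confidence coefficient, by plain addition or a weighted sum as in the example above, reduces to:

```python
def track_confidence(position_confidences, weights=None):
    """Track confidence coefficient of one motion track from the position
    confidences of its predicted positions (summed, optionally weighted)."""
    if weights is None:
        return sum(position_confidences)
    return sum(w * c for w, c in zip(weights, position_confidences))
```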
Correspondingly, the terminal device can adjust the position confidence of any predicted position located in the impassable area, and then determine the adjusted track confidence coefficient of the corresponding motion track from the adjusted position confidences. For example, a preset adjustment parameter p may be obtained; for each predicted position located in the impassable area, p is subtracted from its position confidence, and the track confidence coefficient of every motion track containing that predicted position is recalculated. Here 0 < p < 1, and the specific value can be adjusted according to actual conditions.
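The adjustment step, subtracting a preset parameter p from the confidence of every predicted position inside the impassable area and recomputing the track confidence coefficient, could look like this hedged sketch; clamping at zero and the `is_impassable` lookup standing in for the map information are added assumptions:

```python
def adjust_track_confidence(positions, confidences, is_impassable, p=0.3):
    """Penalise predicted positions inside the impassable area by p (0 < p < 1)
    and return both the adjusted position confidences and the re-summed
    track confidence coefficient."""
    adjusted = [max(c - p, 0.0) if is_impassable(pos) else c
                for pos, c in zip(positions, confidences)]
    return adjusted, sum(adjusted)
```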
In this way, the predicted motion tracks are optimized using the map information: motion tracks of abnormally driven vehicles can be retained together with corresponding track confidence coefficients, so that seemingly impossible predicted motion tracks (such as abnormal driving behaviors like leaving the lane or reversing) are still available to the subsequent decision-making module, and traffic accidents are avoided.
In practical applications, considering the diversity of avoidance target types, the environmental data could be input into a separate neural network for each type of avoidance target, i.e. with the neural networks in one-to-one correspondence with the avoidance target types; however, the detection efficiency of this scheme is low. Therefore, in some embodiments of the present application, the environmental data may be input into a single target neural network, from whose output at least one motion track of each avoidance target from the current moment to the future moment, together with the track confidence coefficient of each motion track, is obtained. The target neural network is trained on sample environmental data containing a plurality of types of avoidance targets, at least some of which differ in size. That is, the target neural network can detect avoidance targets of different types and sizes at the same time, thereby improving detection efficiency.
Specifically, the target neural network may adopt a Bi-FPN (bidirectional feature pyramid network) structure. The Bi-FPN structure fuses features carrying high semantic information with those carrying low semantic information, so that the target neural network takes both into account at inference time and outputs a reliable result. Specifically, after the high-semantic features are extracted, they are connected to the low-semantic features through deconvolution, so that more detail information is retained and the miss rate for small targets such as pedestrians or bicycles is reduced.
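The high/low-semantic fusion can be illustrated with a toy NumPy sketch; nearest-neighbour upsampling stands in for the deconvolution step, and the shapes and values are illustrative only:

```python
import numpy as np

def upsample2x(feat):
    """Nearest-neighbour 2x upsampling, standing in for the deconvolution
    that restores spatial resolution of the high-semantic features."""
    return feat.repeat(2, axis=0).repeat(2, axis=1)

def fuse(low_feat, high_feat):
    """Fuse a low-semantic (high-resolution) map with a high-semantic
    (low-resolution) map: upsample the deep feature and add it to the
    shallow one, so detail useful for small targets such as pedestrians
    or bicycles is retained."""
    return low_feat + upsample2x(high_feat)

low = np.ones((8, 8))        # shallow, detailed feature map
high = np.full((4, 4), 2.0)  # deep, semantic feature map
fused = fuse(low, high)      # same resolution as the shallow map
```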
Since a common perception pipeline performs target detection, target tracking and track prediction in sequence, the track prediction result is computed on top of the detection and tracking results; if the target detector and the target tracker do not perform adequately, the performance of the track prediction module is seriously affected. By fusing high and low semantic information, the predicted positions for the current moment and the future moments can be output in parallel, so that the obstacle track prediction task is not disturbed by the upstream tasks; at the same time, the target neural network can capture more of the raw environmental data, so that the motion tracks can be predicted more accurately.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of combined actions; however, those skilled in the art should understand that the present application is not limited by the order of actions described, since some steps may be performed in another order according to the present application.
Fig. 3 is a schematic structural diagram of a track prediction apparatus 300 according to an embodiment of the present application, where the track prediction apparatus 300 is configured on a terminal device.
Specifically, the trajectory prediction apparatus 300 may include:
an obtaining unit 301, configured to obtain environmental data of an environment where a target object is located, where the environment includes at least one avoidance target related to the target object;
a determining unit 302, configured to determine, according to the environmental data, at least one motion track of each avoidance target from a current time to a future time, and a track confidence coefficient of each motion track;
the adjusting unit 303 is configured to adjust the track confidence coefficient of each motion track according to map information of a current area, so as to obtain an adjusted track confidence coefficient, where the current area includes an area through which each motion track passes, and the adjusted track confidence coefficient is used for the target object to avoid an obstacle for each avoidance target.
In some embodiments of the present application, the determining unit 302 may specifically be configured to: determining positions to be matched of the avoidance targets at a plurality of movement moments within a preset time length according to the environmental data, wherein the preset time length is a time period from the current moment to the future moment; acquiring motion data of each avoidance target; according to the motion data, matching the positions to be matched for each avoidance target to obtain predicted positions of the avoidance targets at each motion moment, wherein one or more predicted positions of each motion moment are obtained; and connecting the predicted positions of each avoidance target according to a time sequence to obtain at least one motion track within a preset duration.
In some embodiments of the present application, the determining unit 302 may specifically be configured to: determine, according to the motion data, the position offset of each avoidance target at each movement moment relative to the previous movement moment; and determine, among the positions to be matched, a position whose offset from the first position at the first moment satisfies the position offset related to the second moment as the predicted position at the second moment, where the second moment is any movement moment within the preset duration, and the first moment is the movement moment previous to the second moment.
In some embodiments of the present application, the determining unit 302 may specifically be configured to: determining a plurality of movement moments within a preset time period according to the movement data, and generating a position to be matched of the avoidance target at each movement moment and a reference type of the avoidance target at each position to be matched; and taking the position to be matched, of which the distance from the first position meets the position offset related to the second moment and the reference type is the same as the type of the avoidance target, as a predicted position of the second moment.
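A minimal sketch of this offset-and-type matching (the candidate format, tolerance and type labels are assumptions for illustration, not part of the claimed method):

```python
import math

def match_predicted_positions(first_pos, candidates, expected_offset,
                              target_type, tol=0.5):
    """Select predicted positions for the second moment.

    A candidate qualifies when its displacement from the first-moment
    position matches the expected offset (within tol) and its reference
    type equals the avoidance target's type.
    """
    matches = []
    for (x, y), ref_type in candidates:
        dx, dy = x - first_pos[0], y - first_pos[1]
        dist = math.hypot(dx - expected_offset[0], dy - expected_offset[1])
        if dist <= tol and ref_type == target_type:
            matches.append((x, y))
    return matches

cands = [((1.0, 0.0), "vehicle"),
         ((1.1, 0.1), "pedestrian"),   # right offset, wrong type
         ((3.0, 0.0), "vehicle")]      # right type, wrong offset
preds = match_predicted_positions((0.0, 0.0), cands, (1.0, 0.0), "vehicle")
```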
In some embodiments of the present application, the determining unit 302 may specifically be configured to: determining the position confidence of the avoidance target at each predicted position; and determining the track confidence coefficient corresponding to the motion track according to the position confidence coefficient of the predicted position on the same motion track.
In some embodiments of the present application, the map information records a non-passable area of the avoidance target; the adjusting unit 303 may specifically be configured to: adjust the position confidence of a predicted position located within the non-passable area; and determine the adjusted track confidence coefficient of the corresponding motion track according to the adjusted position confidence.
In some embodiments of the present application, the determining unit 302 may specifically be configured to: inputting the environment data into a target neural network, and acquiring at least one motion track of each avoidance target output by the target neural network from the current moment to the future moment and track confidence coefficient of each motion track, wherein the target neural network is trained by sample environment data comprising a plurality of types of the avoidance targets, and at least part of types of the avoidance targets are different in size.
It should be noted that, for convenience and brevity, the specific working process of the track prediction device 300 may refer to the corresponding process of the method described in fig. 1 to 2, and will not be described herein again.
Fig. 4 is a schematic diagram of a terminal device according to an embodiment of the present application. The terminal device 4 may include: a processor 40, a memory 41 and a computer program 42, such as a trajectory prediction program, stored in the memory 41 and executable on the processor 40. The steps of the respective trajectory prediction method embodiments described above, such as steps S101 to S103 shown in fig. 1, are implemented when the processor 40 executes the computer program 42. Alternatively, the processor 40 may implement the functions of the modules/units in the above-described device embodiments when executing the computer program 42, for example, the obtaining unit 301, the determining unit 302, and the adjusting unit 303 shown in fig. 3.
The computer program may be divided into one or more modules/units, which are stored in the memory 41 and executed by the processor 40 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing the specified functions, which instruction segments are used for describing the execution of the computer program in the terminal device.
For example, the computer program may be split into: the device comprises an acquisition unit, a determination unit and an adjustment unit.
The specific functions of each unit are as follows: the device comprises an acquisition unit, a control unit and a control unit, wherein the acquisition unit is used for acquiring environment data of an environment where a target object is located, and the environment comprises at least one avoidance target related to the target object; the determining unit is used for determining at least one motion track of each avoidance target from the current moment to the future moment and track confidence coefficient of each motion track according to the environment data; the adjusting unit is used for adjusting the track confidence coefficient of each motion track according to the map information of the current area to obtain an adjusted track confidence coefficient, the current area comprises an area through which each motion track passes, and the adjusted track confidence coefficient is used for the target object to avoid the obstacle for each avoidance target.
The terminal device may include, but is not limited to, a processor 40, a memory 41. It will be appreciated by those skilled in the art that fig. 4 is merely an example of a terminal device and is not meant to be limiting, and that more or fewer components than shown may be included, or certain components may be combined, or different components may be included, for example, the terminal device may also include input and output devices, network access devices, buses, etc.
The processor 40 may be a central processing unit (Central Processing Unit, CPU), another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 41 may be an internal storage unit of the terminal device, such as a hard disk or a memory of the terminal device. The memory 41 may also be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal device. Further, the memory 41 may also include both an internal storage unit and an external storage device of the terminal device. The memory 41 is used for storing the computer program as well as other programs and data required by the terminal device. The memory 41 may also be used for temporarily storing data that has been output or is to be output.
It should be noted that, for convenience and brevity of description, the structure of the above terminal device may also refer to a specific description of the structure in the method embodiment, which is not repeated herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts not detailed or illustrated in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on this understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing the relevant hardware through a computer program, which may be stored in a computer readable storage medium; when executed by a processor, the computer program implements the steps of each method embodiment described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the computer readable medium may be added to or removed from as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. A track prediction method, comprising:
acquiring environment data of an environment in which a target object is located, wherein the environment comprises at least one avoidance target related to the target object;
determining at least one motion track of each avoidance target from the current moment to the future moment and track confidence coefficient of each motion track according to the environment data;
and adjusting the track confidence coefficient of each motion track according to the map information of the current area to obtain an adjusted track confidence coefficient, wherein the current area comprises an area through which each motion track passes, and the adjusted track confidence coefficient is used for the target object to avoid the obstacle for each avoidance target.
2. The trajectory prediction method according to claim 1, wherein the determining at least one motion trajectory of each avoidance target from the current time to the future time according to the environmental data includes:
determining positions to be matched of the avoidance targets at a plurality of movement moments within a preset time length according to the environmental data, wherein the preset time length is a time period from the current moment to the future moment;
acquiring motion data of each avoidance target;
according to the motion data, matching the positions to be matched for each avoidance target to obtain predicted positions of the avoidance targets at each motion moment, wherein one or more predicted positions of each motion moment are obtained;
and connecting the predicted positions of each avoidance target according to a time sequence to obtain at least one motion track within the preset duration.
3. The track prediction method according to claim 2, wherein the matching the positions to be matched for each avoidance target according to the motion data, to obtain the predicted positions of the avoidance targets at each motion time, includes:
determining, according to the motion data, the position offset of each avoidance target at each motion moment relative to the previous motion moment;
and determining, among the plurality of positions to be matched, a position whose offset from the first position at the first moment satisfies the position offset related to the second moment as the predicted position at the second moment, wherein the second moment is any one motion moment within the preset duration, and the first moment is the motion moment previous to the second moment.
4. The track prediction method according to claim 3, wherein the determining, according to the environmental data, positions to be matched where the avoidance target appears at a plurality of motion moments within a preset duration includes:
determining a plurality of movement moments in the preset time length according to the movement data, and generating a position to be matched of the avoidance target at each movement moment and a reference type of the avoidance target at each position to be matched;
the determining, among the plurality of positions to be matched, a position where an offset amount between the first positions at the first time satisfies a position offset amount related to the second time as a predicted position at the second time includes:
taking a position to be matched, whose offset from the first position satisfies the position offset related to the second moment and whose reference type is the same as the type of the avoidance target, as a predicted position at the second moment.
5. The trajectory prediction method of claim 1, wherein determining a trajectory confidence level for each of the motion trajectories comprises:
determining the position confidence of the avoidance target at each predicted position;
and determining the track confidence coefficient corresponding to the motion track according to the position confidence coefficient of the predicted position on the same motion track.
6. The trajectory prediction method according to claim 5, wherein the map information records a non-passable area of the avoidance target;
the track confidence coefficient of each motion track is adjusted according to the map information of the current area, and the adjusted track confidence coefficient is obtained, which comprises the following steps:
adjusting a position confidence of the predicted position located within the non-passable region;
and determining the adjusted track confidence coefficient corresponding to the motion track according to the adjusted position confidence coefficient.
7. The trajectory prediction method according to any one of claims 1 to 6, wherein determining at least one motion trajectory of each avoidance target from a current time to a future time, and a trajectory confidence of each of the motion trajectories, based on the environmental data, comprises:
inputting the environment data into a target neural network, and acquiring at least one motion track of each avoidance target from the current moment to the future moment output by the target neural network and a track confidence coefficient of each motion track, wherein the target neural network is trained on sample environment data comprising a plurality of types of the avoidance targets, and at least some types of the avoidance targets differ in size.
8. A trajectory prediction device, comprising:
the device comprises an acquisition unit, a control unit and a control unit, wherein the acquisition unit is used for acquiring environment data of an environment where a target object is located, and the environment comprises at least one avoidance target related to the target object;
the determining unit is used for determining at least one motion track of each avoidance target from the current moment to the future moment and track confidence coefficient of each motion track according to the environment data;
the adjusting unit is used for adjusting the track confidence coefficient of each motion track according to the map information of the current area to obtain an adjusted track confidence coefficient, the current area comprises an area through which each motion track passes, and the adjusted track confidence coefficient is used for the target object to avoid the obstacle for each avoidance target.
9. Terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the trajectory prediction method according to any one of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the trajectory prediction method according to any one of claims 1 to 7.
CN202211093943.4A 2022-09-08 2022-09-08 Track prediction method, track prediction device, terminal equipment and readable storage medium Pending CN116300848A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211093943.4A CN116300848A (en) 2022-09-08 2022-09-08 Track prediction method, track prediction device, terminal equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN116300848A true CN116300848A (en) 2023-06-23

Family

ID=86791107

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211093943.4A Pending CN116300848A (en) 2022-09-08 2022-09-08 Track prediction method, track prediction device, terminal equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN116300848A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination