CN116001807B - Multi-scene track prediction method, equipment, medium and vehicle - Google Patents


Info

Publication number
CN116001807B
CN116001807B (application number CN202310165278.3A)
Authority
CN
China
Prior art keywords
track
target
scene
information
predicted
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310165278.3A
Other languages
Chinese (zh)
Other versions
CN116001807A (en)
Inventor
秦海波
彭琦翔
李传康
吴冰
姚卯青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Weilai Zhijia Technology Co Ltd
Original Assignee
Anhui Weilai Zhijia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Weilai Zhijia Technology Co Ltd filed Critical Anhui Weilai Zhijia Technology Co Ltd
Priority to CN202310165278.3A
Publication of CN116001807A
Application granted
Publication of CN116001807B
Legal status: Active

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 — Road transport of goods or passengers
    • Y02T 10/10 — Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 — Engine management systems

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to the technical field of automatic driving, and in particular to a multi-scene track prediction method, equipment, medium and vehicle, aiming to solve the problem that existing track prediction methods have low prediction accuracy. To this end, the multi-scene track prediction method of the present invention includes: acquiring scene information of a scene to be focused, the scene information including at least a scene type, scene annotation information, a track ID of a first target and first time information; inputting the scene annotation information into a track prediction model to obtain a predicted track of at least one second target in the scene annotation information; determining an evaluation index of the track prediction model based on the track ID of the first target, the first time information and the predicted track of the at least one second target; and, when the evaluation index meets a first preset condition, performing track prediction on the first target using the track prediction model. In this way, a predicted track with higher accuracy is obtained, and the safety and stability of the vehicle are ensured.

Description

Multi-scene track prediction method, equipment, medium and vehicle
Technical Field
The invention relates to the technical field of automatic driving, and particularly provides a multi-scene track prediction method, equipment, a medium and a vehicle.
Background
With the increasing intelligence of automobiles, the automotive industry is undergoing a technological transformation, and unmanned intelligent vehicles have become a major focus of research.
Vehicle trajectory prediction is an important part of automatic driving and a precondition for automatic driving planning and decision-making. In some technical routes for vehicle trajectories, deep learning methods are mostly adopted to predict the future driving trajectory of a vehicle. However, such methods depend heavily on the model: when model accuracy is poor, the trajectory of the target vehicle cannot be predicted accurately and the safety performance of the vehicle cannot be ensured.
Accordingly, there is a need in the art for a new multi-scene track prediction scheme to solve the above-described problems.
Disclosure of Invention
The present invention has been made to overcome the above drawbacks and to provide a solution that solves, or at least partially solves, the above technical problems. The invention provides a multi-scene track prediction method, equipment, a medium and a vehicle.
In a first aspect, the present invention provides a multi-scene track prediction method, the method comprising: acquiring scene information of a scene to be focused, wherein the scene information at least comprises scene type, scene annotation information, track ID of a first target and first time information; inputting the scene annotation information into a track prediction model to obtain a predicted track of at least one second target in the scene annotation information; determining an evaluation index based on the track ID of the first target, the first time information and the predicted track of the at least one second target; and under the condition that the evaluation index meets a first preset condition, carrying out track prediction on the first target by utilizing the track prediction model.
In one embodiment, the determining the evaluation index based on the track ID of the first target, the first time information, and the predicted track of the at least one second target includes: acquiring a predicted track of the first target from the predicted tracks of the at least one second target based on the track ID of the first target and the first time information; determining a track type of the first target based on the predicted track of the first target; the evaluation index is determined based on the track type of the first target.
In one embodiment, the obtaining the predicted track of the first target from the predicted tracks of the at least one second target based on the track ID of the first target and the first time information includes: acquiring annotation frame information of the first target based on the track ID of the first target and the first time information; acquiring detection frame information of the at least one second target; and matching the marking frame information of the first target with the detection frame information of the at least one second target respectively, and acquiring a predicted track of the first target based on a matching result.
In one embodiment, matching the annotation frame information of the first target with the detection frame information of the at least one second target, and obtaining the predicted track of the first target based on the matching result, includes: respectively calculating the intersection-over-union (IoU) between the annotation frame information of the first target and the detection frame information of the at least one second target; judging whether the maximum IoU meets a second preset condition; and if so, taking the predicted track corresponding to the maximum IoU as the predicted track of the first target.
In one embodiment, the determining the track type of the first target based on the predicted track of the first target includes: based on the scene type, acquiring a corresponding track classification rule; a track type of the first object is determined based on the track classification rule and a predicted track of the first object.
In one embodiment, the evaluation index comprises a recall; the determining the evaluation index based on the track type of the first target includes: judging whether the track type of the first target is the same as the scene type; if so, incrementing the true positive count by one; if not, incrementing the false negative count by one; and determining the recall based on the true positive and false negative counts.
In one embodiment, the evaluation index includes delay information; the determining the evaluation index based on the track type of the first target includes: determining second time information when the track type of the first target is the same as the scene type; the delay information is determined based on the first time information and the second time information.
In a second aspect, an electronic device is provided, comprising at least one processor and at least one storage device adapted to store a plurality of program code adapted to be loaded and executed by the processor to perform the multi-scene trajectory prediction method of any of the preceding claims.
In a third aspect, there is provided a computer readable storage medium having stored therein a plurality of program codes adapted to be loaded and executed by a processor to perform the multi-scene track prediction method of any of the preceding claims.
In a fourth aspect, a vehicle is provided, the vehicle comprising the aforementioned electronic device.
The technical scheme provided by the invention has at least one or more of the following beneficial effects:
according to the multi-scene track prediction method, scene annotation information of a scene of interest to the user, the track ID of a first target and first time information are obtained; the scene annotation information is input into a track prediction model, which outputs a predicted track of at least one second target; an evaluation index is determined based on the track ID of the first target, the first time information and the predicted track of the at least one second target; and, when the evaluation index meets the first preset condition, track prediction is performed on the first target using the track prediction model. In this way, a predicted track with higher accuracy is obtained, and the safety and stability of the vehicle are ensured.
Drawings
The present disclosure will become more readily understood with reference to the accompanying drawings. As will be readily appreciated by those skilled in the art: the drawings are for illustrative purposes only and are not intended to limit the scope of the present invention. Moreover, like numerals in the figures are used to designate like parts, wherein:
FIG. 1 is a flow chart illustrating the main steps of a multi-scene trajectory prediction method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of determining a track type in one embodiment;
FIG. 3 is a schematic diagram of determining an evaluation index based on a track type in one embodiment;
FIG. 4 is a complete flow diagram of a multi-scene track prediction method in one embodiment;
fig. 5 is a schematic diagram of the structure of an electronic device in one embodiment.
Detailed Description
Some embodiments of the invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present invention, and are not intended to limit the scope of the present invention.
In the description of the present invention, a "module," "processor" may include hardware, software, or a combination of both. A module may comprise hardware circuitry, various suitable sensors, communication ports, memory, or software components, such as program code, or a combination of software and hardware. The processor may be a central processor, a microprocessor, an image processor, a digital signal processor, or any other suitable processor. The processor has data and/or signal processing functions. The processor may be implemented in software, hardware, or a combination of both. Non-transitory computer readable storage media include any suitable medium that can store program code, such as magnetic disks, hard disks, optical disks, flash memory, read-only memory, random access memory, and the like. The term "a and/or B" means all possible combinations of a and B, such as a alone, B alone or a and B. The term "at least one A or B" or "at least one of A and B" has a meaning similar to "A and/or B" and may include A alone, B alone or A and B. The singular forms "a", "an" and "the" include plural referents.
In some technical routes for vehicle trajectories, deep learning methods are mostly adopted to predict the future driving trajectory of a vehicle. However, such methods depend heavily on the model: when model accuracy is poor, the trajectory of the target vehicle cannot be predicted accurately and the safety performance of the vehicle cannot be ensured.
Therefore, the present application provides a multi-scene track prediction method, equipment, medium and vehicle: scene annotation information of a scene of interest to the user, the track ID of a first target and first time information are obtained; the scene annotation information is input into a track prediction model, which outputs a predicted track of at least one second target; an evaluation index is determined based on the track ID of the first target, the first time information and the predicted track of the at least one second target; and, when the evaluation index meets the first preset condition, track prediction is performed on the first target using the track prediction model. In this way, a predicted track with higher accuracy is obtained, and the safety and stability of the vehicle are ensured. At the same time, the time cost of manually evaluating the model is reduced, the trajectory prediction capability of the model can be evaluated on the scenes the user cares about, the evaluation efficiency of the model is improved, and model update iteration is accelerated.
Referring to fig. 1, fig. 1 is a schematic flow chart of main steps of a multi-scene track prediction method according to an embodiment of the present invention.
As shown in fig. 1, the multi-scene track prediction method in the embodiment of the invention mainly includes the following steps S101 to S104.
Step S101: scene information of a scene to be focused is obtained, wherein the scene information at least comprises scene type, scene annotation information, track ID of a first target and first time information.
Step S102: and inputting the scene annotation information into a track prediction model to obtain a predicted track of at least one second target in the scene annotation information.
Step S103: an evaluation index is determined based on the track ID of the first target, the first time information and the predicted track of the at least one second target.
Step S104: and under the condition that the evaluation index meets a first preset condition, carrying out track prediction on the first target by utilizing the track prediction model.
Based on the steps S101 to S104, scene annotation information of a scene of interest to the user, the track ID of a first target and first time information are obtained; the scene annotation information is input into a track prediction model, which outputs a predicted track of at least one second target; an evaluation index is determined based on the track ID of the first target, the first time information and the predicted track of the at least one second target; and, when the evaluation index meets the first preset condition, track prediction is performed on the first target using the track prediction model. In this way, a predicted track with higher accuracy is obtained, and the safety and stability of the vehicle are ensured. At the same time, the time cost of manually evaluating the model is reduced, the trajectory prediction capability of the model can be evaluated on the scenes the user cares about, the evaluation efficiency of the model is improved, and model update iteration is accelerated.
The following further describes the above steps S101 to S104, respectively.
In the step S101, the user may specifically screen the scene information of the scene of interest from the database.
The scene to be focused is a scene focused by the user.
The scene type may be a scene type of a scene of interest to the user, such as a cut-in, turn around, etc.
The scene annotation information comprises an image acquired by a vehicle-mounted sensor and annotation frame information marked manually, wherein the annotation frame information can be an area frame where each target in the image is located, a type of the target (such as a pedestrian, a vehicle or an obstacle), a track ID of the target and the like.
The first object is a key object of interest to the user, whose track ID (track ID) is recorded.
The first time information refers to a start time and an end time of a scene of interest to the user.
The above is a further explanation of step S101, and the following further explanation of step S102 is continued.
The scene annotation information is data of a scene of interest to the user and includes a plurality of second targets; for example, the scene of interest may contain multiple vehicles and multiple pedestrians. The first target may be any one of these second targets that the user is interested in.
In the step S102, the scene annotation information is specifically input into the track prediction model, so as to obtain a detection frame and a predicted track of at least one second target in the scene annotation information, where the detection frame and the predicted track of each second target are in one-to-one correspondence.
The trajectory prediction model may support multi-modal outputs and trajectory prediction over different time horizons. In one embodiment, an end-to-end trajectory prediction model or a long-term trajectory prediction model may be used; the long-term trajectory prediction model may be, for example, a MultiPath++ network or a Wayformer network.
The above is a further explanation of step S102, and the following further explanation of step S103 is continued.
Specifically, the above step S103 may be realized by the following steps S1031 to S1033.
Step S1031: and acquiring a predicted track of the first target from the predicted tracks of the at least one second target based on the track ID of the first target and the first time information.
In one specific embodiment, the obtaining the predicted track of the first target from the predicted tracks of the at least one second target based on the track ID of the first target and the first time information includes: acquiring annotation frame information of the first target based on the track ID of the first target and the first time information; acquiring detection frame information of the at least one second target; and matching the marking frame information of the first target with the detection frame information of the at least one second target respectively, and acquiring a predicted track of the first target based on a matching result.
The annotation frame information of the first target can be obtained from the scene annotation information according to the track ID of the first target and the first time information.
The detection frame information of the at least one second object is the detection frame information of the at least one second object output in step S102.
Specifically, the annotation frame of the first target is matched one by one against each of the at least one second target, and the predicted track of the first target is obtained according to the matching result. Distance-based matching, intersection-over-union (IoU) matching and the like may be adopted; in a preferred embodiment, IoU matching is used.
In a specific embodiment, the matching the annotation frame information of the first target with the detection frame information of the at least one second target respectively, and obtaining the predicted track of the first target based on the matching result includes: respectively calculating the intersection-over-union (IoU) between the annotation frame information of the first target and the detection frame information of the at least one second target; judging whether the maximum IoU meets a second preset condition; and if so, taking the predicted track corresponding to the maximum IoU as the predicted track of the first target.
The second preset condition may be that the maximum IoU is greater than a preset threshold, which may be obtained through experiments in advance.
Specifically, the IoU between the annotation frame of the first target and the detection frame of each of the at least one second target is calculated, the maximum IoU is selected from all IoU values, and it is judged whether the maximum IoU is greater than the preset threshold. If the maximum IoU is smaller than the preset threshold, the frame data is discarded; otherwise, the detection frame information and predicted track corresponding to the maximum IoU are obtained, and that predicted track is taken as the predicted track of the first target.
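The matching step above can be sketched in Python as follows. This is a minimal illustration, not the patent's implementation: the box format `(x1, y1, x2, y2)`, the function names, and the 0.5 threshold are all assumptions.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def match_first_target(annot_box, detections, iou_threshold=0.5):
    """Return the predicted trajectory whose detection box best overlaps the
    first target's annotation box, or None when the best IoU fails the
    threshold (the frame is discarded, as in the description).
    `detections` is a list of (detection_box, predicted_trajectory) pairs."""
    best_iou, best_traj = 0.0, None
    for det_box, traj in detections:
        score = iou(annot_box, det_box)
        if score > best_iou:
            best_iou, best_traj = score, traj
    return best_traj if best_iou > iou_threshold else None
```

In this sketch a frame is discarded (returning `None`) exactly when the maximum IoU fails the second preset condition, mirroring the discard rule in the description.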
Step S1032: a track type of the first target is determined based on the predicted track of the first target.
In one embodiment, as shown in fig. 2, the predicted track of the first target is transformed in a coordinate system to obtain the predicted track in the coordinate system centered on the first target, then a track classification rule is determined based on the scene type, and then the track type of the predicted track is obtained according to the track classification rule.
The predicted track of the first target is initially expressed in the vehicle (ego) coordinate system. A coordinate transformation matrix from the vehicle coordinate system to a coordinate system centered on the first target is determined from the vehicle localization result, and the track information of the first target is transformed into the first-target-centered coordinate system according to this matrix, which facilitates track classification.
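The transformation into a first-target-centered coordinate system can be sketched as a 2D rigid transform; the function name and point layout are illustrative assumptions, not part of the patent.

```python
import numpy as np


def to_target_frame(track_xy, target_pos, target_heading):
    """Express trajectory points, given in the ego-vehicle frame, in a frame
    centered on the first target and aligned with its heading (radians).
    track_xy is an (N, 2) array-like of (x, y) points."""
    c, s = np.cos(target_heading), np.sin(target_heading)
    rot = np.array([[c, s],
                    [-s, c]])  # rotation by -heading
    shifted = np.asarray(track_xy, dtype=float) - np.asarray(target_pos, dtype=float)
    return shifted @ rot.T
```

With this convention, a point lying directly ahead of the target along its heading maps onto the positive x-axis of the target-centered frame.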
In one specific embodiment, the determining the track type of the first target based on the predicted track of the first target includes: based on the scene type, acquiring a corresponding track classification rule; a track type of the first object is determined based on the track classification rule and a predicted track of the first object.
The scene type may be a scene type of a scene of interest to the user, such as a cut-in, turn around, etc.
The determination of the track type may be implemented by rules. For example, after determining the scene type, the track classification rule corresponding to the scene type can be further acquired.
Illustratively, taking cut-in as an example of the scene type, the trajectory classification rule acquired for the cut-in scene type may be: determining, based on the predicted track, whether the change in the heading angle of the first target is greater than a preset value, whether the distance between the position of the first target and the position of the ego vehicle is smaller than a distance threshold, and the like.
After the track classification rule is obtained, the track type corresponding to the predicted track of the first target can be further judged.
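A cut-in rule of the kind just described could be sketched as below; the threshold values, function signature, and the use of a heading sequence are hypothetical placeholders for illustration, not values taken from the description.

```python
import math


def classify_cut_in(pred_track, headings, ego_pos,
                    heading_change_thresh=math.radians(10),
                    distance_thresh=30.0):
    """Illustrative cut-in rule: label the track 'cut-in' when the target's
    heading change over the predicted horizon exceeds a threshold AND the
    target comes within a distance threshold of the ego vehicle.
    pred_track: list of (x, y) points; headings: heading angles in radians."""
    heading_change = abs(headings[-1] - headings[0])
    min_dist = min(math.dist(p, ego_pos) for p in pred_track)
    if heading_change > heading_change_thresh and min_dist < distance_thresh:
        return "cut-in"
    return "other"
```

Other scene types (for example, U-turns) would pair analogous rules with their own thresholds, selected by the scene type as in the embodiment above.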
The trajectory type determination may also be implemented by machine learning, for example with a support vector machine (SVM): the predicted track is input into a trained SVM, which outputs the track type of the predicted track.
Step S1033: the evaluation index is determined based on the track type of the first target.
The evaluation index may include accuracy, precision, recall, F1 score and the like, where the F1 score is the harmonic mean of precision and recall.
In a preferred embodiment, at least one of recall and delay information may be used to evaluate the performance of the trajectory prediction model.
In one embodiment, the evaluation index comprises a recall; the determining the evaluation index based on the track type of the first target includes: judging whether the track type of the first target is the same as the scene type; if so, incrementing the true positive count by one; if not, incrementing the false negative count by one; and determining the recall based on the true positive and false negative counts.
In one embodiment, the evaluation index includes delay information; the determining the evaluation index based on the track type of the first target includes: determining second time information when the track type of the first target is the same as the scene type; the delay information is determined based on the first time information and the second time information.
A true positive (TP) is a positive instance correctly judged to be positive.
A false negative (FN) is a positive instance wrongly judged to be negative, i.e., a missed detection.
For the recall, it is first determined whether the track type of the first target obtained in step S1032 is the same as the scene type of interest to the user in step S101; if so, the true positive count (TP) is incremented by one; if not, the false negative count (FN) is incremented by one. The recall is then obtained as TP/(TP+FN).
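The TP/FN bookkeeping and recall formula above reduce to a single pass over the classified first targets of a scene; this is a simplified sketch with illustrative names.

```python
def recall_for_scene(track_types, scene_type):
    """Each first target whose classified track type matches the scene type
    counts as a true positive (TP); each mismatch counts as a false negative
    (FN, a missed detection). Recall = TP / (TP + FN)."""
    tp = sum(1 for t in track_types if t == scene_type)
    fn = len(track_types) - tp
    return tp / (tp + fn) if (tp + fn) > 0 else 0.0
```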
The delay information includes at least one of: a delay time, an average delay time, a maximum delay, a minimum delay, a delay variance, and a delay distribution.
For a scene of interest, the second time information, i.e., the moment at which the track type of the first target first matches the scene type, is determined; the delay time is then the difference between the second time information and the first time information (the start time of the scene).
The average delay time is the mean of the delay times of all first targets in the scene to be focused.
The maximum delay is the largest of these delay times and reflects the worst-case performance of the trajectory prediction model.
The minimum delay is the smallest of these delay times and reflects the best-case performance of the prediction model.
The delay variance is the variance of the delay times of all first targets in the scene to be focused.
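The delay statistics listed above can be aggregated as follows; the function and key names are illustrative, since the description leaves the exact aggregation interface open.

```python
from statistics import mean, pvariance


def delay_stats(scene_start_times, match_times):
    """Per-target delay is the second time information (moment the track type
    first matches the scene type) minus the first time information (scene
    start time); the returned statistics summarize those delays."""
    delays = [t2 - t1 for t1, t2 in zip(scene_start_times, match_times)]
    return {
        "mean": mean(delays),
        "max": max(delays),
        "min": min(delays),
        "variance": pvariance(delays),  # population variance over the scene
    }
```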
The above is a further explanation of step S103, and the following further explanation of step S104 is continued.
The first preset condition may be that the evaluation index is greater than an index threshold, wherein the index threshold may be a value obtained through experiments in advance.
The prediction capability of the track prediction model in a specific scene can be assessed through the evaluation index, thereby determining the performance of the track prediction model; when the performance of the track prediction model is good, the track prediction model is used to perform track prediction on the first target.
In one embodiment, the recall is compared with a recall threshold to judge the false-detection behavior of the track prediction model, which accurately characterizes the model's prediction accuracy. When the recall is greater than the recall threshold, the evaluation index is determined to meet the first preset condition.
In another embodiment, the items of delay information are compared with their corresponding thresholds to determine the performance of the trajectory prediction model. When the delay information is smaller than its threshold, the evaluation index is determined to meet the first preset condition.
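The two gating checks can be combined into a single first-preset-condition test. This is an illustrative sketch: the threshold values would come from prior experiments, and only the mean delay is shown for the delay side.

```python
def meets_first_condition(recall, recall_threshold, mean_delay, delay_threshold):
    """The model is used for online prediction of the first target only when
    recall exceeds its threshold and mean delay stays below its threshold."""
    return recall > recall_threshold and mean_delay < delay_threshold
```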
In another embodiment, as shown in FIG. 3, the recall is calculated from the true positives (TP) and false negatives (FN), and model performance is further evaluated using at least one of the recall and the delay information.
In one embodiment, as shown in FIG. 4, scene information such as the scene type, scene annotation information, track ID and first time information is obtained from a set of constructed scenes. The scene annotation information is input into a track prediction model, which outputs the predicted track of at least one second target in the scene data. Then, according to the scene type, track ID and first time information, the predicted track of the first target is obtained from the predicted tracks of the at least one second target and classified to obtain its track type, and the track is evaluated based on the classified track type and the scene type, thereby evaluating the performance of the track prediction model. This reduces the time cost of manual evaluation, improves the evaluation efficiency of the model, and allows an evaluation report of the track prediction model under multiple scenes to be generated comprehensively and rapidly. When the model performance is good, the model is then used to perform track prediction on the first target.
It should be noted that, although the foregoing embodiments describe the steps in a specific order, it will be understood by those skilled in the art that, in order to achieve the effects of the present invention, the steps are not necessarily performed in such an order, and may be performed simultaneously (in parallel) or in other orders, and these variations are within the scope of the present invention.
It will be appreciated by those skilled in the art that all or part of the methods in the above embodiments may be implemented by instructing relevant hardware through a computer program, which may be stored in a computer readable storage medium; when executed by a processor, the computer program implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable storage medium may include: any entity or device capable of carrying the computer program code, a medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory, a random access memory, an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer readable storage medium may be appropriately added or removed according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer readable storage media do not include electrical carrier signals and telecommunications signals.
The invention further provides electronic equipment. In one embodiment of an electronic device according to the present invention, as shown in fig. 5, the electronic device comprises at least one processor 51 and at least one storage device 52; the storage device may be configured to store a program for performing the multi-scene track prediction method of the above method embodiment, and the processor may be configured to execute programs in the storage device, including but not limited to the program for performing the multi-scene track prediction method of the above method embodiment. For convenience of explanation, only those portions relevant to the embodiments of the present invention are shown; for specific technical details that are not disclosed, please refer to the method portions of the embodiments of the present invention.
The electronic device in the embodiment of the invention may be a control device formed by various devices. In some possible implementations, the electronic device may include multiple storage devices and multiple processors. The program for executing the multi-scene track prediction method of the above method embodiment may be divided into multiple subprograms, stored in different storage devices, and each processor may be configured to execute the programs in one or more storage devices; the processors thus respectively execute different steps of the method and jointly implement the multi-scene track prediction method of the above method embodiment.
The multiple processors may be disposed on the same device: for example, the electronic device may be a high-performance device equipped with multiple processors. Alternatively, they may be disposed on different devices: for example, the electronic device may be a server cluster, with the processors distributed across different servers in the cluster.
Further, the invention also provides a computer readable storage medium. In one embodiment of a computer readable storage medium according to the present invention, the computer readable storage medium may be configured to store a program for performing the multi-scene track prediction method of the above method embodiment, and the program may be loaded and executed by a processor to implement that method. For ease of explanation, only the parts relevant to the embodiments of the present invention are shown; for specific technical details that are not disclosed, please refer to the method part of the embodiments of the present invention. The computer readable storage medium may be a storage device included in various electronic devices; optionally, the computer readable storage medium in the embodiments of the present invention is a non-transitory computer readable storage medium.
Further, the invention also provides a vehicle, which comprises the electronic equipment.
Thus far, the technical solution of the present invention has been described with reference to the preferred embodiments shown in the drawings; however, those skilled in the art will readily understand that the scope of protection of the present invention is not limited to these specific embodiments. Equivalent modifications or substitutions of the related technical features may be made without departing from the principles of the present invention, and the technical solutions after such modifications or substitutions fall within the scope of protection of the present invention.

Claims (10)

1. A multi-scene track prediction method, the method comprising:
acquiring scene information of a scene to be focused, wherein the scene information at least comprises a scene type, scene annotation information, a track ID of a first target and first time information, the first target is a key target focused by a user, the scene annotation information comprises a plurality of second targets, the first target belongs to the second targets, and the first time information is the starting time and the ending time of the scene focused by the user;
inputting the scene annotation information into a track prediction model, and outputting a predicted track of at least one second target in the scene annotation information;
determining an evaluation index based on the track ID of the first target, the first time information and the predicted track of the at least one second target, comprising:
acquiring annotation frame information of the first target and detection frame information of the at least one second target;
calculating an intersection-over-union between the annotation frame information of the first target and the detection frame information of each of the at least one second target;
acquiring a predicted track of the first target based on the intersection-over-union;
determining an evaluation index based on the predicted trajectory of the first target;
and under the condition that the evaluation index meets a first preset condition, carrying out track prediction on the first target by utilizing the track prediction model.
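The steps of claim 1 amount to: run the prediction model on the annotated scene, match the key target among the model's outputs by box overlap, score the matched track, and only use the model for that target when the score passes a threshold. A minimal end-to-end sketch of that control flow, where the model stub, field names, and the overlap test are illustrative assumptions rather than the patent's actual implementation:

```python
class DummyModel:
    """Stand-in for the track prediction model: for each annotated target,
    returns its detection box and a predicted track (here simply echoed)."""
    def predict(self, annotations):
        return [{"box": a["box"], "track": a["track"]} for a in annotations]

def box_overlaps(a, b):
    """Coarse axis-aligned overlap test used here in place of a full IoU."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def evaluate_then_predict(scene, model, threshold):
    """Claim-1 pipeline sketch: predict all targets, match the key target by
    box overlap, score the matched track, and return a final prediction only
    when the evaluation index passes the first preset condition."""
    results = model.predict(scene["annotations"])
    matched = next((r for r in results
                    if box_overlaps(scene["first_target_box"], r["box"])), None)
    if matched is None:
        return None
    score = scene["score_fn"](matched["track"])  # evaluation index, user-supplied here
    return matched["track"] if score >= threshold else None
```

The scoring function is left as a parameter because the patent's evaluation index (recall or delay, per the dependent claims) is computed over many scenes rather than a single one.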
2. The multi-scene track prediction method according to claim 1, wherein the determining an evaluation index based on the predicted track of the first target includes:
determining a track type of the first target based on the predicted track of the first target;
the evaluation index is determined based on the track type of the first target.
3. The method of claim 1, wherein the obtaining the annotation frame information of the first object comprises: and acquiring the annotation frame information of the first target based on the track ID of the first target and the first time information.
4. The multi-scene track prediction method according to claim 1, wherein the acquiring the predicted track of the first target based on the intersection-over-union comprises:
determining whether the maximum intersection-over-union satisfies a second preset condition;
if so, taking the predicted track corresponding to the maximum intersection-over-union as the predicted track of the first target.
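Claim 4 selects the predicted track whose detection box best overlaps the first target's annotation box. A minimal sketch of that matching step, where the corner-point box format, the threshold value standing in for the "second preset condition", and all names are assumptions, not taken from the patent:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_first_target(annotation_box, detections, iou_threshold=0.5):
    """Return the predicted track whose detection box has the largest IoU with
    the annotation box, provided that IoU passes the threshold (the 'second
    preset condition'); otherwise return None."""
    best = max(detections, key=lambda d: iou(annotation_box, d["box"]), default=None)
    if best is not None and iou(annotation_box, best["box"]) >= iou_threshold:
        return best["predicted_track"]
    return None
```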
5. The multi-scene track prediction method according to claim 2, wherein the determining the track type of the first target based on the predicted track of the first target includes:
based on the scene type, acquiring a corresponding track classification rule;
a track type of the first object is determined based on the track classification rule and a predicted track of the first object.
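Claim 5 looks up a classification rule keyed by the scene type and applies it to the predicted trajectory. A hedged sketch of that dispatch, in which the rule contents (the lateral-offset test for a cut-in, the heading reversal for a U-turn) and all thresholds are purely illustrative assumptions:

```python
# Each rule maps a predicted trajectory (list of (x, y) points in the ego
# frame, y = lateral offset) to a track type. Thresholds are illustrative,
# not taken from the patent.
TRACK_RULES = {
    "cut_in": lambda traj: "cut_in" if abs(traj[-1][1] - traj[0][1]) > 1.5 else "keep_lane",
    "u_turn": lambda traj: "u_turn" if traj[-1][0] < traj[0][0] else "straight",
}

def classify_track(scene_type, predicted_track):
    """Select the classification rule for this scene type and apply it to the
    predicted track; unknown scene types yield 'unknown'."""
    rule = TRACK_RULES.get(scene_type)
    return rule(predicted_track) if rule else "unknown"
```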
6. The multi-scene track prediction method according to claim 2, wherein the evaluation index includes a recall rate;
the determining the evaluation index based on the track type of the first target includes:
determining whether the track type of the first target is the same as the scene type;
if so, incrementing the true positive count by one; if not, incrementing the false negative count by one;
the recall is determined based on the true positive count and the false negative count.
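Across a set of annotated scenes of one scene type, claim 6 counts a true positive whenever the classified track type matches the scene type and a false negative otherwise, then computes recall. A minimal sketch (function and argument names are assumptions):

```python
def recall_from_scenes(scene_type, predicted_types):
    """Count a true positive when the predicted track type matches the
    annotated scene type, a false negative otherwise, and return
    recall = TP / (TP + FN)."""
    tp = sum(1 for t in predicted_types if t == scene_type)
    fn = len(predicted_types) - tp
    return tp / (tp + fn) if (tp + fn) else 0.0
```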
7. The multi-scene track prediction method according to claim 2, wherein the evaluation index includes delay information;
the determining the evaluation index based on the track type of the first target includes:
determining second time information when the track type of the first target is the same as the scene type;
the delay information is determined based on the first time information and the second time information.
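Claim 7's delay is the offset between the annotated scene start (from the first time information) and the first moment the predicted track type matches the scene type (the second time information). A sketch under the assumption that per-frame classifications and timestamps are available; the names are illustrative:

```python
def detection_delay(scene_start, frame_times, frame_types, scene_type):
    """Find the first time at which the predicted track type equals the scene
    type (the 'second time information') and return its offset from the
    annotated scene start time. Returns None if the type is never matched."""
    for t, typ in zip(frame_times, frame_types):
        if typ == scene_type:
            return t - scene_start
    return None
```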
8. An electronic device comprising at least one processor and at least one storage means, the storage means being adapted to store a plurality of program code, characterized in that the program code is adapted to be loaded and executed by the processor to perform the multi-scene track prediction method of any of claims 1 to 7.
9. A computer readable storage medium having stored therein a plurality of program codes, wherein the program codes are adapted to be loaded and executed by a processor to perform the multi-scene track prediction method according to any one of claims 1 to 7.
10. A vehicle, characterized in that it comprises the electronic device of claim 8.
CN202310165278.3A 2023-02-27 2023-02-27 Multi-scene track prediction method, equipment, medium and vehicle Active CN116001807B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310165278.3A CN116001807B (en) 2023-02-27 2023-02-27 Multi-scene track prediction method, equipment, medium and vehicle

Publications (2)

Publication Number Publication Date
CN116001807A CN116001807A (en) 2023-04-25
CN116001807B true CN116001807B (en) 2023-07-07

Family

ID=86019513

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310165278.3A Active CN116001807B (en) 2023-02-27 2023-02-27 Multi-scene track prediction method, equipment, medium and vehicle

Country Status (1)

Country Link
CN (1) CN116001807B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117141474B (en) * 2023-10-30 2024-01-30 深圳海星智驾科技有限公司 Obstacle track prediction method and device, vehicle controller, system and vehicle

Citations (1)

Publication number Priority date Publication date Assignee Title
CN113291321A (en) * 2021-06-16 2021-08-24 苏州智加科技有限公司 Vehicle track prediction method, device, equipment and storage medium

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
CN112078592B (en) * 2019-06-13 2021-12-24 魔门塔(苏州)科技有限公司 Method and device for predicting vehicle behavior and/or vehicle track
WO2021003379A1 (en) * 2019-07-03 2021-01-07 Waymo Llc Agent trajectory prediction using anchor trajectories
CN114829225A (en) * 2019-12-27 2022-07-29 伟摩有限责任公司 Conditional behavior prediction for autonomous vehicles
US11987265B1 (en) * 2020-07-28 2024-05-21 Waymo Llc Agent trajectory prediction using target locations
CN113753077A (en) * 2021-08-17 2021-12-07 北京百度网讯科技有限公司 Method and device for predicting movement locus of obstacle and automatic driving vehicle
CN114426032B (en) * 2022-01-05 2024-07-26 重庆长安汽车股份有限公司 Method and system for predicting track of vehicle based on automatic driving, vehicle and computer readable storage medium
CN114715145B (en) * 2022-04-29 2023-03-17 阿波罗智能技术(北京)有限公司 Trajectory prediction method, device and equipment and automatic driving vehicle
CN115257801A (en) * 2022-06-07 2022-11-01 上海仙途智能科技有限公司 Trajectory planning method and device, server and computer readable storage medium
CN115447607A (en) * 2022-09-05 2022-12-09 安徽蔚来智驾科技有限公司 Method and device for planning a vehicle driving trajectory




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant