CN112000752A - Track generation method, electronic device and storage medium - Google Patents

Track generation method, electronic device and storage medium

Info

Publication number
CN112000752A
Authority
CN
China
Prior art keywords
track
time information
bayonet
point
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010673869.8A
Other languages
Chinese (zh)
Other versions
CN112000752B (en)
Inventor
曹金磊
舒望
朱明浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202010673869.8A
Publication of CN112000752A
Application granted
Publication of CN112000752B
Legal status: Active
Anticipated expiration

Classifications

    • G06F16/29 Information retrieval; database structures; geographical information databases
    • G06N3/044 Computing arrangements based on biological models; neural networks; recurrent networks, e.g. Hopfield networks
    • G06N3/045 Computing arrangements based on biological models; neural networks; combinations of networks
    • G06N3/08 Computing arrangements based on biological models; neural networks; learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Remote Sensing (AREA)
  • Train Traffic Observation, Control, And Security (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a track generation method, an electronic device and a storage medium. The method comprises the following steps: determining an initial bayonet point; generating a first track that takes the initial bayonet point as its starting point by using a neural network, wherein the neural network is trained on real tracks and the first track comprises a plurality of bayonet points; and assigning time information to the bayonet points in the first track. In this way, the realism of the first track can be improved.

Description

Track generation method, electronic device and storage medium
Technical Field
The present application relates to the field of intelligent transportation technologies, and in particular, to a trajectory generation method, an electronic device, and a storage medium.
Background
In the face of increasingly complex road traffic conditions, a common practice is to obtain the trajectories of vehicles on roads in order to analyze those conditions. In practice, however, real historical vehicle track data is limited, so tracks have to be generated by manual simulation for analyzing road traffic conditions. Tracks generated by existing methods have a low degree of realism, which affects the accuracy of the road traffic analysis results.
Disclosure of Invention
The application provides a track generation method, an electronic device and a storage medium, which can solve the problem that tracks generated by existing methods have a low degree of realism.
In order to solve the technical problem, one technical solution adopted by the present application is to provide a trajectory generation method, comprising: determining an initial bayonet point; generating a first track that takes the initial bayonet point as its starting point by using a neural network, wherein the neural network is trained on real tracks and the first track comprises a plurality of bayonet points; and assigning time information to the bayonet points in the first track.
In order to solve the above technical problem, another technical solution adopted by the present application is: an electronic device is provided, which comprises a processor and a memory connected with the processor, wherein the memory stores program instructions; the processor is configured to execute the program instructions stored by the memory to implement the above-described method.
In order to solve the above technical problem, the present application adopts another technical solution that: there is provided a storage medium storing program instructions that when executed enable the above method to be implemented.
In this way, an initial bayonet point can be determined and input into the neural network, which generates a predicted track, namely the first track, based on the initial bayonet point. After the first track is obtained, time information can be assigned to the bayonet points in the first track. With this time information the first track is closer to a real track, so the realism of the first track can be improved.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating a first embodiment of a trajectory generation method according to the present application;
FIG. 2 is a schematic diagram of the LSTM network generation trace process of the present application;
FIG. 3 is a schematic flow chart diagram illustrating a second embodiment of the trajectory generation method of the present application;
FIG. 4 is a schematic flow chart diagram illustrating a third embodiment of the trajectory generation method of the present application;
FIG. 5 is a schematic flow chart diagram illustrating a fourth embodiment of the trajectory generation method of the present application;
FIG. 6 is a schematic flow chart of a fifth embodiment of the trajectory generation method of the present application;
FIG. 7 is a flowchart illustrating a sixth embodiment of the trajectory generation method of the present application;
FIG. 8 is a schematic flow chart diagram illustrating a seventh embodiment of the trajectory generation method of the present application;
FIG. 9 is a schematic flow chart diagram illustrating an eighth embodiment of the trajectory generation method of the present application;
FIG. 10 is a schematic flow chart diagram illustrating a ninth embodiment of the trajectory generation method of the present application;
FIG. 11 is a schematic flowchart of a tenth embodiment of the trajectory generation method of the present application;
FIG. 12 is a schematic flow chart diagram illustrating an eleventh embodiment of the trajectory generation method of the present application;
FIG. 13 is a flowchart illustrating a twelfth embodiment of the trajectory generation method of the present application;
FIG. 14 is a schematic flow chart diagram illustrating a thirteenth embodiment of the trajectory generation method of the present application;
FIG. 15 is a flowchart illustrating a fourteenth embodiment of a trajectory generation method according to the present application;
FIG. 16 is a schematic flow chart diagram illustrating a fifteenth embodiment of the trajectory generation method of the present application;
FIG. 17 is a schematic structural diagram of an embodiment of an electronic device of the present application;
FIG. 18 is a schematic structural diagram of an embodiment of a storage medium according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and "third" in this application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any indication of the number of technical features indicated. Thus, a feature defined as "first," "second," or "third" may explicitly or implicitly include at least one of the feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly specifically limited otherwise.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Fig. 1 is a schematic flowchart of a first embodiment of the trajectory generation method of the present application. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 1 is not limited in this embodiment. As shown in fig. 1, the present embodiment may include:
s110: an initial bayonet point is determined.
One or more bayonet points may be extracted from the bayonet database as initial bayonet points. And the bayonet points in the bayonet database are bayonet points included in the real track library.
The real track is the driving route information of the vehicle, and one real track corresponds to one piece of vehicle identification information. The information included in a real track may be a passing bayonet point number, a position of the bayonet point, time information of passing the bayonet point, and the like. For example, if a vehicle passes through the bayonet points 1-5 during the traveling process, the corresponding trajectory of the vehicle is { (bayonet point 1, position of bayonet point 1, time information of passing bayonet point 1), …, (bayonet point 5, position of bayonet point 5, time information of passing bayonet point 5) }.
Since the information included in a real trajectory is arranged in chronological order, the real trajectory can also be regarded as a sequence of travel route information of the vehicle. Because a real trajectory includes several types of travel route information, this sequence can be split into several sequences of different types. For example, the trajectory above includes a sequence of bayonet points {bayonet point 1, …, bayonet point 5}, a sequence of positions {position of bayonet point 1, …, position of bayonet point 5}, and a sequence of passing times {time information of passing bayonet point 1, …, time information of passing bayonet point 5}. The bayonet point sequence included in a real track is also referred to as the bayonet point sequence corresponding to that real track.
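As a purely illustrative sketch (not part of the disclosure), a real track and the per-type sequences derived from it might be represented in Python as follows; the field layout and all names and values are assumptions made for illustration.

# Illustrative sketch of a real-track record; field and variable names are assumptions.
real_track = [
    # (bayonet point id, position (lon, lat), time information of passing the point)
    ("bayonet_1", (120.161, 30.280), "2020-07-01 08:05"),
    ("bayonet_2", (120.172, 30.284), "2020-07-01 08:12"),
    ("bayonet_3", (120.185, 30.291), "2020-07-01 08:20"),
]

# The same track viewed as per-type sequences of travel route information.
bayonet_sequence  = [p[0] for p in real_track]   # bayonet point sequence
position_sequence = [p[1] for p in real_track]   # positions of the bayonet points
time_sequence     = [p[2] for p in real_track]   # time information of passing each point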
S120: a first trajectory is generated using a neural network starting at an initial bayonet point.
The neural network is trained based on real tracks, and the first track comprises a plurality of bayonet points.
If there are a plurality of initial bayonet points, a neural network may be used to generate a first trajectory starting from each of the initial bayonet points. Each first track generated by the neural network has the same length, that is, each first track comprises the same number of bayonet points. The neural network may be a long-short term memory (LSTM) network, or may be another neural network.
The LSTM network includes a plurality of neurons (time steps), each of which makes a further prediction based on the prediction of the previous neuron. Specifically, the first neuron predicts from the initial bayonet point to obtain a second bayonet point, the second neuron predicts from the second bayonet point to obtain a third bayonet point, and so on; after the last neuron has made its prediction, the first trajectory is obtained. The number of neurons in the LSTM network therefore determines the number of bayonet points in the generated first trajectory (the length of the first trajectory).
The track generation process of the LSTM is illustrated below in conjunction with fig. 2:
As shown in fig. 2, the LSTM network includes 10 neurons (1 to 10), and the generated first trajectory with the initial bayonet point as the starting point is (initial bayonet point, bayonet point 1, bayonet point 2, …, bayonet point 9, bayonet point 10), where bayonet point i (i ∈ [1,10]) denotes the i-th predicted bayonet point. Neuron 1 predicts from the initial bayonet point to obtain bayonet point 1, neuron 2 predicts from bayonet point 1 to obtain bayonet point 2, …, and neuron 10 predicts from bayonet point 9 to obtain bayonet point 10.
Since the neural network successively predicts the subsequent bayonet points based on the determined initial bayonet point, the first trajectory obtained by using the neural network may also be referred to as a bayonet point sequence. In the present application, the first trajectory is referred to as a first bayonet point sequence.
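For illustration only, the following Python sketch shows how such step-by-step generation could be realized with an LSTM that treats bayonet points as tokens; the architecture, greedy sampling, and all names and sizes are assumptions rather than the claimed implementation.

# Illustrative sketch of autoregressive bayonet-point generation with an LSTM.
# The network architecture, vocabulary size and sampling strategy are assumptions.
import torch
import torch.nn as nn

class TrajectoryLSTM(nn.Module):
    def __init__(self, num_bayonets, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(num_bayonets, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_bayonets)   # scores over all bayonet points

    def forward(self, tokens, state=None):
        x = self.embed(tokens)
        h, state = self.lstm(x, state)
        return self.out(h), state

def generate_first_track(model, initial_bayonet, length=10):
    """Start from the initial bayonet point and predict the next point step by step."""
    model.eval()
    track = [initial_bayonet]
    token = torch.tensor([[initial_bayonet]])
    state = None
    with torch.no_grad():
        for _ in range(length):                       # one step per neuron/time step
            logits, state = model(token, state)
            next_point = int(logits[0, -1].argmax())  # greedy choice of the next bayonet point
            track.append(next_point)
            token = torch.tensor([[next_point]])
    return track   # (initial bayonet point, bayonet point 1, ..., bayonet point 10)

# Usage sketch:
# model = TrajectoryLSTM(num_bayonets=500)
# print(generate_first_track(model, initial_bayonet=42))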
In addition, before the first track is generated by the neural network, the neural network can be trained so that it captures the bayonet point patterns in real tracks, which improves the realism of the tracks it generates. For the specific training process, please refer to the following embodiments.
S130: time information is given to the bayonet points in the first track.
Corresponding time information may be assigned to a bayonet point as soon as the corresponding neuron of the neural network predicts it, or time information may be assigned to every bayonet point in the first track after all neurons have finished predicting and the first track is obtained. Here, time information refers to a point in time. Specifically, please refer to the following embodiments for the method of assigning time information to the bayonet points in the first track.
Through this embodiment, an initial bayonet point can be determined and input into the neural network, which generates a predicted track, namely the first track, based on it. After the first track is obtained, time information can be assigned to the bayonet points in the first track, which brings the first track closer to a real track, so the realism of the first track can be further improved.
Fig. 3 is a flowchart illustrating a second embodiment of the trajectory generation method of the present application. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 3 is not limited in this embodiment. This embodiment is a further extension of S130 in the first embodiment, and as shown in fig. 3, this embodiment may include:
s210: and giving time information to the starting point of the first track as the starting time information of the first track.
The starting point of the first track is the initial bayonet point in the first track. Wherein, time information can be randomly given to the starting point of the first track as the starting time information of the first track; or, a time point may be selected within a preset time period and given as a starting point of the first track, as the starting time information of the first track.
The preset time period may be a time period set according to a vehicle running time rule, and of course, may also be a time period set according to other tracks.
The following description takes the preset time periods as time periods set according to the vehicle running time law, and illustrates the way of selecting a time point within a preset time period and assigning it to the starting point of the first track. The day is divided into a plurality of time periods: morning peak (7:00-9:00), morning (9:00-12:00), afternoon (12:00-17:00), evening peak (17:00-19:30), evening (19:30-22:00), and late night (22:00-3:00 of the next day). Suppose 1000 first tracks are generated: time points are selected within the morning peak period and assigned to 250 first tracks, within the morning period to 150 first tracks, within the afternoon period to 150 first tracks, within the evening peak period to 250 first tracks, within the evening period to 120 first tracks, and within the late night period to 80 first tracks.
Selecting a time point within a preset time period and assigning it to the starting point of the first track as its start time information reduces the probability that the time information of the plurality of first tracks is unreasonably distributed (for example, too many first tracks with time information in an off-peak period and too few first tracks with time information in a peak period).
When a large number of first tracks need to be assigned time information, if time information is simply assigned at random to the starting points of all generated first tracks as their start time information, there is a high probability that the number of first tracks whose start time information falls in some period is too large or too small (the distribution of start time information is unreasonable), for example, more first tracks with start time information in off-peak periods than in peak periods.
Therefore, in one embodiment of the present application, when the number of first tracks that need to be assigned time information is small, time information is randomly assigned to the starting point of each first track as its start time information; when the number is large, a time point is selected within a preset time period and assigned to the starting point of each first track as its start time information. That is, when many first tracks need time information, the time information is assigned at random within the set time periods, which reduces the probability that the distribution of the start time information of the first tracks is unreasonable.
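The following sketch illustrates, under assumed period boundaries and per-period counts taken from the example above, how start time information could be sampled within preset time periods; all names and values are assumptions.

# Illustrative sketch of assigning start time information by sampling within preset
# time periods; the period boundaries and per-period counts follow the example above.
import random
from datetime import datetime, timedelta

PERIODS = {                      # (start, end): number of first tracks
    ("07:00", "09:00"): 250,     # morning peak
    ("09:00", "12:00"): 150,     # morning
    ("12:00", "17:00"): 150,     # afternoon
    ("17:00", "19:30"): 250,     # evening peak
    ("19:30", "22:00"): 120,     # evening
    ("22:00", "27:00"): 80,      # late night (22:00 to 3:00 of the next day)
}

def random_start_times(base_day="2020-07-01"):
    """Return 1000 start times whose distribution follows the preset periods."""
    day = datetime.strptime(base_day, "%Y-%m-%d")
    times = []
    for (start, end), count in PERIODS.items():
        s_h, s_m = map(int, start.split(":"))
        e_h, e_m = map(int, end.split(":"))
        s = day + timedelta(hours=s_h, minutes=s_m)
        e = day + timedelta(hours=e_h, minutes=e_m)
        span = int((e - s).total_seconds())
        for _ in range(count):   # pick a random time point inside the period
            times.append(s + timedelta(seconds=random.randrange(span)))
    return times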
S220: and giving time information to other bayonet points in the first track based on the starting time information of the first track.
According to this embodiment, a time point can be randomly selected and assigned to the starting point of the first track as its start time information, and time information can then be assigned to the other bayonet points based on that start time information, which keeps the assignment of time information to the first track simple. Moreover, selecting the time point within a preset time period makes the distribution of the time information assigned to the plurality of first tracks better match the vehicle running time law, so the first tracks with assigned time information are closer to real tracks.
Fig. 4 is a flowchart illustrating a third embodiment of the trajectory generation method of the present application. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 4 is not limited in this embodiment. The present embodiment is a further extension of S220 in the second embodiment, and as shown in fig. 4, the present embodiment may include:
s310: and judging whether a second track matched with the first track exists in the real track library.
The real track library comprises a plurality of real tracks, and for each real track, from the starting point of the first track, the bayonet points in the first track and the bayonet points in the real track are sequentially compared to search out the real track matched with the first track from the real track library to be used as the second track.
In a specific embodiment, the condition for matching the second track with the first track may be that, among all real tracks, the second track shares the largest number of consecutive identical bayonet points with the first track. In other words, among the bayonet point sequences of the real tracks, the bayonet point sequence of the second track has the longest intersection with the first bayonet point sequence (the first track).
For example, suppose the first bayonet point sequence is {A, B, C, D, E}, the bayonet point sequence of real track 1 is {A, B, E, F, R, M, F}, and the bayonet point sequence of real track 2 is {D, L, A, B, C, D}. The intersection of the first bayonet point sequence with the bayonet point sequence of real track 1 is {A, B}, whose length is 2, while its intersection with the bayonet point sequence of real track 2 is {A, B, C, D}, whose length is 4. Real track 2 therefore shares a larger number of consecutive identical bayonet points with the first track than real track 1 does.
In another embodiment, the condition that the second track matches the first track may further include: the number of the continuous same bayonet points included in the second track and the first track exceeds a preset number threshold. Here, the preset number threshold may be set based on the number of bayonet points included in the first track, for example, the preset number threshold may be set to 1/2, 2/3 of the number of bayonet points included in the first track.
The following illustrates the process of retrieving a second track matching the first track from the real track library, with the first track as (initial bayonet point 1, bayonet point 2, bayonet point 3, … …, bayonet point 9, bayonet point 10):
First, the real tracks that include initial bayonet point 1 are found in the real track library (suppose they form set 1); then, the real tracks in set 1 in which bayonet point 2 immediately follows initial bayonet point 1 are found (suppose they form set 2); then, the real tracks in set 2 in which bayonet point 3 immediately follows bayonet point 2 are found, and so on.
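A possible retrieval sketch is shown below; it counts, for each real track, the longest run of consecutive bayonet points shared with the first track starting from the first track's starting point, and returns the best match. The optional minimum-match threshold and all names are assumptions.

# Illustrative sketch of retrieving the second track: the real track that shares the
# longest run of consecutive identical bayonet points with the first track.
def match_length(first_track, real_track):
    """Longest prefix of first_track that appears consecutively inside real_track."""
    best = 0
    for start in range(len(real_track)):
        if real_track[start] != first_track[0]:
            continue
        n = 0
        while (n < len(first_track) and start + n < len(real_track)
               and real_track[start + n] == first_track[n]):
            n += 1
        best = max(best, n)
    return best

def find_second_track(first_track, real_track_library, min_match=1):
    best_track, best_len = None, 0
    for real_track in real_track_library:
        n = match_length(first_track, real_track)
        if n > best_len:
            best_track, best_len = real_track, n
    return (best_track, best_len) if best_len >= min_match else (None, 0)

# Example from the description: real track 2 matches with length 4, real track 1 with length 2.
first = ["A", "B", "C", "D", "E"]
library = [["A", "B", "E", "F", "R", "M", "F"], ["D", "L", "A", "B", "C", "D"]]
print(find_second_track(first, library))   # (['D', 'L', 'A', 'B', 'C', 'D'], 4)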
If a second track matched with the first track exists, executing S320; if there is no second track matching the first track, S330 is performed.
S320: obtaining time information of bayonet points which are continuously the same as the second track in the first track on the basis of the initial time information of the first track and the time information of the bayonet points which are continuously the same as the first track in the second track; and for the remaining bayonet points in the first track, offsetting the time information of the bayonet point before the bayonet point to obtain the time information of the bayonet point.
For convenience of description later, in the present application, a sequence in which a first bayonet point sequence intersects with a bayonet point sequence of a second track is referred to as an intersecting bayonet point sequence, and time information of a first bayonet point in the intersecting bayonet point sequence in the second track is referred to as start time information of the second track.
In the case where a second track matching the first track exists in the real track library, time information may be given to other bayonet points in the first track based on an offset between start time information of the first track with respect to start time information of the second track.
For example, the intersection bayonet point sequence is { initial bayonet point 1, bayonet point 2, bayonet point 3, bayonet point 4, bayonet point 5}, the time information sequence corresponding to the intersection bayonet point sequence in the second track is {11:30,11:35,11:38,11:46,11:50}, the start time information in the first track is 12:10, and the offset between the start time information of the first track and the time information of the initial bayonet point 1 in the second track is 40 min. Therefore, the time information assigned to the bayonet points 2 to 5 in the first track is 12:15, 12:18, 12:26, and 12:30, and the time information sequence corresponding to the intersection bayonet point sequence in the first track is {12:10, 12:15, 12:18, 12:26, 12:30 }.
If the offset is 0, that is, the offset between the start time information of the first track and the start time information of the second track is 0, the time information sequence corresponding to the intersecting bayonet point sequence in the second track can be used directly as the time information sequence corresponding to the intersecting bayonet point sequence in the first track. Alternatively, the time information of the corresponding bayonet points in the first track can be obtained based on the offsets between the time information of adjacent bayonet points of the intersecting bayonet point sequence in the second track. The latter is illustrated below:
the intersection bayonet point sequence is { initial bayonet point 1, bayonet point 2, bayonet point 3, bayonet point 4 and bayonet point 5}, the time information sequence corresponding to the intersection bayonet point sequence in the second track is {11:30,11:35,11:38,11:46,11:50}, the initial time information in the first track is 12:10, the offset between the time information of the initial bayonet point 1 and the time information of the bayonet point 2 in the second track is 5min, and time information 12:15 is given to the bayonet point 2 in the first track; in the second track, the offset between the time information of the bayonet point 2 and the bayonet point 3 in the second track is 3min, and then the time information 12:18, … is given to the bayonet point 3 in the first track, and so on.
If the first track includes bayonet points beyond the intersecting bayonet point sequence, that is, the bayonet point sequence of the second track only partially intersects the first bayonet point sequence, then for each remaining bayonet point, the time information of the previous bayonet point may be offset to obtain its time information.
The offset applied to the time information of the previous bayonet point may be determined based on the distance between the current bayonet point and the previous one, traffic flow conditions, and the like, for example 10 min, 8 min or 5 min. Alternatively, an offset range may be set and the offset kept within that range, where the range may be determined based on the distance between adjacent bayonet points, traffic flow conditions, and the like, for example [3 min, 10 min].
For example, if the first trajectory further includes remaining bayonet points { bayonet point 6, bayonet point 7, bayonet point 8, bayonet point 9, bayonet point 10}, the time information of bayonet point 5 is shifted to obtain the time information of bayonet point 6, the obtained time information of bayonet point 6 is shifted to obtain the time information of bayonet point 7, the obtained time information of bayonet point 7 is shifted to obtain the time information of bayonet point 8, the obtained time information of bayonet point 8 is shifted to obtain the time information of bayonet point 9, and the obtained time information of bayonet point 9 is shifted to obtain the time information of bayonet point 10.
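For illustration, the following sketch assigns time information as in S320: the times of the intersecting bayonet points are shifted by the offset between the two start times, and each remaining bayonet point takes the previous time plus a random offset within an assumed range; all names and values are assumptions.

# Illustrative sketch of S320: shift the second track's time information by the offset
# between the two start times, then extrapolate the remaining points with a bounded
# random offset (minutes).
import random
from datetime import datetime, timedelta

def assign_times(first_track, start_time, second_times, offset_range=(3, 10)):
    """second_times covers the intersecting bayonet points; the rest are extrapolated."""
    offset = start_time - second_times[0]          # e.g. 40 min in the example above
    times = [t + offset for t in second_times]     # intersecting bayonet points
    while len(times) < len(first_track):           # remaining bayonet points
        step = timedelta(minutes=random.randint(*offset_range))
        times.append(times[-1] + step)
    return times

# Example from the description: second track times 11:30, 11:35, 11:38, 11:46, 11:50
# and a first-track start time of 12:10 give 12:10, 12:15, 12:18, 12:26, 12:30.
second_times = [datetime(2020, 7, 1, 11, m) for m in (30, 35, 38, 46, 50)]
start_time = datetime(2020, 7, 1, 12, 10)
first_track = ["P1", "P2", "P3", "P4", "P5", "P6", "P7"]
print(assign_times(first_track, start_time, second_times))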
S330: and offsetting the time information of the bayonet point in the first track to obtain the time information of the next bayonet point.
Under the condition that a second track matched with the first track does not exist in the real track library, the time information of the bayonet point in the first track can be directly shifted to obtain the time information of the next bayonet point.
The offset of the time information of the bayonet point in the first trajectory may be determined based on the distance between the bayonet point and the next bayonet point, traffic flow conditions, and the like. Alternatively, an offset range may be set within which the offset of the time information of the bayonet point is within, wherein the offset may be determined based on the distance between each adjacent bayonet point, the traffic flow condition, and the like.
Still describing with the first trajectory as (initial bayonet point 1, bayonet point 2, bayonet point 3, … …, bayonet point 9, bayonet point 10), the time information of initial bayonet point 1 may be shifted to obtain the time information of bayonet point 2, the obtained time information of bayonet point 2 may be shifted to obtain the time information of bayonet point 3, …, and so on.
According to the embodiment, whether the real track matched with the first track exists in the real track library or not is judged, and the mode of giving time information to other bayonet points in the first track is determined according to the judgment result, so that the time information distribution of the first track can be more reasonable, and the distance between the first track and the real track can be shortened.
Fig. 5 is a flowchart illustrating a fourth embodiment of the trajectory generation method of the present application. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 5 is not limited in this embodiment. This embodiment is a further extension of S220 in the second embodiment, and as shown in fig. 5, this embodiment may include:
s410: and judging whether a second track matched with the first track exists in the real track library.
Unlike S310 described above, in the present embodiment, the conditions for matching the second trajectory with the first trajectory include: the second track and the first track comprise bayonet points which are completely the same.
The second track and the first track comprise bayonet points which are completely the same, that is, the bayonet point sequence of the second track comprises the first bayonet point sequence. In other words, the intersection bayonet point sequence between the bayonet point sequence of the second trajectory and the first bayonet point sequence is the first bayonet point sequence.
For example, the first bayonet point sequence is { A, B, C, D, E }, the bayonet point sequence of the second track is { M, N, A, B, C, D, E, F }, or { A, B, C, D, E, F, M, N }, etc.
The process of retrieving whether there is a second track matching the first track from the real track library can be referred to the description of the third embodiment, and will not be repeated here.
If yes, go to step S420, and if not, go to step S430.
S420: and obtaining the time information of other bayonet points in the first track based on the starting time information of the first track and the time information of the bayonet points in the second track.
Other bayonet points in the first track may be given time information based on an offset of the start time information of the first track relative to the start time information of the second track. For a specific implementation process, please refer to the description of S320 above, which is not repeated here.
S430: and offsetting the time information of the bayonet point in the first track to obtain the time information of the next bayonet point.
For a detailed description of this step, please refer to the description of S330, which is not repeated here.
Fig. 6 is a flowchart illustrating a fifth embodiment of the trajectory generation method of the present application. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 6 is not limited in this embodiment. The present embodiment is a further extension of S220. As shown in fig. 6, the present embodiment may include:
s510: and judging whether a second track matched with the first track exists in the real track library.
The condition that the second track is matched with the first track comprises the following conditions: the second track and the first track comprise the largest number of continuous same bayonet points, and the starting time information of the second track is the same as that of the first track.
The start time information of the second track is the time information of the initial bayonet point in the second track, that is, the time information of the first bayonet point in the aforementioned intersecting bayonet point sequence in the second track.
For example, the first bayonet point sequence is { a, B, C, D, E }, the bayonet point sequence of the second track is { D, L, a, B, C, D }, the intersection bayonet point sequence is { a, B, C, D }, and the start time information of the second track is the time information of a in { a, B, C, D }.
If yes, executing S520; if not, go to S530.
S520: obtaining time information of bayonet points which are continuously the same as the second track in the first track on the basis of the initial time information of the first track and the time information of the bayonet points which are continuously the same as the first track in the second track; and for the remaining bayonet points in the first track, offsetting the time information of the previous bayonet point of the bayonet points to obtain the time information of the bayonet points.
Different from the third embodiment, the condition for matching the second track with the first track in this embodiment requires not only that the second track and the first track include the largest number of consecutive identical bayonet points, but also that the start time information of the second track is the same as that of the first track. Therefore, the offset of the start time information of the first track relative to the start time information of the second track is 0, and time information can be assigned to the consecutive identical bayonet points included in the first track and the second track as described for the case where the offset is 0 in S320.
S530: and offsetting the time information of the bayonet point in the first track to obtain the time information of the next bayonet point.
Please refer to the above step S330 for detailed description, which is not repeated here.
Fig. 7 is a flowchart illustrating a sixth embodiment of the trajectory generation method of the present application. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 7 is not limited in this embodiment. The present embodiment is a further extension of S220. As shown in fig. 7, the present embodiment may include:
s610: and judging whether a second track matched with the first track exists in the real track library.
The condition that the second track is matched with the first track comprises the following conditions: the second track and the first track comprise bayonet points which are completely the same, and the starting time information of the second track is the same as the starting time information of the first track.
If yes, S620 is executed, and if not, S630 is executed.
S620: and obtaining the time information of other bayonet points in the first track based on the starting time information of the first track and the time information of each bayonet point in the second track.
Unlike the fourth embodiment, the condition for matching the second track with the first track in this embodiment requires not only that the bayonet points included in the second track and the first track are completely the same, but also that their start time information is the same. Therefore, the offset of the start time information of the first track relative to that of the second track is 0, and the time information of the bayonet points of the intersecting bayonet point sequence in the second track can be used directly as the time information in the first track.
S630: and shifting the time information of the bayonet point in the first track to obtain the time information of the next bayonet point.
For a detailed description of this step, reference is made to the previous embodiments, which are not repeated here.
In addition, in a specific embodiment of the present application, on top of the above embodiments, if there are bayonet points with a track end symbol in the first track, then after time information is assigned to the bayonet points in the first track (S130), the first track may be cut based on the track end symbol to obtain third tracks, and the third tracks may be subjected to subsequent processing. One or more third tracks may be obtained by cutting. After cutting, third tracks that are too short can be removed.
It will be understood that the track end symbol occupies a position in the first track like a bayonet point, and a bayonet point immediately preceding a track end symbol is referred to in this application as a bayonet point with a track end symbol. For example, if the first track is (A, B, C, D, eos, E, F, G, H, eos), where "eos" is the end symbol, then D and H may be referred to as bayonet points with a track end symbol, and the first track may be cut into two third tracks, (A, B, C, D) and (E, F, G, H), based on the end symbol.
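A minimal sketch of the cutting step, assuming an "eos" end symbol and an assumed minimum length for filtering out third tracks that are too short:

# Illustrative sketch of cutting the first track into third tracks at the "eos" symbol.
def cut_into_third_tracks(first_track, end_symbol="eos", min_length=3):
    third_tracks, current = [], []
    for point in first_track:
        if point == end_symbol:
            if len(current) >= min_length:
                third_tracks.append(current)
            current = []
        else:
            current.append(point)
    if len(current) >= min_length:       # trailing segment without a closing "eos"
        third_tracks.append(current)
    return third_tracks

print(cut_into_third_tracks(["A", "B", "C", "D", "eos", "E", "F", "G", "H", "eos"]))
# -> [['A', 'B', 'C', 'D'], ['E', 'F', 'G', 'H']]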
In another embodiment of the present application, before time information is assigned to the bayonet points in the first track, the first track may be cut into a third track based on a track end symbol carried by the bayonet points in the first track, and then time information may be assigned to the third track. The method comprises the following specific steps:
Fig. 8 is a flowchart illustrating a seventh embodiment of the trajectory generation method of the present application. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 8 is not limited in this embodiment. The present embodiment is a further extension of S130, and as shown in fig. 8, the present embodiment may include:
s710: based on the track end symbol, the first track is divided into third tracks.
In this embodiment, unlike the second embodiment, when there are bayonet points with a track end symbol in the first track, the third tracks are obtained by cutting the first track based on the track end symbol before time information is assigned to the bayonet points in the first track. In addition, after the third tracks are obtained, they may be filtered to remove third tracks that are too short.
S720: and giving time information to the starting point of the third track as the starting time information of the third track.
Time information may be randomly assigned to the start point of the third track as the start time information of the third track, or a time point may be selected within a preset time period and assigned to the start point of the third track as the start time information of the third track.
Different from the second embodiment, in this step, the starting point of the third track is the first bayonet point in the third track, and the time information of the first bayonet point of the third track is the start time information of the third track.
S730: and giving time information to other bayonet points of the third track based on the starting time information of the third track.
For a detailed description of the present embodiment, refer to the second embodiment, which is not repeated here.
Fig. 9 is a flowchart illustrating an eighth embodiment of the trajectory generation method of the present application. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 9 is not limited in this embodiment. The present embodiment is a further extension of S730, as shown in fig. 9, the present embodiment may include:
s810: and judging whether a real track matched with the third track exists in the real track library.
The condition that the real track is matched with the third track comprises the following steps: the second track and the third track comprise the largest number of continuous identical bayonet points.
If yes, go to S820; if not, S830 is executed.
S820: obtaining time information of bayonet points which are continuously the same as the second track in the third track based on the initial time information of the third track and the time information of the bayonet points which are continuously the same as the third track in the second track; and for the remaining bayonet points in the third track, offsetting the time information of the bayonet point before the bayonet point to obtain the time information of the bayonet point.
S830: and offsetting the time information of the bayonet point in the third track to obtain the time information of the next bayonet point.
For a detailed description of the present embodiment, please refer to the description of the third embodiment, which is not repeated here.
Fig. 10 is a flowchart illustrating a ninth embodiment of the trajectory generation method of the present application. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 10 is not limited in this embodiment. The present embodiment is a further extension of S730, and as shown in fig. 10, the present embodiment may include:
s910: and judging whether a second track matched with the third track exists in the real track library.
The condition that the second track is matched with the third track comprises the following steps: the second track and the third track comprise bayonet points which are completely the same.
If yes, go to step S920; if not, go to S930.
S920: and obtaining the time information of other bayonet points in the third track based on the starting time information of the third track and the time information of the bayonet point in the second track.
S930: and offsetting the time information of the bayonet point in the third track to obtain the time information of the next bayonet point.
For a detailed description of the present embodiment, reference is made to the fourth embodiment, which is not repeated here.
Fig. 11 is a flowchart illustrating a tenth embodiment of the trajectory generation method according to the present application. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 11 is not limited in this embodiment. The present embodiment is a further extension of S730, as shown in fig. 11, the present embodiment may include:
s1010: and judging whether a real track matched with the third track exists in the real track library.
The condition that the real track is matched with the third track comprises the following steps: the second track and the third track comprise the largest number of continuous same bayonet points, and the start time information of the second track and the third track is the same.
The starting time information of the second track is the time information of the initial bayonet point in the second track.
If so, go to S1020; if not, S1030 is performed.
S1020: obtaining time information of bayonet points which are continuously the same as the second track in the third track based on the initial time information of the third track and the time information of the bayonet points which are continuously the same as the third track in the second track; and for the remaining bayonet points in the third track, offsetting the time information of the bayonet point before the bayonet point to obtain the time information of the bayonet point.
S1030: and offsetting the time information of the bayonet point in the third track to obtain the time information of the next bayonet point.
For a detailed description of the present embodiment, reference is made to the fifth embodiment, which is not repeated here.
Fig. 12 is a flowchart illustrating an eleventh embodiment of the trajectory generation method according to the present application. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 12 is not limited in this embodiment. The present embodiment is a further extension of S730, as shown in fig. 12, the present embodiment may include:
s1110: and judging whether a second track matched with the third track exists in the real track library.
The condition that the second track is matched with the third track comprises the following steps: the second track and the third track comprise the same bayonet points, and the start time information of the second track and the third track is the same.
The starting time information of the second track is the time information of the initial bayonet point in the second track.
If so, go to S1120; if not, then step S1130 is performed.
S1120: and obtaining the time information of other bayonet points in the third track based on the starting time information of the third track and the time information of the bayonet point in the second track.
S1130: and shifting the time information of the bayonet point in the third track to obtain the time information of the next bayonet point.
For a detailed description of the present embodiment, reference is made to the sixth embodiment, which is not repeated here.
In this application, on the basis of the above embodiment, after the third track having the time information is obtained, the third track may be bound with the vehicle identification information based on the time information of the third track.
In a specific embodiment, the binding of the third track with the vehicle identification information based on the time information of the third track may be implemented as follows:
fig. 13 is a flowchart illustrating a twelfth embodiment of the trajectory generation method according to the present application. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 13 is not limited in this embodiment. The present application is a further extension of the above-described embodiments. As shown in fig. 13, the present embodiment may include:
s1210: and arranging the plurality of third tracks according to the time sequence to obtain a third track sequence.
The third tracks may be sorted according to the time (time information) sequence of the first bayonet point in each third track, or according to the time sequence of the last bayonet point, so as to obtain a third track sequence.
S1220: and binding the pre-stored plurality of pieces of vehicle identification information with a third track in the third track sequence in turn.
The pre-stored vehicle identification information may be vehicle identification information corresponding to a real track in a real track library. One piece of vehicle identification information may include license plate number, license plate color, license plate type, etc. Each of the third tracks may be bound with one piece of vehicle identification information.
Explaining a process of binding the vehicle identification information and the third track in turn, the existing 5 pieces of vehicle identification information are vehicle identification information 1, vehicle identification information 2, …, and vehicle identification information 5, respectively. The third trajectory sequence includes { third trajectory 1, third trajectory 2, …, third trajectory 10}, then (first round) the third trajectory 1 is bound with the vehicle identification information 1, the third trajectory 2 is bound with the vehicle identification information 2, …, the third trajectory 5 is bound with the vehicle identification information 5, (second round) the third trajectory 6 is bound with the vehicle identification information 1, the third trajectory 7 is bound with the vehicle identification information 2, …, and the third trajectory 10 is bound with the vehicle identification information 5.
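The round-robin binding of S1210-S1220 could be sketched as follows; the representation of a third track as a record with a list of times, and all names, are assumptions.

# Illustrative sketch of S1210-S1220: sort the third tracks by time and bind the
# pre-stored vehicle identification information in turn (round robin).
from itertools import cycle

def bind_in_turn(third_tracks, vehicle_ids, key=lambda track: track["times"][0]):
    """third_tracks: list of dicts with a 'times' list; vehicle_ids: pre-stored IDs."""
    ordered = sorted(third_tracks, key=key)        # third track sequence, by start time
    bindings = []
    for track, vid in zip(ordered, cycle(vehicle_ids)):
        bindings.append((track, vid))
    return bindings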
Through this embodiment, after the plurality of third tracks have been assigned time information, they are sorted by time to obtain the third track sequence and then bound to the pre-stored vehicle identification information in turn, so that the vehicle identification information bound to the third tracks is distributed more reasonably. This reduces the probability that several third tracks in the same time period are bound to the same vehicle identification information, and thereby reduces the probability of fake-licensed vehicles appearing (the same vehicle identification information corresponding to third tracks with the same time information) and of violations of common-sense logic (the time information offset between third tracks corresponding to the same vehicle identification information being too small).
In another specific embodiment, the binding of the third track with the vehicle identification information based on the time information of the third track may be implemented as follows:
fig. 14 is a flowchart illustrating a thirteenth embodiment of the trajectory generation method according to the present application. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 14 is not limited in this embodiment. The present application is a further extension of the above embodiment, and as shown in fig. 14, the present embodiment may include:
s1310: and selecting one piece of vehicle identification information as the current vehicle identification information.
One piece of vehicle identification information may be selected from pre-stored vehicle identification information as the current vehicle identification information.
S1320: and selecting a third track for the current vehicle identification information as the current third track.
One of the third tracks not bound with the vehicle identification information may be selected as the current third track.
S1330: and judging whether the offset between the time information of the third track bound with the current vehicle identification information and the time information of the current third track is greater than a preset offset threshold value or not.
A protection threshold (preset offset threshold) may be set for each piece of vehicle identification information, and before the current third track is bound to the vehicle identification information, it may be determined whether an offset between the time information of the third track bound to the current vehicle identification information and the time information of the current third track is greater than the protection threshold.
If yes, executing S1340; if not, the process goes to S1320 to repeat the above steps S1320-S1330.
S1340: and binding the current vehicle identification information with the current third track.
If the offset is greater than the preset offset threshold, the current vehicle identification information can be bound with the current third track. This reduces the possibility that third tracks in the same time period are bound to the same vehicle identification information, and thereby further reduces the probability of fake-licensed vehicles appearing and of violations of common-sense logic.
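An illustrative sketch of this binding strategy (S1310-S1340), with an assumed protection threshold and assumed data layout:

# Illustrative sketch of S1310-S1340: bind a third track to the current vehicle
# identification only if its start time is at least a protection threshold away from
# every third track already bound to that identification.
from datetime import timedelta

def bind_with_threshold(third_tracks, vehicle_ids, threshold=timedelta(hours=1)):
    bound = {vid: [] for vid in vehicle_ids}        # tracks already bound per vehicle id
    unbound = list(third_tracks)
    for vid in vehicle_ids:                         # S1310: current vehicle identification
        for track in list(unbound):                 # S1320: candidate current third track
            start = track["times"][0]
            # S1330: offset to every track already bound to this id must exceed the threshold
            if all(abs(start - other["times"][0]) > threshold for other in bound[vid]):
                bound[vid].append(track)            # S1340: bind
                unbound.remove(track)
    return bound, unbound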
In yet another embodiment, the binding of the third track with the vehicle identification information based on the time information of the third track may be implemented as follows:
fig. 15 is a flowchart illustrating a fourteenth embodiment of the trajectory generation method according to the present application. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 15 is not limited in this embodiment. The present application is a further extension of the above-described embodiments. As shown in fig. 15, the present embodiment may include:
s1410: a third track is selected as the current third track.
S1420: and selecting one piece of vehicle identification information for the current third track as the current vehicle identification information.
S1430: and judging whether the offset between the time information of the third track bound with the current vehicle identification information and the time information of the current third track is greater than a preset offset threshold value or not.
If so, go to S1440; if not, the process jumps to S1420 to repeat the above steps S1420-S1430.
S1440: and binding the current third track with the current vehicle identification information.
If the offset is greater than the preset offset threshold, the current third track can be bound with the current vehicle identification information. This reduces the possibility that third tracks in the same time period are bound to the same vehicle identification information, and thereby reduces the probability of fake-licensed vehicles appearing and of violations of common-sense logic.
In order to improve the accuracy of the prediction result of the bayonet point of the LSTM network, in the above embodiment, before the LSTM network is used to generate the first track, the process of training the LSTM network may specifically be as follows:
fig. 16 is a flowchart illustrating a fifteenth embodiment of the trajectory generation method of the present application. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 16 is not limited in this embodiment. The present application is a further extension of the above-described embodiments. As shown in fig. 16, the present embodiment may include:
s1510: and acquiring a training track set.
The training trajectory set includes a plurality of real trajectories, which may be derived based on a library of original real trajectories.
Because the original real track library contains real tracks that are too short or too long, the real tracks in it need to be cleaned and denoised in advance to filter out those tracks.
Because the real tracks remaining after filtering have inconsistent lengths, while the track length used for LSTM network training is fixed, all the real tracks in the filtered track library can be spliced together, with the track ending symbol 'eos' appended as the mark for ending each real track; the spliced sequence is then cut according to the fixed length (number of bayonet points) to obtain a plurality of processed real tracks, which form the final real track library. A training track set is then extracted from this real track library.
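The splice-and-cut preprocessing described above may, for example, be implemented as in the following Python sketch, assuming each real track is a list of bayonet-point identifiers; the length bounds and the fixed length of 10 bayonet points are example values, not values fixed by the embodiment.

```python
EOS = "eos"   # track ending symbol appended after each real track

def build_fixed_length_tracks(raw_tracks, min_len=3, max_len=50, seq_len=10):
    # Clean/denoise: drop real tracks that are too short or too long.
    kept = [t for t in raw_tracks if min_len <= len(t) <= max_len]

    # Splice all remaining tracks into one sequence, marking each track end with 'eos'.
    spliced = []
    for track in kept:
        spliced.extend(track)
        spliced.append(EOS)

    # Cut the spliced sequence into pieces containing a fixed number of bayonet points.
    return [spliced[i:i + seq_len]
            for i in range(0, len(spliced) - seq_len + 1, seq_len)]
```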
In addition, the final real track library can be divided into three parts: one part serves as the training track set, another part serves as the test track set, and the remaining part serves as the verification track set. The training track set is used for training the LSTM network, the test track set is used for testing the LSTM network during training, and the verification track set is used for verifying the LSTM network during training, so that the final prediction results of the LSTM network are more accurate.
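A simple way to perform this three-way division is sketched below; the 8:1:1 proportions and the use of a random shuffle are assumptions, since the embodiment does not specify the split ratio.

```python
import random

def split_track_library(tracks, train_ratio=0.8, test_ratio=0.1, seed=0):
    """Divide the final real track library into training, test, and verification sets."""
    tracks = tracks[:]                       # copy so the original library is untouched
    random.Random(seed).shuffle(tracks)
    n_train = int(len(tracks) * train_ratio)
    n_test = int(len(tracks) * test_ratio)
    train = tracks[:n_train]
    test = tracks[n_train:n_train + n_test]
    verify = tracks[n_train + n_test:]
    return train, test, verify
```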
For simplicity of description, this embodiment only illustrates the process of training the LSTM network with the training track set.
S1520: inputting the training track set into the LSTM network, so that the LSTM network respectively predicts the next bayonet point of each bayonet point included by each real track in the training track set.
The prediction process for each bayonet point in a real track is described below, taking an LSTM network containing 10 neurons as an example:
The real track (a, b, c, d, e, f, g, h, i, j) is input into the LSTM network; neuron 1 in the LSTM network predicts from a to obtain the prediction result a' of the next bayonet point of a, neuron 2 predicts from b to obtain the prediction result b' of the next bayonet point of b, ..., neuron 9 predicts from i to obtain the prediction result i' of the next bayonet point of i, and neuron 10 predicts from j to obtain the prediction result j' of the next bayonet point of j.
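The per-position prediction described above can be realized with a standard sequence model. The following PyTorch sketch shows one plausible structure in which the network outputs, at every position of the input track, a distribution over the next bayonet point; the class name, embedding size, and hidden size are assumptions, and bayonet points (including the 'eos' symbol) are assumed to be encoded as integer indices.

```python
import torch
import torch.nn as nn

class NextBayonetLSTM(nn.Module):
    """Predicts, at every position of an input track, the next bayonet point."""
    def __init__(self, num_bayonet_points, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(num_bayonet_points, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_bayonet_points)

    def forward(self, tracks):                 # tracks: (batch, seq_len) integer indices
        h, _ = self.lstm(self.embed(tracks))   # (batch, seq_len, hidden_dim)
        return self.out(h)                     # logits for the next bayonet point at each step
```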
S1530: based on the prediction results, the predicted loss of the LSTM network is calculated.
For example, the loss function CrossEntropyLoss may be used to calculate the predicted loss of the LSTM.
S1540: parameters of the LSTM network are adjusted based on the predicted loss.
During training, the LSTM network can capture the regularities of the real tracks, and thus can generate a track (the aforementioned first track) based on the captured regularities of the real tracks. In addition, in this embodiment, the parameters of the LSTM network are adjusted based on the prediction results of the training process, so that the similarity between the first track generated by the LSTM network and a real track, that is, the realism of the generated first track, can be improved.
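As a hedged illustration of how a trained network of this kind could generate a first track from an initial bayonet point, the following sketch samples the next bayonet point repeatedly until the 'eos' symbol or a maximum length is reached. Sampling from the predicted distribution and the maximum length are assumptions, as the embodiment does not state here how the prediction result is turned into the next bayonet point.

```python
import torch

@torch.no_grad()
def generate_first_track(model, start_point, eos_index, max_len=50):
    """Generate a track starting from `start_point` using the trained model."""
    model.eval()
    track = [start_point]
    for _ in range(max_len - 1):
        inputs = torch.tensor([track])                      # (1, current_length)
        logits = model(inputs)[0, -1]                       # logits for the next bayonet point
        next_point = torch.distributions.Categorical(logits=logits).sample().item()
        track.append(next_point)
        if next_point == eos_index:                         # stop at the track ending symbol
            break
    return track
```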
Fig. 17 is a schematic structural diagram of an embodiment of an electronic device according to the present application. As shown in fig. 17, the electronic device includes a processor 1610 and a memory 1620 coupled to the processor 1610.
The memory 1620 stores program instructions for implementing the method of any of the above embodiments; the processor 1610 is configured to execute the program instructions stored in the memory 1620 to implement the steps of the above-described method embodiments. The processor 1610 may also be referred to as a CPU (Central Processing Unit). The processor 1610 may be an integrated circuit chip having signal processing capabilities. The processor 1610 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
FIG. 18 is a schematic structural diagram of an embodiment of a storage medium according to the present application. As shown in fig. 18, the storage medium 1700 of the embodiment of the present application stores program instructions 1710, and the program instructions 1710, when executed, implement the methods provided by the above-mentioned embodiments of the present application. The program instructions 1710 may form a program file stored in the storage medium 1700 in the form of a software product, so that a computer device (which may be a personal computer, a server, or a network device) or a processor executes all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium 1700 includes various media capable of storing program code, such as a USB disk, a mobile hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, or terminal devices such as a computer, a server, a mobile phone, and a tablet.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. The above embodiments are merely examples and are not intended to limit the scope of the present disclosure, and all modifications, equivalents, and flow charts using the contents of the specification and drawings of the present disclosure or those directly or indirectly applied to other related technical fields are intended to be included in the scope of the present disclosure.

Claims (17)

1. A trajectory generation method, comprising:
determining an initial bayonet point;
generating a first track with the initial bayonet point as a starting point by utilizing a neural network, wherein the neural network is obtained based on real track training, and the first track comprises a plurality of bayonet points;
and giving time information to the bayonet points in the first track.
2. The method of claim 1, wherein said assigning time information to said bayonet point in said first track comprises:
giving the time information to the starting point of the first track as the starting time information of the first track;
and giving the time information to other bayonet points in the first track based on the starting time information of the first track.
3. The method according to claim 2, wherein the assigning time information to the start of the first track as the start time information of the first track comprises:
randomly assigning the time information to the start point of the first track as the start time information of the first track, or,
and selecting a time point in a preset time period and assigning the time point to the starting point of the first track as the starting time information of the first track.
4. The method of claim 3, wherein assigning the time information to other bayonet points in the first track based on the start time information of the first track comprises:
judging whether a second track matched with the first track exists in a real track library, wherein the condition that the second track is matched with the first track comprises the following conditions: the second track and the first track comprise the largest number of continuous same bayonet points;
if the second track exists, obtaining time information of the bayonet point in the first track which is continuously the same as the second track on the basis of the initial time information of the first track and the time information of the bayonet point in the second track which is continuously the same as the first track; for the remaining bayonet points in the first track, offsetting the time information of the bayonet point which is one bayonet point before the bayonet point to obtain the time information of the bayonet point;
and if the time information does not exist, shifting the time information of the bayonet point in the first track to obtain the time information of the next bayonet point.
5. The method of claim 3, wherein assigning the time information to other bayonet points in the first track based on the start time information of the first track comprises:
judging whether a second track matched with the first track exists in a real track library, wherein the condition that the second track is matched with the first track comprises the following conditions: the second track and the first track comprise bayonet points which are completely the same;
if the second track exists, obtaining time information of other bayonet points in the first track based on the initial time information of the first track and the time information of the bayonet points in the second track;
and if the time information does not exist, shifting the time information of the bayonet point in the first track to obtain the time information of the next bayonet point.
6. The method of claim 4 or 5, wherein the condition that the second trajectory matches the first trajectory further comprises:
the second track has the same starting time information as the first track, and the starting time information of the second track is the time information of the initial bayonet point in the second track.
7. The method of claim 1, wherein a bayonet point with a track end symbol exists in the first track, and after the assigning time information to the bayonet point in the first track, the method further comprises:
based on the track end symbol, the first track is divided into third tracks.
8. The method according to claim 1, wherein the bayonet point with a track end character exists in the first track, and before giving time information to the bayonet point in the first track, the method comprises:
based on the track end symbol, cutting the first track into third tracks;
the giving time information to the bayonet point in the first track includes:
giving the time information to the starting point of the third track as the starting time information of the third track, wherein the starting point of the third track is a first bayonet point in the third track;
and giving time information to other bayonet points in the third track based on the starting time information of the third track.
9. The method according to claim 8, wherein said assigning the time information to the start of the third track as the start time information of the third track comprises:
randomly assigning the time information to the start point of the third track as the start time information of the third track, or,
and selecting a time point in a preset time period and assigning the time point to the starting point of the third track as the starting time information of the third track.
10. The method of claim 9, wherein assigning the time information to other bayonet points in the third track based on the start time information of the third track comprises:
judging whether a second track matched with the third track exists in a real track library, wherein the condition that the second track is matched with the third track comprises the following conditions: the second track and the third track comprise the largest number of continuous identical bayonet points;
if the second track exists, obtaining time information of the bayonet point in the third track which is continuously the same as the second track on the basis of the initial time information of the third track and the time information of the bayonet point in the second track which is continuously the same as the third track; for the remaining bayonet points in the third track, offsetting the time information of the bayonet point before the bayonet point to obtain the time information of the bayonet point, wherein the offset of the time information of the bayonet point before the bayonet point is within a second offset range;
and if the time information does not exist, shifting the time information of the bayonet point in the third track to obtain the time information of the next bayonet point.
11. The method of claim 9, wherein assigning the time information to other bayonet points in the third track based on the start time information of the third track comprises:
judging whether a second track matched with the third track exists in a real track library, wherein the condition that the second track is matched with the third track comprises the following conditions: the second track and the third track comprise bayonet points which are completely the same;
if the second track exists, obtaining time information of other bayonet points in the third track based on the initial time information of the third track and the time information of the bayonet points in the second track;
and if the time information does not exist, shifting the time information of the bayonet point in the third track to obtain the time information of the next bayonet point.
12. The method of claim 10 or 11, wherein the condition that the second trajectory matches the third trajectory further comprises:
the start time information of the second track is the same as the start time information of the third track, and the start time information of the second track is the time information of the initial bayonet point in the second track.
13. The method of claim 7 or 8, further comprising:
and binding the third track with vehicle identification information based on the time information of the third track.
14. The method of claim 13, wherein binding the third track with vehicle identification information based on the time information of the third track comprises:
arranging the plurality of third tracks according to a time sequence to obtain a third track sequence;
and binding a plurality of pieces of prestored vehicle identification information with a third track in the third track sequence in turn.
15. The method of claim 13, wherein the binding the third track with vehicle identification information based on the time information of the third track comprises:
selecting the third track as a current third track;
selecting one piece of vehicle identification information for the current third track as current vehicle identification information;
judging whether the offset between the time information of the third track bound with the current vehicle identification information and the time information of the current third track is larger than a preset offset threshold value or not;
if so, binding the current third track with the current vehicle identification information;
if not, the step of selecting the vehicle identification information for the current third track is executed again; or,
the binding the third track with vehicle identification information based on the time information of the third track comprises:
selecting one piece of vehicle identification information as current vehicle identification information;
selecting one third track for the current vehicle identification information as a current third track;
judging whether the offset between the time information of the third track bound with the current vehicle identification information and the time information of the current third track is larger than a preset offset threshold value or not;
if so, binding the current vehicle identification information with the current third track;
and if not, re-executing the step of selecting the third track for the current vehicle identification information.
16. An electronic device comprising a processor and a memory coupled to the processor, wherein,
the memory stores program instructions;
the processor is configured to execute the program instructions stored by the memory to implement the method of any of claims 1-15.
17. A storage medium, characterized in that the storage medium stores program instructions which, when executed, implement the method of any one of claims 1-15.
CN202010673869.8A 2020-07-14 2020-07-14 Track generation method, electronic device and storage medium Active CN112000752B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010673869.8A CN112000752B (en) 2020-07-14 2020-07-14 Track generation method, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010673869.8A CN112000752B (en) 2020-07-14 2020-07-14 Track generation method, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN112000752A true CN112000752A (en) 2020-11-27
CN112000752B CN112000752B (en) 2024-07-12

Family

ID=73467573

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010673869.8A Active CN112000752B (en) 2020-07-14 2020-07-14 Track generation method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN112000752B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108022012A (en) * 2017-12-01 2018-05-11 兰州大学 Vehicle location Forecasting Methodology based on deep learning
CN108492562A (en) * 2018-04-12 2018-09-04 连云港杰瑞电子有限公司 Intersection vehicles trajectory reconstruction method based on fixed point detection with the alert data fusion of electricity
US20200126241A1 (en) * 2018-10-18 2020-04-23 Deepnorth Inc. Multi-Object Tracking using Online Metric Learning with Long Short-Term Memory
CN111091708A (en) * 2019-12-13 2020-05-01 中国科学院深圳先进技术研究院 Vehicle track prediction method and device

Also Published As

Publication number Publication date
CN112000752B (en) 2024-07-12

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant